diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Electrotechnique Industrielle Guy Seguier Pdf Download A Modern and Comprehensive Treatment of Industrial Electrical Technology.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Electrotechnique Industrielle Guy Seguier Pdf Download A Modern and Comprehensive Treatment of Industrial Electrical Technology.md deleted file mode 100644 index beae6c54dd31539971bac0207965f85d67871431..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Electrotechnique Industrielle Guy Seguier Pdf Download A Modern and Comprehensive Treatment of Industrial Electrical Technology.md +++ /dev/null @@ -1,104 +0,0 @@ -
-

Electrotechnique Industrielle Guy Seguier Pdf Download

-

Are you interested in learning more about industrial electrical engineering? Do you want to read a comprehensive and authoritative book on this subject? If so, you might want to download Electrotechnique Industrielle by Guy Seguier in PDF format. In this article, we will tell you what this book is about, who the author is, why it is important, and how you can get it for free. Let's get started!

-

What is Electrotechnique Industrielle?

-

Electrotechnique Industrielle is the French term for industrial electrical engineering. It is a branch of engineering that deals with the design, installation, operation, and maintenance of electrical systems and equipment used in industrial settings. Some of the topics covered by this field include:

-

Electrotechnique Industrielle Guy Seguier Pdf Download


Download File >>> https://byltly.com/2uKz9A



- -

Industrial electrical engineering is essential for the development and improvement of various industries, such as manufacturing, transportation, communication, energy, and more. It also contributes to the safety, efficiency, and sustainability of industrial processes and products.

-

Who is Guy Seguier?

-

Guy Seguier was a French engineer and professor who specialized in industrial electrical engineering. He was born in 1925 and died in 2013. He obtained his engineering degree from the Ecole Centrale de Paris in 1948 and his doctorate from the University of Paris in 1956. He worked as a research engineer at the French National Center for Scientific Research (CNRS) from 1949 to 1964. He then became a professor at the Ecole Nationale Supérieure d'Electricité et de Mécanique (ENSEM) in Nancy, where he taught until his retirement in 1990. He also served as the director of the Laboratory of Electrical Engineering and Industrial Electronics (LGEP) from 1970 to 1985.

-

Guy Seguier was a prolific author who wrote several books and articles on various aspects of industrial electrical engineering. He was also a respected expert who participated in many national and international committees and projects related to his field. He received several awards and honors for his contributions, such as the Grand Prix de l'Académie des Sciences in 1987 and the Legion of Honor in 1994.

-

Why is his book important?

-

One of his most famous books is Electrotechnique Industrielle, which he co-authored with Francis Notelet. This book was first published in 1977 by Technique et Documentation and has been revised and updated several times since then. The latest edition was published in 1994 by TEC et Doc and has 484 pages.

-

This book is considered to be one of the most comprehensive and authoritative references on industrial electrical engineering. It covers all the fundamental concepts and principles, as well as the practical applications and examples, of this field. It also includes many diagrams, tables, formulas, exercises, and solutions to help the readers understand and apply the theory. The book is written in a clear and concise style that makes it accessible to both students and professionals.

-

The book is divided into six parts:

-

Download Electrotechnique Industrielle by Guy Seguier in PDF format

-
    -
  1. Generalities: This part introduces the basic notions of electrical engineering, such as voltage, current, power, energy, resistance, capacitance, inductance, impedance, etc. (a short worked impedance example follows this list)
  2. -
  3. Electrical machines: This part covers the different types of electrical machines used in industrial settings, such as transformers, generators, motors, alternators, etc.
  4. -
  5. Power electronics: This part deals with the devices and circuits that convert and control electrical power, such as rectifiers, inverters, choppers, cycloconverters, etc.
  6. -
  7. Electrical networks: This part explains how electrical power is transmitted and distributed through various types of networks, such as AC or DC networks, single-phase or three-phase networks, balanced or unbalanced networks, etc.
  8. -
  9. Control and automation: This part describes how electrical systems are regulated and automated using various methods and tools, such as feedback control, PID control, state-space control, PLCs, SCADA systems, etc.
  10. -
  11. Renewable energy sources: This part discusses how electrical power can be generated from renewable sources, such as solar energy, wind energy, hydroelectric energy, biomass energy, etc.
  12. -
-
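
As a quick illustration of the kind of calculation Part 1 introduces, here is a short worked example: the impedance of a series RLC circuit at mains frequency. This sketch is not taken from the book; the component values, the 230 V supply, and the variable names are invented purely for illustration.

```python # Series RLC impedance at 50 Hz -- illustrative values only, not from the book. import cmath import math R = 10.0 # resistance in ohms L = 0.1 # inductance in henries C = 100e-6 # capacitance in farads f = 50.0 # supply frequency in hertz omega = 2 * math.pi * f # Complex impedance of the series branch: Z = R + j(omega*L - 1/(omega*C)) Z = complex(R, omega * L - 1 / (omega * C)) V = 230.0 # assumed RMS supply voltage in volts I = V / abs(Z) # RMS current in amperes phi = math.degrees(cmath.phase(Z)) # phase angle of the impedance print(f"|Z| = {abs(Z):.2f} ohm, I = {I:.2f} A, phase = {phi:.1f} deg") ``` 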

How to download his book in PDF format?

-

If you want to download Electrotechnique Industrielle by Guy Seguier in PDF format, you have several options:

- -

Conclusion

-

In conclusion, Electrotechnique Industrielle by Guy Seguier is a great book for anyone who wants to learn more about industrial electrical engineering. It covers all the essential topics, from theory to practice, in a clear and comprehensive way. It is suitable for both students and professionals who want to improve their knowledge and skills in this field. If you want to download this book in PDF format, you can either buy it online, borrow it from a library or a friend, or search for a free version on the internet. However, you should always be careful about the quality, the legality, and the security of the sources you use.

-

FAQs

-
    -
  1. What is industrial electrical engineering?
    -Industrial electrical engineering is a branch of engineering that deals with the design, installation, operation, and maintenance of electrical systems and equipment used in industrial settings.
  2. -
  3. Who is Guy Seguier?
    -Guy Seguier was a French engineer and professor who specialized in industrial electrical engineering. He wrote several books and articles on this subject, including Electrotechnique Industrielle.
  4. -
  5. Why is Electrotechnique Industrielle important?
    -Electrotechnique Industrielle is one of the most comprehensive and authoritative references on industrial electrical engineering. It covers all the fundamental concepts and principles, as well as the practical applications and examples, of this field.
  6. -
  7. How many pages does Electrotechnique Industrielle have?
    -Electrotechnique Industrielle has 484 pages in its latest edition published in 1994 by TEC et Doc.
  8. -
  9. How can I download Electrotechnique Industrielle in PDF format?
    -You can download Electrotechnique Industrielle in PDF format by buying it online, borrowing it from a library or a friend, or searching for a free version on the internet. However, you should always be careful about the quality, the legality, and the security of the sources you use.
  10. -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Cyberpunk delivers free upgrade to Xbox owners Everything you need to know.md b/spaces/1gistliPinn/ChatGPT4/Examples/Cyberpunk delivers free upgrade to Xbox owners Everything you need to know.md deleted file mode 100644 index 94f3864005f6a4737e8650f6512b8763371511e6..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Cyberpunk delivers free upgrade to Xbox owners Everything you need to know.md +++ /dev/null @@ -1,31 +0,0 @@ -
-

Cyberpunk 2077 on PS5 and Xbox Series X/S will be a free upgrade for everyone who purchased a copy of the respective PS4 and Xbox One editions. It was originally planned for release this year but was understandably delayed considering how bugged and broken the last-gen versions were at launch.

-

You can also upgrade to PS5 versions if you have a physical PS4 game, as long as you bought the PS5 with a disc drive. You'll always need to use the PS4 disc to play the PS5 version; upgrading doesn't get you a free digital copy of the game. You'll still download the PS5 update from the PSN, but you won't need a PS5-specific disc -- your PS4 one will become an authenticator.

-

Cyberpunk to give Xbox gamers a free upgrade


DOWNLOAD ★★★ https://imgfil.com/2uy0oH



-

Sony initially said 2022 exclusive Horizon Forbidden West wouldn't let you upgrade from the PS4 to the PS5 version for free unless you bought the more expensive Digital Deluxe, Collector's or Regalla Edition. It later reversed course, saying anyone who bought the PS4 version would be entitled to a free PS5 upgrade.

-

Patch 1.5 adds ray-traced local light shadows, smooth gameplay at 60fps with dynamic 4K scaling and DualSense haptic feedback to the game for PS5 and Xbox Series X gamers, as well as platform-agnostic improvements like "various improvements to the game, numerous quest, and gameplay fixes, as well as a number of free DLC."

-

It's worth noting that the Cyberpunk 2077 next-gen upgrade will be free if you already own the game on last-gen consoles. When originally confirming the Xbox Series X Cyberpunk 2077 upgrade, CD Projekt Red said (via Twitter (opens in new tab)) that "gamers should never be forced to purchase the same game twice or pay for upgrades," and we've seen nothing to indicate that's going to change.

-

"Earlier in the year we announced that if you pick up Cyberpunk 2077 on Xbox One you'll be able to play it on Xbox Series X when the console launches," the stream states. "If you pick up Cyberpunk 2077 on PS4, you'll also be able to play it on PS5 when the console launches. And that's not all. There will be a free upgrade for Xbox Series X and PS5, but we'll have more details on that soon."

-

CD Projekt Red announced via Twitter that it has an Xbox Series X upgrade of Cyberpunk 2077 in the works. It also said that when it's ready, gamers who already purchased the title for Xbox One will get it for free. "Gamers should never be forced to purchase the same game twice or pay for upgrades," the developer said. "Owners of Cyberpunk 2077 will receive the Xbox Series X upgrade for free when it is available."

-

Quick, everyone act surprised! CD Projekt Red has confirmed that Cyberpunk 2077's free Xbox Series X and Xbox Series S upgrade is available TODAY, and you can start downloading it right now. It clocks in at around a whopping 62GB.

-

"Xbox One players will be able to upgrade to the next-gen version of this completely original open-world survival adventure game for free. Xbox Series X users will be able to choose between 4K or Ray Tracing functions (Ray Tracing unavailable on Xbox Series S)."

-

I bought the Witcher 3 goty for a shockingly low £6.99 in anticipation of the upgrade...I completed the base game on release ...but being goty edition It gives me extra incentive because they are separate achievements aswell

-

-

Cue a lot of disgruntled customers that cannot access the shiny new version of the game on their new-gen consoles because they can't find the upgrade option on the PlayStation or Xbox storefronts in their region. For those affected, the upgrade is either locked or showing up a paid upgrade (when the new-gen versions should be free to anyone that already owns the game).

-

For players on Xbox Series X|S and PlayStation 5, Patch 1.5 marks the introduction of a dedicated next-gen version of the game featuring enhancements like dynamic 4K scaling and ray-tracing features on Xbox Series X and PlayStation 5, faster loading times, and better reflections, among others. All of this, fueled by the extra power of next-gen hardware, is available to owners of the PlayStation 4 and Xbox One version of Cyberpunk 2077 via a free upgrade.

-

Furthermore, this latest update also comes with new pieces of free additional content that expands what Cyberpunk 2077 has to offer gamers: rentable apartments featuring unique player character interactions, fresh weapons and gear, new customization options, and more.

-

But what happens when developers release a game for the Xbox One X? Well, the Smart Delivery feature means you can enjoy games like Cyberpunk 2077 on the Xbox One X, as well as a free upgrade to the Xbox Series X. Whether you have a physical or digital copy of the game, all you need to do is launch it on your Xbox One or Series X|S console, and the best version will be installed for you. When the optimized version is released, the backward-compatible version will automatically be replaced.

-

Tying in with the latest Xbox Series X details, Cyberpunk 2077 developer CD Projekt Red has confirmed that the game will be coming to next-gen systems -- in a way, at least. The gist of it is that if you buy Cyberpunk 2077 on Xbox One, you'll be able to upgrade the title for free on Xbox Series X. Based on the company's tweet, we assume that the same will apply to the PlayStation 4 version of the release once the PlayStation 5 hits later this year.

-

"Gamers should never be forced to purchase the same game twice or pay for upgrades," writes the official Cyberpunk 2077 Twitter account. "Owners of #Cyberpunk2077 for Xbox One will receive the Xbox Series X upgrade for free when available."

-

@3Above But you are comparing an upgrade from PS4 to PS5, to different versions on different platforms with the port being made by a different studio on the Switch. Of course Nintendo won't accept the game being given for free on their console since they didn't have a cut on other the platform's sale. Try to buy a game on Steam and ask GoG or epic store for a free key, I doubt it will work and this will have nothing to do with the developer.

-

This is entirely different since it will be the first time a console with an eShop will be backword compatible. So this offers a whole new range of possibilities for developers, and CDPR is the very first studio who is talking about free upgrade across console generations

-

CD Projekt Red has announced that gamers who own the Xbox One version of the highly-anticipated title Cyberpunk 2077 will receive the Xbox Series X upgrade for free when it becomes available. You can check out the Twitter announcement below!

-

Owners of The Witcher 3: Wild Hunt on PlayStation 4 and Xbox One will receive a free "next-gen" upgrade to the current-gen PS5 and Xbox X/S consoles in 2022. Fans have been awaiting the opportunity to see every detail of the grizzled White Wolf since the enhancement was first announced back in 2020. PC players do not have to worry, as the new features coming with the update will also hit he PC version. The enhanced edition of The Witcher 3 was scheduled to be released in the back half of 2021, then later delayed until the second quarter of 2022. Unfortunately, no word was given as to why this setback occurred, but the rough launch of Cyberpunk 2077 is a likely suspect.

-

The Witcher 3: Wild Hunt was released on May 18, 2015, and has received two expansions. Players were immediately drawn in by the vast open world, topped with stunning visuals and exciting combat. The game lives on in 2022 as fans continue to make mods for The Witcher 3. These fun changes add replayability, by enhancing Geralt's combat capabilities or altering characters in various ways. The game reached a new audience when players got their hands on the Nintendo Switch release, in October 2019. Currently, CD Projekt Red has yet to give an exact date for the next-gen upgrade to The Witcher 3.

-

The reason given for the new delay was that the decision was, "Based on recommendations supplied by teams supervising the development of both games." Most likely, CD Projekt Red does not want to repeat the disastrous launch of Cyberpunk 2077 and is making sure the upgrades are as polished as possible. Based on the details given for the new version, Witcher 3 fans will be able to experience the riveting open-world game like never before.

-

Based on reports, the next-generation upgrade may feature enhancements from one notable modder who goes by the name Halk Hogan. In an article by Kotaku, they reported on Halk's announcement that his creation of The Witcher 3 HD Reworked Project may be officially implemented in the new release. CD Projekt Red has not yet confirmed this collaboration, but Witcher 3 has gone through many changes since its launch, and given that Halk already made major improvements to the graphics of the base game, a prolific modder officially working with the developers could make for the best overall upgrade. Whether the collaboration happens or not, players can expect to enjoy The Witcher 3 at 60 FPS and 4K resolution for PC, Xbox Series X/S, and PlayStation 5 sometime in the second quarter of 2022.

-

As expected the PlayStation 5 and Xbox Series X|S upgrades for Cyberpunk 2077 have been announced and they are available to download today alongside a huge Patch 1.5 update! Hoping to convince people that the game is now ready for prime time, a free trial is also available, giving you the first five hours of the game.

-

While Microsoft is making aggressive moves to ensure buyers grab the upcoming Xbox Series X, Sony is sort of taking a "wait and see" approach with the PS5. This lackadaisical attitude is putting developer CD Projekt Red (the studio behind The Witcher and Cyberpunk 2077) in a bind as it can't confirm if its upcoming open-world RPG will be able to offer a free upgrade to PS5 users.

-

One of the key selling points of the Xbox One version of Cyberpunk is that Microsoft will be offering a free upgrade to the Series X version when its next console launches. This means that players don't have to wait around for a "better" version of the game. They can simply buy Cyberpunk on release, begin playing the title, then get all of the visual enhancements if they decide on upgrading their console.

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Drawings 6 Pro Crack REPACK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Drawings 6 Pro Crack REPACK.md deleted file mode 100644 index a3a5c9caed756e06d98f31a5c3982dc616318cce..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Drawings 6 Pro Crack REPACK.md +++ /dev/null @@ -1,32 +0,0 @@ -

drawings 6 pro crack


DOWNLOAD https://imgfil.com/2uxYEv



-
-In addition, for QuickEmbroidery product, there are "Text", "Complex", "Designer" and "Keyboards" modules as well. - -Wings 5 and Wings 6 are not compatible with each other. - -References - -External links - -Wings - -Category:Embroidery softwarePosts Tagged ‘clarion west’ - -Wow, it’s been quite a while since I’ve posted. I’m well aware that it’s been quite a while since I’ve posted. I should make sure to share some of the great work I’ve been doing as well. But, mostly what I want to share is a song that has been keeping me company during the last month or so of being on my own for the first time in about 8 years, staying in a place that had lots of room and was relatively quiet and still. - -My name is Ross Farrar and I’m the singer, songwriter, and guitarist for the trio Clarity West. We have been around for a few years now, but I’m only just starting to understand what we do a little more clearly as we begin to play more shows. This is my first time posting anything I’ve written. I hope you enjoy it and that you can find a way to come see us sometime. - -Shepard Fairey, in another powerful remix, gives us the track “Losing Tomorrow” from the self-titled debut album from Portland, Oregon’s Clarion West. In addition to the track that originally appeared on the record, this remix also features remixes by The Weeping Choir, P.O.S., and Fatty Gainz. - -We’ve been playing some of our stuff lately at the North Shore Music Festival in Chicago. Check out a couple of videos and see what we’ve been doing and what we’re about. Hope you enjoy and get to see us out on a big stage soon.Q: - -What kind of tax does a head of household pay? - -I'm currently working as a software engineer and planning to file as self-employed. My earnings are going to come from two sources: direct contract, and consulting/contracting. - -What I'm confused about is: - -I can't charge more than a standard rate set by my state, so a freelance engineer will 4fefd39f24
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Driver Webcam Bright Sn 21162510905.md b/spaces/1gistliPinn/ChatGPT4/Examples/Driver Webcam Bright Sn 21162510905.md deleted file mode 100644 index 0034812dc96ab729f274f3b4d7e6e0b9e5f6d53f..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Driver Webcam Bright Sn 21162510905.md +++ /dev/null @@ -1,6 +0,0 @@ -

driver webcam bright sn 21162510905


Download File ••• https://imgfil.com/2uxY0s



- -ESET.NOD32.OnDemand.Scanner.17.03.2.rar free download java e book of khalid mugal scjp1 6 driver webcam bright sn 21162510905.rar 1fdad05405
-
-
-

diff --git a/spaces/1line/AutoGPT/autogpt/memory/no_memory.py b/spaces/1line/AutoGPT/autogpt/memory/no_memory.py deleted file mode 100644 index 0371e96ae89f5eb88dae019a66351a229596ed7a..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/autogpt/memory/no_memory.py +++ /dev/null @@ -1,73 +0,0 @@ -"""A class that does not store any data. This is the default memory provider.""" -from __future__ import annotations - -from typing import Any - -from autogpt.memory.base import MemoryProviderSingleton - - -class NoMemory(MemoryProviderSingleton): - """ - A class that does not store any data. This is the default memory provider. - """ - - def __init__(self, cfg): - """ - Initializes the NoMemory provider. - - Args: - cfg: The config object. - - Returns: None - """ - pass - - def add(self, data: str) -> str: - """ - Adds a data point to the memory. No action is taken in NoMemory. - - Args: - data: The data to add. - - Returns: An empty string. - """ - return "" - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - - Returns: None - """ - return None - - def clear(self) -> str: - """ - Clears the memory. No action is taken in NoMemory. - - Returns: An empty string. - """ - return "" - - def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None: - """ - Returns all the data in the memory that is relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - num_relevant: The number of relevant data to return. - - Returns: None - """ - return None - - def get_stats(self): - """ - Returns: An empty dictionary as there are no stats in NoMemory. - """ - return {} diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Ball Pool Ultima Version APK The Most Realistic Pool Game Ever.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Ball Pool Ultima Version APK The Most Realistic Pool Game Ever.md deleted file mode 100644 index f1e911fa26db0d3789539dc332e57cc182e9b9c7..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8 Ball Pool Ultima Version APK The Most Realistic Pool Game Ever.md +++ /dev/null @@ -1,155 +0,0 @@ -
-

8 Ball Pool Ultima Version APK: Everything You Need to Know

-

If you are a fan of pool games, you might have heard of 8 Ball Pool, one of the most popular and addictive online multiplayer games for Android devices. But do you know what is 8 Ball Pool Ultima Version APK and why you should download it? In this article, we will tell you everything you need to know about this amazing game, how to play it, and what benefits it can bring to you.

-

8 ball pool ultima version apk


DOWNLOADhttps://urlin.us/2uT0yr



-

What is 8 Ball Pool?

-

A brief introduction to the game and its features

-

8 Ball Pool is a pool game developed by Miniclip.com that allows you to play with millions of players from around the world. You can choose from different game modes, such as 1-on-1 matches, tournaments, or practice mode. You can also customize your cue and pool table with various items that you can buy with coins or cash. Coins are the main currency of the game that you can earn by winning matches or spinning the wheel. Cash is the premium currency that you can use to buy exclusive cues, chat packs, or mini-games.

-

The difference between 8 Ball Pool and other pool games

-

Unlike other pool games that follow different rules and formats, 8 Ball Pool is based on the American style of eight-ball pool. This means that there are 15 object balls on the table, divided into two groups: solids (numbered 1-7) and stripes (numbered 9-15). The goal of the game is to pocket all the balls from your assigned group (either solids or stripes) and then pocket the black 8 ball in a called pocket. You have to do this before your opponent does or before you commit a foul. A foul occurs when you fail to hit any ball with your cue ball, hit the wrong ball first, pocket the cue ball or the 8 ball prematurely, or scratch (pocket the cue ball after hitting another ball).

-

What is 8 Ball Pool Ultima Version APK?

-

A description of the latest version of the game and its benefits

-

8 Ball Pool Ultima Version APK is a modified version of the original game that offers some extra features and advantages. Some of these features are:

-

8 ball pool latest version download apk
 

- -

8 Ball Pool Ultima Version APK is updated regularly to match the latest version of the original game, so you don't have to worry about missing out on any new content or updates. You can also play the game on any Android device, regardless of the model or specifications.

-

How to download and install the APK file on your device

-

Downloading and installing 8 Ball Pool Ultima Version APK is very easy and simple. Just follow these steps:

-
    -
  1. Go to [this link] and click on the download button to get the APK file.
  2. -
  3. Once the download is complete, go to your device settings and enable the option to install apps from unknown sources.
  4. -
  5. Locate the APK file in your device storage and tap on it to start the installation process.
  6. -
  7. Follow the instructions on the screen and wait for the installation to finish.
  8. -
  9. Launch the game and enjoy playing 8 Ball Pool Ultima Version APK with unlimited coins and cash.
  10. -
-
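
If you would rather install from a computer instead of tapping through the phone's UI, the same APK can be sideloaded over USB with adb. The sketch below assumes the Android platform tools are installed and USB debugging is enabled on the device; the file name is only a placeholder for whatever file you actually downloaded.

```python # Hypothetical sideload helper -- assumes `adb` is on the PATH and a device is connected. import subprocess apk_path = "8-ball-pool-ultima.apk" # placeholder name for the downloaded file # `adb install -r` installs the package, replacing any existing installation. result = subprocess.run(["adb", "install", "-r", apk_path], capture_output=True, text=True) print(result.stdout or result.stderr) ``` 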

How to Play 8 Ball Pool Ultima Version APK?

-

A step-by-step guide on how to start a game and choose a table

-

Playing 8 Ball Pool Ultima Version APK is very similar to playing the original game. Here is how you can start a game and choose a table:

-
    -
  1. Open the game and sign in with your Facebook account or Miniclip ID. You can also play as a guest if you don't have an account.
  2. -
  3. Select the game mode you want to play. You can choose from 1-on-1 matches, tournaments, or practice mode.
  4. -
  5. Select the table you want to play on. You can choose from different locations, such as London, Sydney, Moscow, Tokyo, Las Vegas, etc. Each location has a different entry fee and prize pool.
  6. -
  7. Select your cue and pool table from the shop. You can use coins or cash to buy different cues and tables with different attributes, such as power, aim, spin, time, etc.
  8. -
  9. Tap on the play button and wait for an opponent to join. You can also invite your friends to play with you by tapping on the friends button.
  10. -
-

Some tips and tricks to improve your skills and win more coins

-

If you want to become a better player and win more coins in 8 Ball Pool Ultima Version APK, here are some tips and tricks that you should keep in mind:

- -

Why You Should Play 8 Ball Pool Ultima Version APK?

-

A list of the advantages of playing this game for your mental and physical health

-

Playing 8 Ball Pool Ultima Version APK is not only fun but also beneficial for your mental and physical health. Here are some of the advantages of playing this game:

- -

A table comparing 8 Ball Pool Ultima Version APK with other pool games

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Conclusion

-

In conclusion, 8 Ball Pool Ultima Version APK is a fantastic game that you should definitely try if you love pool games. It offers you unlimited coins and cash, all cues and tables unlocked, all game modes and features accessible, all players and locations available, no ads or pop-ups, frequent updates and content, high compatibility and performance, and many more benefits. It also improves your concentration, focus, hand-eye coordination, motor skills, stress relief, brain stimulation, memory function, social skills, confidence, etc. So what are you waiting for? Download 8 Ball Pool Ultima Version APK today and enjoy playing the best pool game ever!

-

FAQs

-

Q1: Is 8 Ball Pool Ultima Version APK safe to download?

-

A1: Yes, 8 Ball Pool Ultima Version APK is safe to download. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source like [this link] to avoid any fake or corrupted files.

-

Q2: Can I play 8 Ball Pool Ultima Version APK offline?

-

A2: No, 8 Ball Pool Ultima Version APK is an online game that requires an internet connection to play. You cannot play it offline or without wifi. However, you can play it on any network speed or quality without any lag or connection issues.

-

Q3: How can I customize my cue and pool table in 8 Ball Pool Ultima Version APK?

-

A3: You can customize your cue and pool table in 8 Ball Pool Ultima Version APK by going to the shop section of the game. There you can find a variety of cues and tables with different designs, colors, patterns, attributes, etc. You can buy them with coins or cash that you have unlimited in this version of the game. You can also change your cue or table anytime during the game by tapping on the gear icon on the top right corner of the screen.

-

Q4: How can I challenge my friends in 8 Ball Pool Ultima Version APK?

-

A4: You can challenge your friends in 8 Ball Pool Ultima Version APK by tapping on the friends button on the bottom left corner of the screen. There you can see a list of your Facebook friends or Miniclip friends who are online or offline. You can also search for a friend by their name or ID. To challenge a friend, just tap on their name and select the table you want to play on. You can also chat with them before or during the game by tapping on the chat button on the top left corner of the screen.

-

Q5: How can I get more coins and cash in 8 Ball Pool Ultima Version APK?

-

A5: You don't need to worry about getting more coins and cash in 8 Ball Pool Ultima Version APK because you have unlimited amounts of them in this version of the game. You can use them to buy anything you want from the shop, play any game mode or feature, or enter any tournament. However, if you want to earn more coins and cash in the original game, you can do so by winning matches, spinning the wheel, playing mini-games, watching videos, completing offers, or inviting friends.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Disney 39s Aladdin 1994 Video Game Apk.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Disney 39s Aladdin 1994 Video Game Apk.md deleted file mode 100644 index b4780082008b28e922962920931d9d080cb08c19..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Disney 39s Aladdin 1994 Video Game Apk.md +++ /dev/null @@ -1,74 +0,0 @@ - -

Disney's Aladdin 1994 Video Game APK: A Retro Classic on Your Smartphone

-

Introduction

-

If you grew up in the 90s, chances are you have fond memories of playing Disney's Aladdin video game on your Sega Genesis, Game Gear, or Master System. Based on the animated film of the same name, this side-scrolling platformer was one of the best-selling and most acclaimed games of its time. It featured stunning graphics, catchy music, and addictive gameplay that captured the magic and adventure of the movie.

-

But what if you want to relive those memories on your smartphone? Is there a way to play Disney's Aladdin 1994 video game on your Android or iOS device? The answer is yes, thanks to a special APK file that allows you to run the game on your phone or tablet. In this article, we will tell you everything you need to know about Disney's Aladdin 1994 video game APK, including its features, how to download and install it, and some tips and tricks to optimize your experience.

-

disney 39;s aladdin 1994 video game apk


DOWNLOADhttps://urlin.us/2uSY6I



-

Features of Disney's Aladdin 1994 video game

-

Gameplay and controls

-

Disney's Aladdin 1994 video game is a side-scrolling platformer in which you control Aladdin, the street-smart hero who falls in love with Princess Jasmine. You have to navigate through various levels inspired by the movie, such as the streets of Agrabah, the Cave of Wonders, and the Sultan's Palace. Along the way, you have to avoid enemies and obstacles, collect gems and apples, and use your scimitar and throwing skills to defeat foes.

-

The game has two difficulty settings: normal and hard. The normal mode has six levels, while the hard mode has seven levels. The hard mode also has more enemies, traps, and hazards. The game also has a password system that allows you to resume your progress from any level.

-

The controls are simple and intuitive. You can use the virtual buttons on the screen or tilt your device to move Aladdin left or right. You can also swipe up or down to jump or crouch. To attack with your scimitar, tap the sword button. To throw an apple, tap the apple button. You can also use the magic lamp button to summon Genie for help in certain situations.

-

Graphics and sound

-

One of the most impressive aspects of Disney's Aladdin 1994 video game is its graphics. The game features colorful and detailed sprites and backgrounds that faithfully recreate the look and feel of the movie. The animations are smooth and fluid, and the characters have expressive facial expressions. The game also has some cinematic cutscenes that tell the story between levels.

-

The sound is equally impressive. The game features a high-quality soundtrack that includes songs from the movie, such as "A Whole New World", "Prince Ali", and "Friend Like Me". The sound effects are also realistic and immersive, such as the clashing of swords, the roaring of tigers, and the cheering of crowds.

-

-

Levels and challenges

-

Disney's Aladdin 1994 video game has a variety of levels that offer different challenges and surprises. Some of the levels are:

- -

Each level has its own challenges and secrets, such as hidden items, bonus stages, and mini-games. For example, in the Cave of Wonders, you can find a magic flute that lets you play a snake-charming mini-game. In the Escape, you can find a magic carpet that lets you play a flying mini-game. In the Rooftops, you can find a scarab that lets you enter a bonus stage where you can collect extra lives and gems.

-

Bonus content and secrets

-

Disney's Aladdin 1994 video game also has some bonus content and secrets that add more fun and replay value to the game. Some of them are:

- -

How to download and install Disney's Aladdin 1994 video game APK

-

Requirements and compatibility

-

To download and install Disney's Aladdin 1994 video game APK, you need to have an Android or iOS device that meets the following requirements:

-
 | Features | 8 Ball Pool Ultima Version APK | Other Pool Games | | --- | --- | --- | | Coins and Cash | Unlimited | Limited | | Cues and Tables | All Unlocked | Some Locked | | Game Modes and Features | All Accessible | Some Restricted | | Players and Locations | All Available | Some Unavailable | | Ads and Pop-ups | None | Some | | Updates and Content | Frequent | Infrequent | | Compatibility and Performance | High | Low | 
- - - - - - - - - - - - -
 | Operating system | Version | | --- | --- | | Android | 4.0 or higher | | iOS | 8.0 or higher | 
-

You also need to have enough storage space on your device to install the APK file, which is about 50 MB in size.

-

Steps to download and install

-

To download and install Disney's Aladdin 1994 video game APK, follow these steps:

-
    -
  1. Go to the official website of Disney's Aladdin 1994 video game APK (link here) and click on the download button.
  2. -
  3. Wait for the download to finish and locate the APK file on your device.
  4. -
  5. If you are using an Android device, you need to enable the installation of apps from unknown sources. To do this, go to Settings > Security > Unknown Sources and toggle it on.
  6. -
  7. If you are using an iOS device, you need to trust the developer of the app. To do this, go to Settings > General > Device Management > Trust Developer Name and tap on Trust.
  8. -
  9. Tap on the APK file and follow the instructions to install it on your device.
  10. -
  11. Launch the app and enjoy playing Disney's Aladdin 1994 video game on your smartphone.
  12. -
-

Tips and tricks to optimize your experience

-

To optimize your experience while playing Disney's Aladdin 1994 video game on your smartphone, here are some tips and tricks:

- - 197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Sniper 3D Mod Apk and Unlock Unlimited Money and Diamonds.md b/spaces/1phancelerku/anime-remove-background/Download Sniper 3D Mod Apk and Unlock Unlimited Money and Diamonds.md deleted file mode 100644 index b82b586dd587653c7d29b19e0edfa16954a942f1..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Sniper 3D Mod Apk and Unlock Unlimited Money and Diamonds.md +++ /dev/null @@ -1,99 +0,0 @@ -
-

Sniper 3D Mod APK Unlimited Money and Diamonds 2021: A Review

-

If you are a fan of shooting games and want to experience the thrill of being a professional sniper, then you should try Sniper 3D Mod APK. This is a modified version of the popular game Sniper 3D, which gives you access to unlimited money and diamonds, as well as all the weapons and upgrades in the game. In this article, we will review the features, benefits, and drawbacks of Sniper 3D Mod APK, as well as provide some tips and tricks on how to play it.

-

What is Sniper 3D?

-

Sniper 3D is a free-to-play action game developed by Fun Games For Free. It is available for Android and iOS devices, as well as Windows and Mac computers. The game puts you in the role of a deadly assassin who has to complete various missions around the world. You can choose from over 180+ authentic weapons, customize them with different attachments, and upgrade them to improve their performance. You can also play in different modes, such as offline missions, online PvP battles, squad wars, and special ops.

-

sniper 3d mod apk unlimited money and diamonds 2021


Downloadhttps://jinyurl.com/2uNN12



-

Main features of Sniper 3D

-

Some of the main features of Sniper 3D are :

- -

How to download and install Sniper 3D Mod APK?

-

If you want to enjoy the benefits of Sniper 3D Mod APK, you will need to download and install it on your device. Here are the steps to do so:

-
    -
  1. Go to a trusted website that offers Sniper 3D Mod APK download link, such as [APK TRIGGER](^1^) or [GoogleModAPK](^2^).
  2. -
  3. Click on the download button and wait for the file to be downloaded.
  4. -
  5. Once the file is downloaded, go to your device settings and enable unknown sources installation.
  6. -
  7. Locate the downloaded file in your file manager and tap on it to start the installation process.
  8. -
  9. Follow the instructions on the screen and wait for the installation to finish.
  10. -
  11. Launch the game and enjoy Sniper 3D Mod APK unlimited money and diamonds 2021!
  12. -
-

Why use Sniper 3D Mod APK?

-

Sniper 3D Mod APK is a modified version of the original game that gives you some extra advantages and features that are not available in the official version. Here are some of the reasons why you should use Sniper 3D Mod APK:

-

Unlimited money and diamonds

-

One of the main benefits of Sniper 3D Mod APK is that it gives you unlimited money and diamonds, which are the two main currencies in the game. You can use them to buy new weapons, upgrade them, unlock new skins, and more. You don't have to worry about running out of money or diamonds, or spending real money to get them. With Sniper 3D Mod APK, you can enjoy the game without any limitations.

-

Unlock all weapons and upgrades

-

Another benefit of Sniper 3D Mod APK is that it unlocks all the weapons and upgrades in the game. You can access over 180+ authentic weapons, from sniper rifles to assault rifles, and customize them with different attachments and scopes. You can also upgrade your weapons to increase their damage, accuracy, range, stability, and more. You don't have to complete missions or level up to unlock them. With Sniper 3D Mod APK, you can have the best weapons in the game at your disposal.

-

sniper 3d hack apk download free coins and gems 2021
 

-

Enjoy offline and online modes

-

A third benefit of Sniper 3D Mod APK is that it allows you to enjoy both offline and online modes of the game. You can play offline missions without internet connection, or join online PvP battles and squad wars with other players around the world. You can also play special ops missions that require teamwork and strategy. You don't have to choose between offline and online modes. With Sniper 3D Mod APK, you can have the best of both worlds.

-

Tips and tricks for Sniper 3D Mod APK

-

Sniper 3D Mod APK is a fun and addictive game that will test your skills as a sniper. However, it can also be challenging and frustrating at times. Here are some tips and tricks that will help you improve your gameplay and become a master shooter:

-

Aim for headshots and moving targets

-

One of the most important tips for Sniper 3D Mod APK is to aim for headshots and moving targets. Headshots will deal more damage and earn you more points than body shots. Moving targets will also give you more points than stationary ones. However, they are also harder to hit, so you need to be patient and precise. Use your scope to zoom in on your target, wait for the right moment, and pull the trigger. Don't forget to account for wind direction and bullet drop as well.

-

Choose the right weapon for each mission

-

Another tip for Sniper 3D Mod APK is to choose the right weapon for each mission. Different missions will require different weapons, depending on the distance, environment, number of enemies, and other factors. For example, if you need to shoot from a long range, you should use a sniper rifle with a high magnification scope. If you need to shoot in a crowded area, you should use an assault rifle with a silencer. If you need to shoot in a dark place, you should use a weapon with a night vision scope. You can switch between different weapons before starting each mission.

-

Use the environment to your advantage

-

A third tip for Sniper 3D Mod APK is to use the environment to your advantage. The game features various locations with different elements that can help or hinder your shooting. For example, you can use buildings, cars, barrels, crates, and other objects as cover or distractions. You can also shoot explosive objects to cause chain reactions and eliminate multiple enemies at once. You can also shoot glass windows, lights, cameras, alarms, and other devices to create noise or confusion. Be creative and observant when using the environment.

-

Conclusion

-

Sniper 3D Mod APK is a great game for anyone who loves shooting games and wants to experience the thrill of being a professional sniper. It offers unlimited money and diamonds, as well as all the weapons and upgrades in the game. It also allows you to enjoy both offline and online modes of the game. However, it also requires skill, patience, precision, and strategy to complete various missions and challenges. If you follow our tips and tricks, you will be able to improve your gameplay and become a master shooter.

- FAQs -

Here are some of the frequently asked questions about Sniper 3D Mod APK:

- - - - - - - - - - - - - - - - - - - - - - - - - -
 Q1: Is Sniper 3D Mod APK safe to use? A1: Sniper 3D Mod APK is generally safe to use, as long as you download it from a trusted website and scan it with an antivirus program. However, you should be aware that using a modded version of the game may violate the terms and conditions of the original game, and may result in your account being banned or suspended. Therefore, you should use Sniper 3D Mod APK at your own risk. Q2: Can I play Sniper 3D Mod APK with my friends? A2: Yes, you can play Sniper 3D Mod APK with your friends, as long as they also have the same version of the game installed on their devices. You can join online PvP battles and squad wars with your friends, or compete against them in leaderboards and rankings. You can also chat with them in the game and share your achievements and tips. Q3: What are the minimum requirements for Sniper 3D Mod APK? A3: The minimum requirements are Android 4.4 or higher, 2 GB of RAM or more, about 100 MB of free storage, and an internet connection for the online modes. Q4: How can I update Sniper 3D Mod APK? A4: To update Sniper 3D Mod APK, you will need to download and install the latest version of the game from the same website that you downloaded it from. You may also need to uninstall the previous version of the game before installing the new one. However, you should be careful when updating, as some updates may not be compatible with the modded version of the game, and may cause errors or crashes. Q5: Where can I get more information about Sniper 3D Mod APK? A5: You can visit the official website of the original game at [Sniper 3D], or follow their social media accounts on [Facebook], [Twitter], [Instagram], and [YouTube]. You can also check out some online forums and blogs that discuss Sniper 3D Mod APK, such as [Reddit] and [Quora]. 

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/4Taps/SadTalker/modules/gfpgan_inference.py b/spaces/4Taps/SadTalker/modules/gfpgan_inference.py deleted file mode 100644 index f4e7dc80eac012906b797843aa6019c2c4a39b3b..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/modules/gfpgan_inference.py +++ /dev/null @@ -1,36 +0,0 @@ -import os,sys - -def gfpgan(scale, origin_mp4_path): - current_code_path = sys.argv[0] - current_root_path = os.path.split(current_code_path)[0] - print(current_root_path) - gfpgan_code_path = current_root_path+'/repositories/GFPGAN/inference_gfpgan.py' - print(gfpgan_code_path) - - #video2pic - result_dir = os.path.split(origin_mp4_path)[0] - video_name = os.path.split(origin_mp4_path)[1] - video_name = video_name.split('.')[0] - print(video_name) - str_scale = str(scale).replace('.', '_') - output_mp4_path = os.path.join(result_dir, video_name+'##'+str_scale+'.mp4') - temp_output_mp4_path = os.path.join(result_dir, 'temp_'+video_name+'##'+str_scale+'.mp4') - - audio_name = video_name.split('##')[-1] - audio_path = os.path.join(result_dir, audio_name+'.wav') - temp_pic_dir1 = os.path.join(result_dir, video_name) - temp_pic_dir2 = os.path.join(result_dir, video_name+'##'+str_scale) - os.makedirs(temp_pic_dir1, exist_ok=True) - os.makedirs(temp_pic_dir2, exist_ok=True) - cmd1 = 'ffmpeg -i \"{}\" -start_number 0 \"{}\"/%06d.png -loglevel error -y'.format(origin_mp4_path, temp_pic_dir1) - os.system(cmd1) - cmd2 = f'python {gfpgan_code_path} -i {temp_pic_dir1} -o {temp_pic_dir2} -s {scale}' - os.system(cmd2) - cmd3 = f'ffmpeg -r 25 -f image2 -i {temp_pic_dir2}/%06d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p {temp_output_mp4_path}' - os.system(cmd3) - cmd4 = f'ffmpeg -y -i {temp_output_mp4_path} -i {audio_path} -vcodec copy {output_mp4_path}' - os.system(cmd4) - #shutil.rmtree(temp_pic_dir1) - #shutil.rmtree(temp_pic_dir2) - - return output_mp4_path diff --git a/spaces/801artistry/RVC801/demucs/__init__.py b/spaces/801artistry/RVC801/demucs/__init__.py deleted file mode 100644 index d4182e356427e1b05a79f8da641c70bb732514fa..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/demucs/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -__version__ = "2.0.3" diff --git a/spaces/A00001/bingothoo/src/lib/bots/bing/tts.ts b/spaces/A00001/bingothoo/src/lib/bots/bing/tts.ts deleted file mode 100644 index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/src/lib/bots/bing/tts.ts +++ /dev/null @@ -1,82 +0,0 @@ -import { sleep } from './utils' - -const synth = window.speechSynthesis - -export class TTS { - currentText = '' - speakText = '' - private controller = new AbortController() - speaking = false - get isSpeaking() { - return this.speaking - } - finished = false - constructor() {} - abort = () => { - this.controller.abort() - } - - reset = () => { - this.speaking = false - this.finished = true - this.currentText = '' - this.speakText = '' - this.abort() - } - - speak = (text: string) => { - if (!synth || text?.trim()?.length < 2) { - return - } - this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '') - this.finished = false - this.loop() - } - - private async doSpeek() { - return new Promise((resolve) => { - const endIndex = this.finished ? 
this.currentText.length : - Math.max( - this.currentText.lastIndexOf('。'), - this.currentText.lastIndexOf(';'), - this.currentText.lastIndexOf('、'), - this.currentText.lastIndexOf('?'), - this.currentText.lastIndexOf('\n') - ) - const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0 - - if (startIndex >= endIndex) { - return resolve(true) - } - const text = this.currentText.slice(startIndex, endIndex) - this.speakText = text - const utterThis = new SpeechSynthesisUtterance(text) - this.controller.signal.onabort = () => { - synth.cancel() - this.finished = true - resolve(false) - } - - utterThis.onend = function (event) { - resolve(true) - } - - utterThis.onerror = function (event) { - resolve(false) - } - - const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null - utterThis.voice = voice - synth.speak(utterThis) - }) - } - - private async loop() { - if (this.speaking) return - this.speaking = true - while(!this.finished) { - await Promise.all([sleep(1000), this.doSpeek()]) - } - this.speaking = false - } -} diff --git a/spaces/AIConsultant/MusicGen/tests/data/__init__.py b/spaces/AIConsultant/MusicGen/tests/data/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/tests/data/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/configs/tts/emotion/pre_align.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/configs/tts/emotion/pre_align.py deleted file mode 100644 index 3b625295a118845c01a3677004070714d11c162b..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/configs/tts/emotion/pre_align.py +++ /dev/null @@ -1,25 +0,0 @@ -import os - -from data_gen.tts.base_preprocess import BasePreprocessor -import glob -import re - -class EmoPreAlign(BasePreprocessor): - - def meta_data(self): - spks = ['0012', '0011', '0013', '0014', '0015', '0016', '0017', '0018', '0019', '0020'] - pattern = re.compile('[\t\n ]+') - for spk in spks: - for line in open(f"{self.raw_data_dir}/{spk}/{spk}.txt", 'r'): # 打开文件 - line = re.sub(pattern, ' ', line) - if line == ' ': continue - split_ = line.split(' ') - txt = ' '.join(split_[1: -2]) - item_name = split_[0] - emotion = split_[-2] - wav_fn = f'{self.raw_data_dir}/{spk}/{emotion}/{item_name}.wav' - yield item_name, wav_fn, txt, spk, emotion - - -if __name__ == "__main__": - EmoPreAlign().process() diff --git a/spaces/AIGC-Audio/AudioGPT/audio_detection/target_sound_detection/src/utils.py b/spaces/AIGC-Audio/AudioGPT/audio_detection/target_sound_detection/src/utils.py deleted file mode 100644 index cf1deeaef4e51fcc7cc42f4f3e2d9a34296371f9..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/audio_detection/target_sound_detection/src/utils.py +++ /dev/null @@ -1,353 +0,0 @@ -# !/usr/bin/env python -# -*- coding: utf-8 -*- -# @Time : 2021/3/9 16:33 -# @Author : dongchao yang -# @File : train.py - -import collections -import sys -from loguru import logger -from pprint import pformat - -import numpy as np -import pandas as pd -import scipy -import six -import sklearn.preprocessing as pre -import torch -import tqdm -import yaml - -from scipy.interpolate import interp1d - 
-def parse_config_or_kwargs(config_file, **kwargs): - """parse_config_or_kwargs - :param config_file: Config file that has parameters, yaml format - :param **kwargs: Other alternative parameters or overwrites for config - """ - with open(config_file) as con_read: - yaml_config = yaml.load(con_read, Loader=yaml.FullLoader) - arguments = dict(yaml_config, **kwargs) - return arguments - - -def find_contiguous_regions(activity_array): # in this part, if you cannot understand the binary operation, I think you can write a O(n) complexity method - """Find contiguous regions from bool valued numpy.array. - Copy of https://dcase-repo.github.io/dcase_util/_modules/dcase_util/data/decisions.html#DecisionEncoder - Reason is: - 1. This does not belong to a class necessarily - 2. Import DecisionEncoder requires sndfile over some other imports..which causes some problems on clusters - """ - change_indices = np.logical_xor(activity_array[1:], activity_array[:-1]).nonzero()[0] - change_indices += 1 - if activity_array[0]: - # If the first element of activity_array is True add 0 at the beginning - change_indices = np.r_[0, change_indices] - - if activity_array[-1]: - # If the last element of activity_array is True, add the length of the array - change_indices = np.r_[change_indices, activity_array.size] - # print(change_indices.reshape((-1, 2))) - # Reshape the result into two columns - return change_indices.reshape((-1, 2)) - - -def split_train_cv( - data_frame: pd.DataFrame, - frac: float = 0.9, - y=None, # Only for stratified, computes necessary split - **kwargs): - """split_train_cv - - :param data_frame: - :type data_frame: pd.DataFrame - :param frac: - :type frac: float - """ - if kwargs.get('mode', - None) == 'urbansed': # Filenames are DATA_-1 DATA_-2 etc - data_frame.loc[:, 'id'] = data_frame.groupby( - data_frame['filename'].str.split('_').apply( - lambda x: '_'.join(x[:-1]))).ngroup() - sampler = np.random.permutation(data_frame['id'].nunique()) - num_train = int(frac * len(sampler)) - train_indexes = sampler[:num_train] - cv_indexes = sampler[num_train:] - train_data = data_frame[data_frame['id'].isin(train_indexes)] - cv_data = data_frame[data_frame['id'].isin(cv_indexes)] - del train_data['id'] - del cv_data['id'] - elif kwargs.get('mode', None) == 'stratified': # stratified --> 分层的 ? - # Use statified sampling - from skmultilearn.model_selection import iterative_train_test_split - index_train, _, index_cv, _ = iterative_train_test_split( - data_frame.index.values.reshape(-1, 1), y, test_size=1. 
- frac) - train_data = data_frame[data_frame.index.isin(index_train.squeeze())] - cv_data = data_frame[data_frame.index.isin(index_cv.squeeze())] # cv --> cross validation - else: - # Simply split train_test - train_data = data_frame.sample(frac=frac, random_state=10) - cv_data = data_frame[~data_frame.index.isin(train_data.index)] - return train_data, cv_data - - - -def pprint_dict(in_dict, outputfun=sys.stdout.write, formatter='yaml'): # print yaml file - """pprint_dict - :param outputfun: function to use, defaults to sys.stdout - :param in_dict: dict to print - """ - if formatter == 'yaml': - format_fun = yaml.dump - elif formatter == 'pretty': - format_fun = pformat - for line in format_fun(in_dict).split('\n'): - outputfun(line) - - -def getfile_outlogger(outputfile): - log_format = "[{time:YYYY-MM-DD HH:mm:ss}] {message}" - logger.configure(handlers=[{"sink": sys.stderr, "format": log_format}]) - if outputfile: - logger.add(outputfile, enqueue=True, format=log_format) - return logger - -# according label, get encoder -def train_labelencoder(labels: pd.Series, sparse=True): - """encode_labels - - Encodes labels - - :param labels: pd.Series representing the raw labels e.g., Speech, Water - :param encoder (optional): Encoder already fitted - returns encoded labels (many hot) and the encoder - """ - assert isinstance(labels, pd.Series), "Labels need to be series" - if isinstance(labels[0], six.string_types): - # In case of using non processed strings, e.g., Vaccum, Speech - label_array = labels.str.split(',').values.tolist() # split label according to ',' - elif isinstance(labels[0], np.ndarray): - # Encoder does not like to see numpy array - label_array = [lab.tolist() for lab in labels] - elif isinstance(labels[0], collections.Iterable): - label_array = labels - encoder = pre.MultiLabelBinarizer(sparse_output=sparse) - encoder.fit(label_array) - return encoder - - -def encode_labels(labels: pd.Series, encoder=None, sparse=True): - """encode_labels - - Encodes labels - - :param labels: pd.Series representing the raw labels e.g., Speech, Water - :param encoder (optional): Encoder already fitted - returns encoded labels (many hot) and the encoder - """ - assert isinstance(labels, pd.Series), "Labels need to be series" - instance = labels.iloc[0] - if isinstance(instance, six.string_types): - # In case of using non processed strings, e.g., Vaccum, Speech - label_array = labels.str.split(',').values.tolist() - elif isinstance(instance, np.ndarray): - # Encoder does not like to see numpy array - label_array = [lab.tolist() for lab in labels] - elif isinstance(instance, collections.Iterable): - label_array = labels - # get label_array, it is a list ,contain a lot of label, this label are string type - if not encoder: - encoder = pre.MultiLabelBinarizer(sparse_output=sparse) # if we encoder is None, we should init a encoder firstly. - encoder.fit(label_array) - labels_encoded = encoder.transform(label_array) # transform string to digit - return labels_encoded, encoder - - # return pd.arrays.SparseArray( - # [row.toarray().ravel() for row in labels_encoded]), encoder - - -def decode_with_timestamps(events,labels: np.array): - """decode_with_timestamps - Decodes the predicted label array (2d) into a list of - [(Labelname, onset, offset), ...] 
- - :param encoder: Encoder during training - :type encoder: pre.MultiLabelBinarizer - :param labels: n-dim array - :type labels: np.array - """ - # print('events ',events) - # print('labels ',labels.shape) - #assert 1==2 - if labels.ndim == 2: - #print('...') - return [_decode_with_timestamps(events[i],labels[i]) for i in range(labels.shape[0])] - else: - return _decode_with_timestamps(events,labels) - - -def median_filter(x, window_size, threshold=0.5): - """median_filter - :param x: input prediction array of shape (B, T, C) or (B, T). - Input is a sequence of probabilities 0 <= x <= 1 - :param window_size: An integer to use - :param threshold: Binary thresholding threshold - """ - x = binarize(x, threshold=threshold) # transfer to 0 or 1 - if x.ndim == 3: - size = (1, window_size, 1) - elif x.ndim == 2 and x.shape[0] == 1: - # Assume input is class-specific median filtering - # E.g, Batch x Time [1, 501] - size = (1, window_size) - elif x.ndim == 2 and x.shape[0] > 1: - # Assume input is standard median pooling, class-independent - # E.g., Time x Class [501, 10] - size = (window_size, 1) - return scipy.ndimage.median_filter(x, size=size) - - -def _decode_with_timestamps(events,labels): - result_labels = [] - # print('.......') - # print('labels ',labels.shape) - # print(labels) - change_indices = find_contiguous_regions(labels) - # print(change_indices) - # assert 1==2 - for row in change_indices: - result_labels.append((events,row[0], row[1])) - return result_labels - -def inverse_transform_labels(encoder, pred): - if pred.ndim == 3: - return [encoder.inverse_transform(x) for x in pred] - else: - return encoder.inverse_transform(pred) - - -def binarize(pred, threshold=0.5): - # Batch_wise - if pred.ndim == 3: - return np.array( - [pre.binarize(sub, threshold=threshold) for sub in pred]) - else: - return pre.binarize(pred, threshold=threshold) - - -def double_threshold(x, high_thres, low_thres, n_connect=1): - """double_threshold - Helper function to calculate double threshold for n-dim arrays - - :param x: input array - :param high_thres: high threshold value - :param low_thres: Low threshold value - :param n_connect: Distance of <= n clusters will be merged - """ - assert x.ndim <= 3, "Whoops something went wrong with the input ({}), check if its <= 3 dims".format( - x.shape) - if x.ndim == 3: - apply_dim = 1 - elif x.ndim < 3: - apply_dim = 0 - # x is assumed to be 3d: (batch, time, dim) - # Assumed to be 2d : (time, dim) - # Assumed to be 1d : (time) - # time axis is therefore at 1 for 3d and 0 for 2d ( - return np.apply_along_axis(lambda x: _double_threshold( - x, high_thres, low_thres, n_connect=n_connect), - axis=apply_dim, - arr=x) - - -def _double_threshold(x, high_thres, low_thres, n_connect=1, return_arr=True): # in nature, double_threshold considers boundary question - """_double_threshold - Computes a double threshold over the input array - - :param x: input array, needs to be 1d - :param high_thres: High threshold over the array - :param low_thres: Low threshold over the array - :param n_connect: Postprocessing, maximal distance between clusters to connect - :param return_arr: By default this function returns the filtered indiced, but if return_arr = True it returns an array of tsame size as x filled with ones and zeros. 
- """ - assert x.ndim == 1, "Input needs to be 1d" - high_locations = np.where(x > high_thres)[0] # return the index, where value is greater than high_thres - locations = x > low_thres # return true of false - encoded_pairs = find_contiguous_regions(locations) - # print('encoded_pairs ',encoded_pairs) - filtered_list = list( - filter( - lambda pair: - ((pair[0] <= high_locations) & (high_locations <= pair[1])).any(), - encoded_pairs)) # find encoded_pair where inclide a high_lacations - #print('filtered_list ',filtered_list) - filtered_list = connect_(filtered_list, n_connect) # if the distance of two pair is less than n_connect, we can merge them - if return_arr: - zero_one_arr = np.zeros_like(x, dtype=int) - for sl in filtered_list: - zero_one_arr[sl[0]:sl[1]] = 1 - return zero_one_arr - return filtered_list - - -def connect_clusters(x, n=1): - if x.ndim == 1: - return connect_clusters_(x, n) - if x.ndim >= 2: - return np.apply_along_axis(lambda a: connect_clusters_(a, n=n), -2, x) - - -def connect_clusters_(x, n=1): - """connect_clusters_ - Connects clustered predictions (0,1) in x with range n - - :param x: Input array. zero-one format - :param n: Number of frames to skip until connection can be made - """ - assert x.ndim == 1, "input needs to be 1d" - reg = find_contiguous_regions(x) - start_end = connect_(reg, n=n) - zero_one_arr = np.zeros_like(x, dtype=int) - for sl in start_end: - zero_one_arr[sl[0]:sl[1]] = 1 - return zero_one_arr - - -def connect_(pairs, n=1): - """connect_ - Connects two adjacent clusters if their distance is <= n - - :param pairs: Clusters of iterateables e.g., [(1,5),(7,10)] - :param n: distance between two clusters - """ - if len(pairs) == 0: - return [] - start_, end_ = pairs[0] - new_pairs = [] - for i, (next_item, cur_item) in enumerate(zip(pairs[1:], pairs[0:])): - end_ = next_item[1] - if next_item[0] - cur_item[1] <= n: - pass - else: - new_pairs.append((start_, cur_item[1])) - start_ = next_item[0] - new_pairs.append((start_, end_)) - return new_pairs - - -def predictions_to_time(df, ratio): - df.onset = df.onset * ratio - df.offset = df.offset * ratio - return df - -def upgrade_resolution(arr, scale): - print('arr ',arr.shape) - x = np.arange(0, arr.shape[0]) - f = interp1d(x, arr, kind='linear', axis=0, fill_value='extrapolate') - scale_x = np.arange(0, arr.shape[0], 1 / scale) - up_scale = f(scale_x) - return up_scale -# a = [0.1,0.2,0.3,0.8,0.4,0.1,0.3,0.9,0.4] -# a = np.array(a) -# b = a>0.2 -# _double_threshold(a,0.7,0.2) \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/dataset.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/dataset.py deleted file mode 100644 index c049ef047e209b0488b73ec9ae283bf425b5abe8..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/dataset.py +++ /dev/null @@ -1,147 +0,0 @@ -import collections -import csv -import logging -import os -import random -from glob import glob -from pathlib import Path - -import numpy as np -import torch -import torchvision - -logger = logging.getLogger(f'main.{__name__}') - - -class VGGSound(torch.utils.data.Dataset): - - def __init__(self, split, specs_dir, transforms=None, splits_path='./data', meta_path='./data/vggsound.csv'): - super().__init__() - self.split = split - self.specs_dir = specs_dir - self.transforms = transforms - self.splits_path = splits_path - self.meta_path = 
meta_path - - vggsound_meta = list(csv.reader(open(meta_path), quotechar='"')) - unique_classes = sorted(list(set(row[2] for row in vggsound_meta))) - self.label2target = {label: target for target, label in enumerate(unique_classes)} - self.target2label = {target: label for label, target in self.label2target.items()} - self.video2target = {row[0]: self.label2target[row[2]] for row in vggsound_meta} - - split_clip_ids_path = os.path.join(splits_path, f'vggsound_{split}.txt') - if not os.path.exists(split_clip_ids_path): - self.make_split_files() - clip_ids_with_timestamp = open(split_clip_ids_path).read().splitlines() - clip_paths = [os.path.join(specs_dir, v + '_mel.npy') for v in clip_ids_with_timestamp] - self.dataset = clip_paths - # self.dataset = clip_paths[:10000] # overfit one batch - - # 'zyTX_1BXKDE_16000_26000'[:11] -> 'zyTX_1BXKDE' - vid_classes = [self.video2target[Path(path).stem[:11]] for path in self.dataset] - class2count = collections.Counter(vid_classes) - self.class_counts = torch.tensor([class2count[cls] for cls in range(len(class2count))]) - - # self.sample_weights = [len(self.dataset) / class2count[self.video2target[Path(path).stem[:11]]] for path in self.dataset] - - def __getitem__(self, idx): - item = {} - - spec_path = self.dataset[idx] - # 'zyTX_1BXKDE_16000_26000' -> 'zyTX_1BXKDE' - video_name = Path(spec_path).stem[:11] - - item['input'] = np.load(spec_path) - item['input_path'] = spec_path - - # if self.split in ['train', 'valid']: - item['target'] = self.video2target[video_name] - item['label'] = self.target2label[item['target']] - - if self.transforms is not None: - item = self.transforms(item) - - return item - - def __len__(self): - return len(self.dataset) - - def make_split_files(self): - random.seed(1337) - logger.info(f'The split files do not exist @ {self.splits_path}. Calculating the new ones.') - # The downloaded videos (some went missing on YouTube and no longer available) - available_vid_paths = sorted(glob(os.path.join(self.specs_dir, '*_mel.npy'))) - logger.info(f'The number of clips available after download: {len(available_vid_paths)}') - - # original (full) train and test sets - vggsound_meta = list(csv.reader(open(self.meta_path), quotechar='"')) - train_vids = {row[0] for row in vggsound_meta if row[3] == 'train'} - test_vids = {row[0] for row in vggsound_meta if row[3] == 'test'} - logger.info(f'The number of videos in vggsound train set: {len(train_vids)}') - logger.info(f'The number of videos in vggsound test set: {len(test_vids)}') - - # class counts in test set. 
We would like to have the same distribution in valid - unique_classes = sorted(list(set(row[2] for row in vggsound_meta))) - label2target = {label: target for target, label in enumerate(unique_classes)} - video2target = {row[0]: label2target[row[2]] for row in vggsound_meta} - test_vid_classes = [video2target[vid] for vid in test_vids] - test_target2count = collections.Counter(test_vid_classes) - - # now given the counts from test set, sample the same count for validation and the rest leave in train - train_vids_wo_valid, valid_vids = set(), set() - for target, label in enumerate(label2target.keys()): - class_train_vids = [vid for vid in train_vids if video2target[vid] == target] - random.shuffle(class_train_vids) - count = test_target2count[target] - valid_vids.update(class_train_vids[:count]) - train_vids_wo_valid.update(class_train_vids[count:]) - - # make file with a list of available test videos (each video should contain timestamps as well) - train_i = valid_i = test_i = 0 - with open(os.path.join(self.splits_path, 'vggsound_train.txt'), 'w') as train_file, \ - open(os.path.join(self.splits_path, 'vggsound_valid.txt'), 'w') as valid_file, \ - open(os.path.join(self.splits_path, 'vggsound_test.txt'), 'w') as test_file: - for path in available_vid_paths: - path = path.replace('_mel.npy', '') - vid_name = Path(path).name - # 'zyTX_1BXKDE_16000_26000'[:11] -> 'zyTX_1BXKDE' - if vid_name[:11] in train_vids_wo_valid: - train_file.write(vid_name + '\n') - train_i += 1 - elif vid_name[:11] in valid_vids: - valid_file.write(vid_name + '\n') - valid_i += 1 - elif vid_name[:11] in test_vids: - test_file.write(vid_name + '\n') - test_i += 1 - else: - raise Exception(f'Clip {vid_name} is neither in train, valid nor test. Strange.') - - logger.info(f'Put {train_i} clips to the train set and saved it to ./data/vggsound_train.txt') - logger.info(f'Put {valid_i} clips to the valid set and saved it to ./data/vggsound_valid.txt') - logger.info(f'Put {test_i} clips to the test set and saved it to ./data/vggsound_test.txt') - - -if __name__ == '__main__': - from transforms import Crop, StandardNormalizeAudio, ToTensor - specs_path = '/home/nvme/data/vggsound/features/melspec_10s_22050hz/' - - transforms = torchvision.transforms.transforms.Compose([ - StandardNormalizeAudio(specs_path), - ToTensor(), - Crop([80, 848]), - ]) - - datasets = { - 'train': VGGSound('train', specs_path, transforms), - 'valid': VGGSound('valid', specs_path, transforms), - 'test': VGGSound('test', specs_path, transforms), - } - - print(datasets['train'][0]) - print(datasets['valid'][0]) - print(datasets['test'][0]) - - print(datasets['train'].class_counts) - print(datasets['valid'].class_counts) - print(datasets['test'].class_counts) diff --git a/spaces/AIWaves/Debate/src/agents/Prompt/__init__.py b/spaces/AIWaves/Debate/src/agents/Prompt/__init__.py deleted file mode 100644 index da69c35ed2c4ec583721339c324a53d5622429d1..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/Debate/src/agents/Prompt/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .base_Prompts import * \ No newline at end of file diff --git a/spaces/AIWaves/SOP_Generation-single/gradio_config.py b/spaces/AIWaves/SOP_Generation-single/gradio_config.py deleted file mode 100644 index ba519c0f3a71037e6e209d3da21d034626291953..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/SOP_Generation-single/gradio_config.py +++ /dev/null @@ -1,439 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The AIWaves Inc. team. 
- -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import json -from PIL import Image -import requests -from typing import List, Tuple - -class GradioConfig: - # How many avatars are currently registered - POINTER = 0 - - # Avatar image. You can add or replace. - AGENT_HEAD_URL = [ - "https://img.touxiangwu.com/zb_users/upload/2023/06/202306241687579617434043.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/06/202306241687592097408547.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726561699613.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726561275758.jpg", - "https://img.touxiangwu.com/uploads/allimg/2021090300/ry5k31wt33c.jpg", - "https://img.touxiangwu.com/uploads/allimg/2021090300/0ls2gmwhrf5.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/02/202302281677545695326193.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/03/202303271679886128550253.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686711344407060.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686711345834296.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/05/202305171684311194291520.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/05/202305171684311196958993.jpg", - "https://img.touxiangwu.com/uploads/allimg/2021082612/vr0bkov0dwl.jpg", - "https://img.touxiangwu.com/uploads/allimg/2021082612/auqx5zfsv5g.jpg", - "https://img.touxiangwu.com/uploads/allimg/2021082612/llofpivtwls.jpg", - "https://img.touxiangwu.com/uploads/allimg/2021082612/3j2sdot3ye0.jpg", - "https://img.touxiangwu.com/2020/3/nQfYf2.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/08/202308131691918068774532.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/08/202308131691918068289945.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/08/202308131691918069785183.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726561292003.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726561578616.jpg", - "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726564597524.jpg" - ] - USER_HEAD_URL = "https://img.touxiangwu.com/zb_users/upload/2023/05/202305301685407468585486.jpg" - - # The css style of gradio.Chatbot - CSS = """ - #chatbot1 .user { - background-color:transparent; - border-color:transparent; - } - #chatbot1 .bot { - background-color:transparent; - border-color:transparent; - } - #btn {color: red; border-color: red;} - """ - - ID = ["USER", "AGENT", "SYSTEM"] - - # Bubble template - BUBBLE_CSS = { - # Background-color Name-color Name-content Font-color Font-size Content Avatar-URL - "USER": """ -
-
-

{}

-

{}

-
- USER -
- """, - - # Avatar-URL Background-color Name-color Name-Content Font-color Font-size Content - "AGENT": """ -
- AGENT -
-

{}

-

{}

-
-
- """, - - # Background-color Font-size Font-color Name Content - "SYSTEM": """ -
-
-

{}:{}

-
-
- """ - } - - ROLE_2_NAME = {} - - OBJECT_INFO = { - - "User": { - # https://img-blog.csdnimg.cn/img_convert/7c20bc39ac69b6972a22e18762d02db3.jpeg - "head_url": USER_HEAD_URL, - "bubble_color": "#95EC69", - "text_color": "#000000", - "font_size": 0, - "id": "USER" - }, - - "System": { - # https://img-blog.csdnimg.cn/img_convert/e7e5887cfff67df8c2205c2ef0e5e7fa.png - "head_url": "https://img.touxiangwu.com/zb_users/upload/2023/03/202303141678768524747045.jpg", - "bubble_color": "#7F7F7F", ##FFFFFF - "text_color": "#FFFFFF", ##000000 - "font_size": 0, - "id": "SYSTEM" - }, - - "wait": { - "head_url": "https://img.touxiangwu.com/zb_users/upload/2022/12/202212011669881536145501.jpg", - "bubble_color": "#E7CBA6", - "text_color": "#000000", - "font_size": 0, - "id": "AGENT" - }, - - "Recorder": { - "head_url": "https://img.touxiangwu.com/zb_users/upload/2023/02/202302281677545695326193.jpg", - "bubble_color": "#F7F7F7", - "text_color": "#000000", - "font_size": 0, - "id": "AGENT" - } - } - - @classmethod - def color_for_img(cls, url): - """ - Extract the main colors from the picture and set them as the background color, - then determine the corresponding text color. - """ - - def get_main_color(image): - image = image.convert("RGB") - width, height = image.size - pixels = image.getcolors(width * height) - most_common_pixel = max(pixels, key=lambda item: item[0]) - return most_common_pixel[1] - - def is_dark_color(rgb_color): - r, g, b = rgb_color - luminance = (0.299 * r + 0.587 * g + 0.114 * b) / 255 - return luminance < 0.5 - - def download_image(url): - print(f"binding: {url}") - response = requests.get(url) - if response.status_code == 200: - with open('image.jpg', 'wb') as f: - f.write(response.content) - - def rgb_to_hex(color): - return "#{:02X}{:02X}{:02X}".format(color[0], color[1], color[2]) - - def get_color(image_url): - download_image(image_url) - - image = Image.open("image.jpg") - main_color = get_main_color(image) - is_dark = is_dark_color(main_color) - - if is_dark: - font_color = "#FFFFFF" - else: - font_color = "#000000" - - return rgb_to_hex(main_color), font_color - - return get_color(url) - - @classmethod - def init(cls, JSON): - # Deprecated - with open(JSON) as f: - sop = json.load(f) - cnt = 0 - FISRT_NODE = True - fisrt_node_roles = [] - for node_name in sop['nodes']: - node_info = sop['nodes'][node_name] - agent_states = node_info['agent_states'] - for agent_role in agent_states: - name = agent_states[agent_role]['style']['name'] - cls.ROLE_2_NAME[agent_role] = name - if FISRT_NODE: - fisrt_node_roles.append(agent_role) - bubble_color, text_color = cls.color_for_img(cls.AGENT_HEAD_URL[cnt]) - cls.OBJECT_INFO[name] = { - "head_url": f"{cls.AGENT_HEAD_URL[cnt]}", - "bubble_color": bubble_color, - "text_color": text_color, - "font_size": 0, - "id": "AGENT" - } - cnt += 1 - if FISRT_NODE: - FISRT_NODE = False - print(cls.OBJECT_INFO) - for usr_name in cls.OBJECT_INFO: - if cls.OBJECT_INFO[usr_name]["id"] == "SYSTEM": - cls.OBJECT_INFO[usr_name]["font_size"] = 12 - elif cls.OBJECT_INFO[usr_name]["id"] in ["USER", "AGENT"]: - cls.OBJECT_INFO[usr_name]["font_size"] = 16 - else: - assert False - return fisrt_node_roles - - @classmethod - def add_agent(cls, agents_name:List,p:int=None): - if p != None: - cls.POINTER = p - for name in agents_name: - bubble_color, text_color = cls.color_for_img(cls.AGENT_HEAD_URL[cls.POINTER]) - cls.OBJECT_INFO[name] = { - "head_url": f"{cls.AGENT_HEAD_URL[cls.POINTER]}", - "bubble_color": bubble_color, - "text_color": text_color, - "font_size": 
0, - "id": "AGENT" - } - cls.POINTER += 1 - for usr_name in cls.OBJECT_INFO: - if cls.OBJECT_INFO[usr_name]["id"] == "SYSTEM": - cls.OBJECT_INFO[usr_name]["font_size"] = 12 - elif cls.OBJECT_INFO[usr_name]["id"] in ["USER", "AGENT"]: - cls.OBJECT_INFO[usr_name]["font_size"] = 16 - else: - assert False - - -class StateConfig: - """UI configuration for the step progress bar (indicating the current node)""" - - CSS = """ -:root { - --gradient-start: 100%; - --gradient-end: 0%; - } -.container.progress-bar-container { - position: relative; - display: flex; - align-items: flex-end; - width: 100%; - overflow-x: auto; - padding-bottom: 30px; - padding-top: 20px -} -.container.progress-bar-container::-webkit-scrollbar { - width: 8px; - background-color: transparent; -} - -.container.progress-bar-container::-webkit-scrollbar-thumb { - background-color: transparent; -} - -.progress-bar-container .progressbar { - counter-reset: step; - white-space: nowrap; -} -.progress-bar-container .progressbar li { - list-style: none; - display: inline-block; - width: 200px; - position: relative; - text-align: center; - cursor: pointer; - white-space: normal; -} -.progress-bar-container .progressbar li:before { - content: counter(step); - counter-increment: step; - width: 30px; - height: 30px; - line-height: 30px; - border: 1px solid #ddd; - border-radius: 100%; - display: block; - text-align: center; - margin: 0 auto 10px auto; - background-color: #ffffff; -} -.progress-bar-container .progressbar li:after { - content: attr(data-content); - position: absolute; - width: 87%; - height: 2px; - background-color: #dddddd; - top: 15px; - left: -45%; -} -.progress-bar-container .progressbar li:first-child:after { - content: none; -} -.progress-bar-container .progressbar li.active { - color: green; -} -.progress-bar-container .progressbar li.active:before { - border-color: green; - background-color: green; - color: white; -} -.progress-bar-container .progressbar li.active + li:after { - background: linear-gradient(to right, green var(--gradient-start), lightgray var(--gradient-end)); -} -.progress-bar-container .small-element { - transform: scale(0.8); -} -.progress-bar-container .progressbar li span { - position: absolute; - top: 40px; - left: 0; - width: 100%; - text-align: center; -} -.progress-bar-container .progressbar li .data-content { - position: absolute; - width: 100%; - top: -10px; - left: -100px; - text-align: center; -} -""" - - FORMAT = """ - - - - - -
-
-
-
    - {} -
-
-
- - -""" - - STATES_NAME:List[str] = None - - @classmethod - def _generate_template(cls, types:str)->str: - # normal: A state with no execution. - # active-show-up: Active state, and content displayed above the horizontal line. - # active-show-down: Active state, and content displayed below the horizontal line. - # active-show-both: Active state, and content displayed both above and below the horizontal line. - # active-show-none: Active state, with no content displayed above the horizontal line. - - assert types.lower() in ["normal","active-show-up", "active-show-down", "active-show-both", "active", "active-show-none"] - both_templates = """
  • -
    -
    -

    - {} -

    - {} -

    -
    -
    - {} -
  • """ - - if types.lower() == "normal": - templates = "
  • {}
  • " - elif types.lower() == "active": - templates = """
  • {}
  • """ - elif types.lower() == "active-show-up": - templates = both_templates.format("{}","{}", "{}", "", "{}") - elif types.lower() == "active-show-down": - templates = both_templates.format("{}","{}", "", "{}", "{}") - elif types.lower() == "active-show-both": - templates = both_templates - elif types.lower() == "active-show-none": - templates = """
  • - {} -
  • """ - else: - assert False - return templates - - @classmethod - def update_states(cls, current_states:List[int], current_templates:List[str], show_content:List[Tuple[str]])->str: - assert len(current_states) == len(current_templates) - # You can dynamically change the number of states. - # assert len(current_states) == len(cls.STATES_NAME) - css_code = [] - for idx in range(len(current_states)): - if idx == 0: - if current_states[idx] != 0: - css_code = [f"{cls._generate_template('active').format(cls.STATES_NAME[idx])}"] - else: - css_code = [f"{cls._generate_template('normal').format(cls.STATES_NAME[idx])}"] - continue - if current_states[idx-1] == 0: - # new_code = f"{cls._generate_template('normal').format(*(show_content[idx]))}" - new_code = f"{cls._generate_template('normal').format(cls.STATES_NAME[idx])}" - else: - new_code = f"{cls._generate_template(current_templates[idx]).format(current_states[idx-1], 100-current_states[idx-1],*(show_content[idx-1]), cls.STATES_NAME[idx])}" - if current_states[idx-1] != 100 or (current_states[idx]==0 and current_states[idx-1]==100): - new_code = new_code.replace("""li class="active" ""","""li """) - css_code.append(new_code) - return "\n".join(css_code) - - @classmethod - def create_states(cls, states_name:List[str], manual_create_end_nodes:bool=False): - # Create states - if manual_create_end_nodes: - states_name.append("Done") - css_code = "" - cls.STATES_NAME: List[str] = states_name - for name in states_name: - css_code = f"{css_code}\n{cls._generate_template('normal').format(name)}" - return css_code - - -if __name__ == '__main__': - pass diff --git a/spaces/AUST001/HDTV/app.py b/spaces/AUST001/HDTV/app.py deleted file mode 100644 index 9b97743edb652a5984a62d5175698bfa9bb85f3d..0000000000000000000000000000000000000000 --- a/spaces/AUST001/HDTV/app.py +++ /dev/null @@ -1,242 +0,0 @@ -import numpy as np -import torch -import matplotlib.pyplot as plt -import gradio as gr -import io -import numpy as np -from PIL import Image -from einops.layers.torch import Rearrange, Reduce - -def visualize_matrices(matrices_text, show_colorbar=False): - def mul(x): - res = 1 - for i in x: - res *= i - return res - # Example usage: - matrices = torch.arange(mul(eval(matrices_text))).reshape(*eval(matrices_text)) - # 只支持pytorch中的tensor数据类型 - if not torch.is_tensor(matrices): - raise ValueError("Input should be a pytorch tensor.") - if len(matrices.shape)==1: - matrices = matrices.reshape(1, matrices.shape[0]) - if len(matrices.shape)==3 and matrices.shape[0]==1: - matrices = matrices.reshape(matrices.shape[1], matrices.shape[2]) - # 支持二维矩阵 - if len(matrices.shape)==2: - matrices = torch.flip(matrices, (0,)).numpy() - plt.figure(figsize=(5, 5)) - cax = plt.matshow(matrices, cmap='coolwarm', origin='lower') - - for i in range(matrices.shape[0]): - for j in range(matrices.shape[1]): - plt.text(j, i, str(round(matrices[i, j],3)), ha='center', va='center', fontsize=12, color='black') - - plt.xticks([]) - plt.yticks([]) - - if show_colorbar: - plt.colorbar(cax) - - # 将Matplotlib图像转换为PIL图像 - buf = io.BytesIO() - # plt.savefig(buf, format='png') - # buf.seek(0) - # image = Image.open(buf) - # 使用bbox_inches和pad_inches调整保存的图像 - plt.savefig(buf, format='png', bbox_inches='tight', pad_inches=0) - buf.seek(0) - image = Image.open(buf) - - # 清除当前图像,以便为下一个请求绘制新图像 - plt.clf() - - return image - else: - cols = 1 - rows = 1 - num = 0 - for i in matrices.shape[:-2]: - if num%2==0: - rows = rows*i - else: - cols = cols*i - num += 1 - - fig, axes = plt.subplots(rows, cols, 
figsize=(cols * 5, rows * 5)) - - - matrices = matrices.reshape(-1,matrices.shape[-2],matrices.shape[-1]) - - - for i, matrix in enumerate(matrices): - if len(matrix.shape) != 2: - raise ValueError("Each matrix should have exactly 2 dimensions.") - matrix = torch.flip(matrix, (0,)).numpy() - - ax = axes.flatten()[i] - cax = ax.matshow(matrix, cmap='coolwarm', origin='lower') - - for x in range(matrix.shape[0]): - for y in range(matrix.shape[1]): - ax.text(y, x, str(round(matrix[x, y],2)), ha='center', va='center', fontsize=12, color='black') - - ax.set_xticks([]) - ax.set_yticks([]) - # 添加标题 - # axs[i, j].set_title(f"Layer {i+1}, Row {j+1}", fontsize=14) - - if show_colorbar: - plt.colorbar(cax, ax=ax) - - plt.tight_layout() - # 将Matplotlib图像转换为PIL图像 - buf = io.BytesIO() - # plt.savefig(buf, format='png') - # buf.seek(0) - # image = Image.open(buf) - # 使用bbox_inches和pad_inches调整保存的图像 - plt.savefig(buf, format='png', bbox_inches='tight', pad_inches=0) - buf.seek(0) - image = Image.open(buf) - - # 清除当前图像,以便为下一个请求绘制新图像 - plt.clf() - - return image - -def visualize_second_matrices(matrices_text, do_what, show_colorbar=False): - def mul(x): - res = 1 - for i in x: - res *= i - return res - # Example usage: - matrices = torch.arange(mul(eval(matrices_text))).reshape(*eval(matrices_text)) - for do in do_what.split('&'): - matrices = eval(do)(matrices) - # 只支持pytorch中的tensor数据类型 - if not torch.is_tensor(matrices): - raise ValueError("Input should be a pytorch tensor.") - if len(matrices.shape)==1: - matrices = matrices.reshape(1, matrices.shape[0]) - if len(matrices.shape)==3 and matrices.shape[0]==1: - matrices = matrices.reshape(matrices.shape[1], matrices.shape[2]) - # 支持二维矩阵 - if len(matrices.shape)==2: - matrices = torch.flip(matrices, (0,)).numpy() - plt.figure(figsize=(5, 5)) - cax = plt.matshow(matrices, cmap='coolwarm', origin='lower') - - for i in range(matrices.shape[0]): - for j in range(matrices.shape[1]): - plt.text(j, i, str(round(matrices[i, j],3)), ha='center', va='center', fontsize=12, color='black') - - plt.xticks([]) - plt.yticks([]) - - if show_colorbar: - plt.colorbar(cax) - - # 将Matplotlib图像转换为PIL图像 - buf = io.BytesIO() - # plt.savefig(buf, format='png') - # buf.seek(0) - # image = Image.open(buf) - # 使用bbox_inches和pad_inches调整保存的图像 - plt.savefig(buf, format='png', bbox_inches='tight', pad_inches=0) - buf.seek(0) - image = Image.open(buf) - - # 清除当前图像,以便为下一个请求绘制新图像 - plt.clf() - - return image - else: - cols = 1 - rows = 1 - num = 0 - for i in matrices.shape[:-2]: - if num%2==0: - rows = rows*i - else: - cols = cols*i - num += 1 - - fig, axes = plt.subplots(rows, cols, figsize=(cols * 5, rows * 5)) - - - matrices = matrices.reshape(-1,matrices.shape[-2],matrices.shape[-1]) - - - for i, matrix in enumerate(matrices): - if len(matrix.shape) != 2: - raise ValueError("Each matrix should have exactly 2 dimensions.") - matrix = torch.flip(matrix, (0,)).numpy() - - ax = axes.flatten()[i] - cax = ax.matshow(matrix, cmap='coolwarm', origin='lower') - - for x in range(matrix.shape[0]): - for y in range(matrix.shape[1]): - ax.text(y, x, str(round(matrix[x, y],2)), ha='center', va='center', fontsize=12, color='black') - - ax.set_xticks([]) - ax.set_yticks([]) - # 添加标题 - # axs[i, j].set_title(f"Layer {i+1}, Row {j+1}", fontsize=14) - - if show_colorbar: - plt.colorbar(cax, ax=ax) - - plt.tight_layout() - # 将Matplotlib图像转换为PIL图像 - buf = io.BytesIO() - # plt.savefig(buf, format='png') - # buf.seek(0) - # image = Image.open(buf) - # 使用bbox_inches和pad_inches调整保存的图像 - plt.savefig(buf, 
format='png', bbox_inches='tight', pad_inches=0) - buf.seek(0) - image = Image.open(buf) - - # 清除当前图像,以便为下一个请求绘制新图像 - plt.clf() - - return image - - -def generate_images(text1, text2): - image1 = visualize_matrices(text1) - image2 = visualize_second_matrices(text1, text2) - - return image1, image2 - -inputs = [gr.inputs.Textbox(lines=2, placeholder="tensor dims"), - gr.inputs.Textbox(lines=2, placeholder="what to do?")] - -outputs = [gr.outputs.Image(type="pil"), - gr.outputs.Image(type="pil")] - -demo = gr.Interface(fn=generate_images, inputs=inputs, outputs=outputs, - title="高维数据可视化工具", - description=""" -理解维度变换的三个关键: -1.理解每个维度代表的含义,例如(b,c,h,w)(b,l,e)等 -2.理解reshape/view的本质 -3.理解高维张量转置的本质 - -矩阵乘和Linear的理解: -1.attention中的矩阵乘就是用下图中的每一个矩阵和权重矩阵相乘,矩阵和矩阵之间没有特征交互 -2.Linear中的矩阵乘就是用下图中的每一个矩阵的每一行和权重矩阵相乘,行与行之间没有特征交互 - """, - examples=[ - ["[2, 3, 4]", "Rearrange('c h w -> c w h')"], - ["[2, 3, 4]", "Rearrange('c h w -> c w h')&Rearrange('c h w -> c w h')&Rearrange('c h w -> c w h')"], - ["[2, 3, 4, 4]", "Rearrange('b c h w -> b c (h w)')"], - ["[2, 3, 4, 4]", "Rearrange('b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1 = 2, p2 = 2)"], - ["[2, 3, 4, 4]", "Rearrange('b c (h p1) (w p2) -> b h w (p1 p2 c)', p1 = 2, p2 = 2)&Rearrange('b h w (c s) -> b w c (h s)', s=2)"] - ] - ) -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/EasyChat.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/EasyChat.py deleted file mode 100644 index dae5196dd28f1b97d34fc19e0b65f919153a2b30..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/EasyChat.py +++ /dev/null @@ -1,111 +0,0 @@ -from __future__ import annotations - -import json -import random - -import requests - -from ..typing import Any, CreateResult -from .base_provider import BaseProvider - - -class EasyChat(BaseProvider): - url: str = "https://free.easychat.work" - supports_stream = True - supports_gpt_35_turbo = True - working = False - - @staticmethod - def create_completion( - model: str, - messages: list[dict[str, str]], - stream: bool, **kwargs: Any) -> CreateResult: - - active_servers = [ - "https://chat10.fastgpt.me", - "https://chat9.fastgpt.me", - "https://chat1.fastgpt.me", - "https://chat2.fastgpt.me", - "https://chat3.fastgpt.me", - "https://chat4.fastgpt.me", - "https://gxos1h1ddt.fastgpt.me" - ] - - server = active_servers[kwargs.get("active_server", random.randint(0, 5))] - headers = { - "authority" : f"{server}".replace("https://", ""), - "accept" : "text/event-stream", - "accept-language" : "en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3,fa=0.2", - "content-type" : "application/json", - "origin" : f"{server}", - "referer" : f"{server}/", - "x-requested-with" : "XMLHttpRequest", - 'plugins' : '0', - 'sec-ch-ua' : '"Chromium";v="116", "Not)A;Brand";v="24", "Google Chrome";v="116"', - 'sec-ch-ua-mobile' : '?0', - 'sec-ch-ua-platform': '"Windows"', - 'sec-fetch-dest' : 'empty', - 'sec-fetch-mode' : 'cors', - 'sec-fetch-site' : 'same-origin', - 'user-agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36', - 'usesearch' : 'false', - 'x-requested-with' : 'XMLHttpRequest' - } - - json_data = { - "messages" : messages, - "stream" : stream, - "model" : model, - "temperature" : kwargs.get("temperature", 0.5), - "presence_penalty" : kwargs.get("presence_penalty", 0), - "frequency_penalty" : kwargs.get("frequency_penalty", 0), - "top_p" : kwargs.get("top_p", 1) - } 
- - session = requests.Session() - # init cookies from server - session.get(f"{server}/") - - response = session.post(f"{server}/api/openai/v1/chat/completions", - headers=headers, json=json_data, stream=stream) - - if response.status_code == 200: - - if stream == False: - json_data = response.json() - - if "choices" in json_data: - yield json_data["choices"][0]["message"]["content"] - else: - raise Exception("No response from server") - - else: - - for chunk in response.iter_lines(): - - if b"content" in chunk: - splitData = chunk.decode().split("data:") - - if len(splitData) > 1: - yield json.loads(splitData[1])["choices"][0]["delta"]["content"] - else: - continue - else: - raise Exception(f"Error {response.status_code} from server : {response.reason}") - - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ("temperature", "float"), - ("presence_penalty", "int"), - ("frequency_penalty", "int"), - ("top_p", "int"), - ("active_server", "int"), - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/actions/EliminateChess.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/actions/EliminateChess.js deleted file mode 100644 index 46e7140c214fddc39004bf597ffeb7434665378f..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/actions/EliminateChess.js +++ /dev/null @@ -1,15 +0,0 @@ -/* -1. Fade-out-destroy chess -*/ - -import FadeOutDestroy from '../../../plugins/fade-out-destroy.js'; - -var EliminateChess = function (chessArray, board, bejeweled) { - const duration = 500; //ms - for (var i = 0, cnt = chessArray.length; i < cnt; i++) { - var fade = FadeOutDestroy(chessArray[i], duration); - bejeweled.waitEvent(fade, 'complete'); - } -} - -export default EliminateChess; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/Factory.js deleted file mode 100644 index 336daaa3c50fc23fb72008cd1559b93b56442b99..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import HolyGrail from './HolyGrail.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('holyGrail', function (config) { - var gameObject = new HolyGrail(this.scene, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.HolyGrail', HolyGrail); - -export default HolyGrail; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/label/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/label/Factory.js deleted file mode 100644 index 0ebb6962e2eaaddfce53a4532b014bfd54035f53..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/label/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import Label from './Label.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - 
-ObjectFactory.register('label', function (config) { - var gameObject = new Label(this.scene, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.Label', Label); - -export default Label; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/Factory.js deleted file mode 100644 index 65ddf0d0c0cee52c6e9550f18cbd969533225991..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import Slider from './Slider.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('slider', function (config) { - var gameObject = new Slider(this.scene, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.Slider', Slider); - -export default Slider; \ No newline at end of file diff --git a/spaces/Akseluhr/whisper-sv-SE-auhr/README.md b/spaces/Akseluhr/whisper-sv-SE-auhr/README.md deleted file mode 100644 index 0ccc640203f73112abd278f61b96a1b2f5a9fca5..0000000000000000000000000000000000000000 --- a/spaces/Akseluhr/whisper-sv-SE-auhr/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Whisper Se Auhr -emoji: 💻 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AlanMars/QYL-AI-Space/assets/custom.css b/spaces/AlanMars/QYL-AI-Space/assets/custom.css deleted file mode 100644 index 80e85715b128d0ac34b9abe2ab9926c65b84b28d..0000000000000000000000000000000000000000 --- a/spaces/AlanMars/QYL-AI-Space/assets/custom.css +++ /dev/null @@ -1,503 +0,0 @@ -:root { - --chatbot-color-light: #000000; - --chatbot-color-dark: #FFFFFF; - --chatbot-background-color-light: #F3F3F3; - --chatbot-background-color-dark: #121111; - --message-user-background-color-light: #95EC69; - --message-user-background-color-dark: #26B561; - --message-bot-background-color-light: #FFFFFF; - --message-bot-background-color-dark: #2C2C2C; -} - -#app_title { - font-weight: var(--prose-header-text-weight); - font-size: var(--text-xxl); - line-height: 1.3; - text-align: left; - margin-top: 6px; - white-space: nowrap; -} -#description { - text-align: center; - margin: 32px 0 4px 0; -} - -/* gradio的页脚信息 */ -footer { - /* display: none !important; */ - margin-top: .2em !important; - font-size: 85%; -} -#footer { - text-align: center; -} -#footer div { - display: inline-block; -} -#footer .versions{ - font-size: 85%; - opacity: 0.60; -} - -#float_display { - position: absolute; - max-height: 30px; -} -/* user_info */ -#user_info { - white-space: nowrap; - position: absolute; left: 8em; top: .2em; - z-index: var(--layer-2); - box-shadow: var(--block-shadow); - border: none; border-radius: var(--block-label-radius); - background: var(--color-accent); - padding: var(--block-label-padding); - font-size: var(--block-label-text-size); line-height: var(--line-sm); - width: auto; min-height: 30px!important; - opacity: 1; - transition: opacity 0.3s ease-in-out; -} -#user_info .wrap { - opacity: 0; -} -#user_info p { - color: white; - font-weight: var(--block-label-text-weight); -} -/* -#user_info.hideK { - opacity: 0; - 
transition: opacity 1s ease-in-out; -} -*/ - -/* status_display */ -#status_display { - display: flex; - min-height: 2em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: ui-monospace, "SF Mono", "SFMono-Regular", "Menlo", "Consolas", "Liberation Mono", "Microsoft Yahei UI", "Microsoft Yahei", monospace; - /* Windows下中文的monospace会fallback为新宋体,实在太丑,这里折中使用微软雅黑 */ - color: var(--body-text-color-subdued); -} - -#status_display { - transition: all 0.6s; -} -#chuanhu_chatbot { - transition: height 0.3s ease; -} - -/* usage_display */ -.insert_block { - position: relative; - margin: 0; - padding: .5em 1em; - box-shadow: var(--block-shadow); - border-width: var(--block-border-width); - border-color: var(--block-border-color); - border-radius: var(--block-radius); - background: var(--block-background-fill); - width: 100%; - line-height: var(--line-sm); - min-height: 2em; -} -#usage_display p, #usage_display span { - margin: 0; - font-size: .85em; - color: var(--body-text-color-subdued); -} -.progress-bar { - background-color: var(--input-background-fill);; - margin: .5em 0 !important; - height: 20px; - border-radius: 10px; - overflow: hidden; -} -.progress { - background-color: var(--block-title-background-fill); - height: 100%; - border-radius: 10px; - text-align: right; - transition: width 0.5s ease-in-out; -} -.progress-text { - /* color: white; */ - color: var(--color-accent) !important; - font-size: 1em !important; - font-weight: bold; - padding-right: 10px; - line-height: 20px; -} - -.apSwitch { - top: 2px; - display: inline-block; - height: 24px; - position: relative; - width: 48px; - border-radius: 12px; -} -.apSwitch input { - display: none !important; -} -.apSlider { - background-color: var(--neutral-200); - bottom: 0; - cursor: pointer; - left: 0; - position: absolute; - right: 0; - top: 0; - transition: .4s; - font-size: 18px; - border-radius: 12px; -} -.apSlider::before { - bottom: -1.5px; - left: 1px; - position: absolute; - transition: .4s; - content: "🌞"; -} -input:checked + .apSlider { - background-color: var(--primary-600); -} -input:checked + .apSlider::before { - transform: translateX(23px); - content:"🌚"; -} - -/* Override Slider Styles (for webkit browsers like Safari and Chrome) - * 好希望这份提案能早日实现 https://github.com/w3c/csswg-drafts/issues/4410 - * 进度滑块在各个平台还是太不统一了 - */ -input[type="range"] { - -webkit-appearance: none; - height: 4px; - background: var(--input-background-fill); - border-radius: 5px; - background-image: linear-gradient(var(--primary-500),var(--primary-500)); - background-size: 0% 100%; - background-repeat: no-repeat; -} -input[type="range"]::-webkit-slider-thumb { - -webkit-appearance: none; - height: 20px; - width: 20px; - border-radius: 50%; - border: solid 0.5px #ddd; - background-color: white; - cursor: ew-resize; - box-shadow: var(--input-shadow); - transition: background-color .1s ease; -} -input[type="range"]::-webkit-slider-thumb:hover { - background: var(--neutral-50); -} -input[type=range]::-webkit-slider-runnable-track { - -webkit-appearance: none; - box-shadow: none; - border: none; - background: transparent; -} - -#submit_btn, #cancel_btn { - height: 42px !important; -} -#submit_btn::before { - content: url("data:image/svg+xml, %3Csvg width='21px' height='20px' viewBox='0 0 21 20' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='page' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cg id='send' 
transform='translate(0.435849, 0.088463)' fill='%23FFFFFF' fill-rule='nonzero'%3E %3Cpath d='M0.579148261,0.0428666046 C0.301105539,-0.0961547561 -0.036517765,0.122307382 0.0032026237,0.420210298 L1.4927172,18.1553639 C1.5125774,18.4334066 1.79062012,18.5922882 2.04880264,18.4929872 L8.24518329,15.8913017 L11.6412765,19.7441794 C11.8597387,19.9825018 12.2370824,19.8832008 12.3165231,19.5852979 L13.9450591,13.4882182 L19.7839562,11.0255541 C20.0619989,10.8865327 20.0818591,10.4694687 19.7839562,10.3105871 L0.579148261,0.0428666046 Z M11.6138902,17.0883151 L9.85385903,14.7195502 L0.718169621,0.618812241 L12.69945,12.9346347 L11.6138902,17.0883151 Z' id='shape'%3E%3C/path%3E %3C/g%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} -#cancel_btn::before { - content: url("data:image/svg+xml,%3Csvg width='21px' height='21px' viewBox='0 0 21 21' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='pg' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cpath d='M10.2072007,20.088463 C11.5727865,20.088463 12.8594566,19.8259823 14.067211,19.3010209 C15.2749653,18.7760595 16.3386126,18.0538087 17.2581528,17.1342685 C18.177693,16.2147282 18.8982283,15.1527965 19.4197586,13.9484733 C19.9412889,12.7441501 20.202054,11.4557644 20.202054,10.0833163 C20.202054,8.71773046 19.9395733,7.43106036 19.4146119,6.22330603 C18.8896505,5.01555169 18.1673997,3.95018885 17.2478595,3.0272175 C16.3283192,2.10424615 15.2646719,1.3837109 14.0569176,0.865611739 C12.8491633,0.34751258 11.5624932,0.088463 10.1969073,0.088463 C8.83132146,0.088463 7.54636692,0.34751258 6.34204371,0.865611739 C5.1377205,1.3837109 4.07407321,2.10424615 3.15110186,3.0272175 C2.22813051,3.95018885 1.5058797,5.01555169 0.984349419,6.22330603 C0.46281914,7.43106036 0.202054,8.71773046 0.202054,10.0833163 C0.202054,11.4557644 0.4645347,12.7441501 0.9894961,13.9484733 C1.5144575,15.1527965 2.23670831,16.2147282 3.15624854,17.1342685 C4.07578877,18.0538087 5.1377205,18.7760595 6.34204371,19.3010209 C7.54636692,19.8259823 8.83475258,20.088463 10.2072007,20.088463 Z M10.2072007,18.2562448 C9.07493099,18.2562448 8.01471483,18.0452309 7.0265522,17.6232031 C6.03838956,17.2011753 5.17031614,16.6161693 4.42233192,15.8681851 C3.6743477,15.1202009 3.09105726,14.2521274 2.67246059,13.2639648 C2.25386392,12.2758022 2.04456558,11.215586 2.04456558,10.0833163 C2.04456558,8.95104663 2.25386392,7.89083047 2.67246059,6.90266784 C3.09105726,5.9145052 3.6743477,5.04643178 4.42233192,4.29844756 C5.17031614,3.55046334 6.036674,2.9671729 7.02140552,2.54857623 C8.00613703,2.12997956 9.06463763,1.92068122 10.1969073,1.92068122 C11.329177,1.92068122 12.3911087,2.12997956 13.3827025,2.54857623 C14.3742962,2.9671729 15.2440852,3.55046334 15.9920694,4.29844756 C16.7400537,5.04643178 17.3233441,5.9145052 17.7419408,6.90266784 C18.1605374,7.89083047 18.3698358,8.95104663 18.3698358,10.0833163 C18.3698358,11.215586 18.1605374,12.2758022 17.7419408,13.2639648 C17.3233441,14.2521274 16.7400537,15.1202009 15.9920694,15.8681851 C15.2440852,16.6161693 14.3760118,17.2011753 13.3878492,17.6232031 C12.3996865,18.0452309 11.3394704,18.2562448 10.2072007,18.2562448 Z M7.65444721,13.6242324 L12.7496608,13.6242324 C13.0584616,13.6242324 13.3003556,13.5384544 13.4753427,13.3668984 C13.6503299,13.1953424 13.7378234,12.9585951 13.7378234,12.6566565 L13.7378234,7.49968276 C13.7378234,7.19774418 13.6503299,6.96099688 13.4753427,6.78944087 C13.3003556,6.61788486 13.0584616,6.53210685 12.7496608,6.53210685 L7.65444721,6.53210685 
C7.33878414,6.53210685 7.09345904,6.61788486 6.91847191,6.78944087 C6.74348478,6.96099688 6.65599121,7.19774418 6.65599121,7.49968276 L6.65599121,12.6566565 C6.65599121,12.9585951 6.74348478,13.1953424 6.91847191,13.3668984 C7.09345904,13.5384544 7.33878414,13.6242324 7.65444721,13.6242324 Z' id='shape' fill='%23FF3B30' fill-rule='nonzero'%3E%3C/path%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色(默认) */ -#chuanhu_chatbot { - background-color: var(--chatbot-background-color-light) !important; - color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: var(--message-bot-background-color-light) !important; -} -[data-testid = "user"] { - background-color: var(--message-user-background-color-light) !important; -} -/* 暗色 */ -.dark #chuanhu_chatbot { - background-color: var(--chatbot-background-color-dark) !important; - color: var(--chatbot-color-dark) !important; -} -.dark [data-testid = "bot"] { - background-color: var(--message-bot-background-color-dark) !important; -} -.dark [data-testid = "user"] { - background-color: var(--message-user-background-color-dark) !important; -} - -/* 屏幕宽度大于等于500px的设备 */ -/* update on 2023.4.8: 高度的细致调整已写入JavaScript */ -@media screen and (min-width: 500px) { - #chuanhu_chatbot { - height: calc(100vh - 200px); - } - #chuanhu_chatbot .wrap { - max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) ); - } -} -/* 屏幕宽度小于500px的设备 */ -@media screen and (max-width: 499px) { - #chuanhu_chatbot { - height: calc(100vh - 140px); - } - #chuanhu_chatbot .wrap { - max-height: calc(100vh - 140px - var(--line-sm)*1rem - 2*var(--block-label-margin) ); - } - [data-testid = "bot"] { - max-width: 95% !important; - } - #app_title h1{ - letter-spacing: -1px; font-size: 22px; - } -} -#chuanhu_chatbot .wrap { - overflow-x: hidden; -} -/* 对话气泡 */ -.message { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} - -.message p { - margin-top: 0.6em !important; - margin-bottom: 0.6em !important; - font-size: 1.2em !important; -} -.message p:first-child { margin-top: 0 !important; } -.message p:last-of-type { margin-bottom: 0 !important; } - -.message .md-message { - display: block; - padding: 0 !important; -} -.message .raw-message { - display: block; - padding: 0 !important; - white-space: pre-wrap; -} -.raw-message.hideM, .md-message.hideM { - display: none; -} - -/* custom buttons */ -.chuanhu-btn { - border-radius: 5px; - /* background-color: #E6E6E6 !important; */ - color: rgba(120, 120, 120, 0.64) !important; - padding: 4px !important; - position: absolute; - right: -22px; - cursor: pointer !important; - transition: color .2s ease, background-color .2s ease; -} -.chuanhu-btn:hover { - background-color: rgba(167, 167, 167, 0.25) !important; - color: unset !important; -} -.chuanhu-btn:active { - background-color: rgba(167, 167, 167, 0.5) !important; -} -.chuanhu-btn:focus { - outline: none; -} -.copy-bot-btn { - /* top: 18px; */ - bottom: 0; -} 
-.toggle-md-btn { - /* top: 0; */ - bottom: 20px; -} -.copy-code-btn { - position: relative; - float: right; - font-size: 1em; - cursor: pointer; -} - -.message-wrap>div img{ - border-radius: 10px !important; -} - -/* history message */ -.wrap>.history-message { - padding: 10px !important; -} -.history-message { - /* padding: 0 !important; */ - opacity: 80%; - display: flex; - flex-direction: column; -} -.history-message>.history-message { - padding: 0 !important; -} -.history-message>.message-wrap { - padding: 0 !important; - margin-bottom: 16px; -} -.history-message>.message { - margin-bottom: 16px; -} -.wrap>.history-message::after { - content: ""; - display: block; - height: 2px; - background-color: var(--body-text-color-subdued); - margin-bottom: 10px; - margin-top: -10px; - clear: both; -} -.wrap>.history-message>:last-child::after { - content: "仅供查看"; - display: block; - text-align: center; - color: var(--body-text-color-subdued); - font-size: 0.8em; -} - -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 0.6em 2em 1em 0.2em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -.message pre { - padding: 0 !important; -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* 
Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/Altinas/vits-uma-genshin-honkais/Docker/Dockerfile b/spaces/Altinas/vits-uma-genshin-honkais/Docker/Dockerfile deleted file mode 100644 index 4d39cdf02a2ec151686cc1d61234bf723068fed8..0000000000000000000000000000000000000000 --- a/spaces/Altinas/vits-uma-genshin-honkais/Docker/Dockerfile +++ /dev/null @@ -1,12 +0,0 @@ -FROM python:3.9-bullseye -VOLUME ["/app"] -WORKDIR /app -# Set apt to Chinese mirror -RUN sed -i 's/deb.debian.org/mirrors.ustc.edu.cn/g' /etc/apt/sources.list -RUN apt-get update && apt-get -y install cmake git -RUN git clone https://huggingface.co/spaces/ikechan8370/vits-uma-genshin-honkai -WORKDIR /app/vits-uma-genshin-honkai -RUN sed -i "s/\.launch()/\.launch(server_name=\"0.0.0.0\")/" /app/vits-uma-genshin-honkai/app.py -ADD vits.sh /app/vits.sh -EXPOSE 7860 -ENTRYPOINT [ "/app/vits.sh" ] \ No newline at end of file diff --git a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/monotonic_align/setup.py b/spaces/Alycer/VITS-Umamusume-voice-synthesizer/monotonic_align/setup.py deleted file mode 100644 index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize 
-import numpy - -setup( - name = 'monotonic_align', - ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_lora_safetensor_to_diffusers.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_lora_safetensor_to_diffusers.py deleted file mode 100644 index f8e05d62bd2ac35cad31e750ba590afec7f614e6..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_lora_safetensor_to_diffusers.py +++ /dev/null @@ -1,128 +0,0 @@ -# coding=utf-8 -# Copyright 2023, Haofan Wang, Qixun Wang, All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -""" Conversion script for the LoRA's safetensors checkpoints. """ - -import argparse - -import torch -from safetensors.torch import load_file - -from diffusers import StableDiffusionPipeline - - -def convert(base_model_path, checkpoint_path, LORA_PREFIX_UNET, LORA_PREFIX_TEXT_ENCODER, alpha): - # load base model - pipeline = StableDiffusionPipeline.from_pretrained(base_model_path, torch_dtype=torch.float32) - - # load LoRA weight from .safetensors - state_dict = load_file(checkpoint_path) - - visited = [] - - # directly update weight in diffusers model - for key in state_dict: - # it is suggested to print out the key, it usually will be something like below - # "lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_down.weight" - - # as we have set the alpha beforehand, so just skip - if ".alpha" in key or key in visited: - continue - - if "text" in key: - layer_infos = key.split(".")[0].split(LORA_PREFIX_TEXT_ENCODER + "_")[-1].split("_") - curr_layer = pipeline.text_encoder - else: - layer_infos = key.split(".")[0].split(LORA_PREFIX_UNET + "_")[-1].split("_") - curr_layer = pipeline.unet - - # find the target layer - temp_name = layer_infos.pop(0) - while len(layer_infos) > -1: - try: - curr_layer = curr_layer.__getattr__(temp_name) - if len(layer_infos) > 0: - temp_name = layer_infos.pop(0) - elif len(layer_infos) == 0: - break - except Exception: - if len(temp_name) > 0: - temp_name += "_" + layer_infos.pop(0) - else: - temp_name = layer_infos.pop(0) - - pair_keys = [] - if "lora_down" in key: - pair_keys.append(key.replace("lora_down", "lora_up")) - pair_keys.append(key) - else: - pair_keys.append(key) - pair_keys.append(key.replace("lora_up", "lora_down")) - - # update weight - if len(state_dict[pair_keys[0]].shape) == 4: - weight_up = state_dict[pair_keys[0]].squeeze(3).squeeze(2).to(torch.float32) - weight_down = state_dict[pair_keys[1]].squeeze(3).squeeze(2).to(torch.float32) - curr_layer.weight.data += alpha * torch.mm(weight_up, weight_down).unsqueeze(2).unsqueeze(3) - else: - weight_up = state_dict[pair_keys[0]].to(torch.float32) - weight_down = state_dict[pair_keys[1]].to(torch.float32) - curr_layer.weight.data += alpha * torch.mm(weight_up, weight_down) - - # update visited list - for item in pair_keys: - visited.append(item) - - return pipeline - - -if 
__name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--base_model_path", default=None, type=str, required=True, help="Path to the base model in diffusers format." - ) - parser.add_argument( - "--checkpoint_path", default=None, type=str, required=True, help="Path to the checkpoint to convert." - ) - parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.") - parser.add_argument( - "--lora_prefix_unet", default="lora_unet", type=str, help="The prefix of UNet weight in safetensors" - ) - parser.add_argument( - "--lora_prefix_text_encoder", - default="lora_te", - type=str, - help="The prefix of text encoder weight in safetensors", - ) - parser.add_argument("--alpha", default=0.75, type=float, help="The merging ratio in W = W0 + alpha * deltaW") - parser.add_argument( - "--to_safetensors", action="store_true", help="Whether to store pipeline in safetensors format or not." - ) - parser.add_argument("--device", type=str, help="Device to use (e.g. cpu, cuda:0, cuda:1, etc.)") - - args = parser.parse_args() - - base_model_path = args.base_model_path - checkpoint_path = args.checkpoint_path - dump_path = args.dump_path - lora_prefix_unet = args.lora_prefix_unet - lora_prefix_text_encoder = args.lora_prefix_text_encoder - alpha = args.alpha - - pipe = convert(base_model_path, checkpoint_path, lora_prefix_unet, lora_prefix_text_encoder, alpha) - - pipe = pipe.to(args.device) - pipe.save_pretrained(args.dump_path, safe_serialization=args.to_safetensors) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/commands/diffusers_cli.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/commands/diffusers_cli.py deleted file mode 100644 index 2016fc19f557fd539782ca2181ec2fe74026340a..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/commands/diffusers_cli.py +++ /dev/null @@ -1,43 +0,0 @@ -#!/usr/bin/env python -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from argparse import ArgumentParser - -from .env import EnvironmentCommand -from .fp16_safetensors import FP16SafetensorsCommand - - -def main(): - parser = ArgumentParser("Diffusers CLI tool", usage="diffusers-cli []") - commands_parser = parser.add_subparsers(help="diffusers-cli command helpers") - - # Register commands - EnvironmentCommand.register_subcommand(commands_parser) - FP16SafetensorsCommand.register_subcommand(commands_parser) - - # Let's go - args = parser.parse_args() - - if not hasattr(args, "func"): - parser.print_help() - exit(1) - - # Run - service = args.func(args) - service.run() - - -if __name__ == "__main__": - main() diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py deleted file mode 100644 index 508085094b16afcd477a664b597d0551d720239d..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py +++ /dev/null @@ -1,553 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -import warnings -from typing import Callable, List, Optional, Union - -import numpy as np -import PIL -import torch -from transformers import CLIPImageProcessor, CLIPTokenizer - -from ...configuration_utils import FrozenDict -from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler -from ...utils import PIL_INTERPOLATION, deprecate, logging -from ..onnx_utils import ORT_TO_NP_TYPE, OnnxRuntimeModel -from ..pipeline_utils import DiffusionPipeline -from . import StableDiffusionPipelineOutput - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.preprocess with 8->64 -def preprocess(image): - warnings.warn( - ( - "The preprocess method is deprecated and will be removed in a future version. 
Please" - " use VaeImageProcessor.preprocess instead" - ), - FutureWarning, - ) - if isinstance(image, torch.Tensor): - return image - elif isinstance(image, PIL.Image.Image): - image = [image] - - if isinstance(image[0], PIL.Image.Image): - w, h = image[0].size - w, h = (x - x % 64 for x in (w, h)) # resize to integer multiple of 64 - - image = [np.array(i.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]))[None, :] for i in image] - image = np.concatenate(image, axis=0) - image = np.array(image).astype(np.float32) / 255.0 - image = image.transpose(0, 3, 1, 2) - image = 2.0 * image - 1.0 - image = torch.from_numpy(image) - elif isinstance(image[0], torch.Tensor): - image = torch.cat(image, dim=0) - return image - - -class OnnxStableDiffusionImg2ImgPipeline(DiffusionPipeline): - r""" - Pipeline for text-guided image to image generation using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - vae_encoder: OnnxRuntimeModel - vae_decoder: OnnxRuntimeModel - text_encoder: OnnxRuntimeModel - tokenizer: CLIPTokenizer - unet: OnnxRuntimeModel - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler] - safety_checker: OnnxRuntimeModel - feature_extractor: CLIPImageProcessor - - _optional_components = ["safety_checker", "feature_extractor"] - _is_onnx = True - - def __init__( - self, - vae_encoder: OnnxRuntimeModel, - vae_decoder: OnnxRuntimeModel, - text_encoder: OnnxRuntimeModel, - tokenizer: CLIPTokenizer, - unet: OnnxRuntimeModel, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - safety_checker: OnnxRuntimeModel, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. 
Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." - " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - self.register_modules( - vae_encoder=vae_encoder, - vae_decoder=vae_decoder, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_onnx_stable_diffusion.OnnxStableDiffusionPipeline._encode_prompt - def _encode_prompt( - self, - prompt: Union[str, List[str]], - num_images_per_prompt: Optional[int], - do_classifier_free_guidance: bool, - negative_prompt: Optional[str], - prompt_embeds: Optional[np.ndarray] = None, - negative_prompt_embeds: Optional[np.ndarray] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`): - prompt to be encoded - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. 
Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - prompt_embeds (`np.ndarray`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`np.ndarray`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - """ - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="np", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="np").input_ids - - if not np.array_equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - prompt_embeds = self.text_encoder(input_ids=text_input_ids.astype(np.int32))[0] - - prompt_embeds = np.repeat(prompt_embeds, num_images_per_prompt, axis=0) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] * batch_size - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="np", - ) - negative_prompt_embeds = self.text_encoder(input_ids=uncond_input.input_ids.astype(np.int32))[0] - - if do_classifier_free_guidance: - negative_prompt_embeds = np.repeat(negative_prompt_embeds, num_images_per_prompt, axis=0) - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = np.concatenate([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - def check_inputs( - self, - prompt: Union[str, List[str]], - callback_steps: int, - negative_prompt: Optional[Union[str, List[str]]] = None, - prompt_embeds: Optional[np.ndarray] = None, - negative_prompt_embeds: Optional[np.ndarray] = None, - ): - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - def __call__( - self, - prompt: Union[str, List[str]], - image: Union[np.ndarray, PIL.Image.Image] = None, - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: Optional[float] = 0.0, - generator: Optional[np.random.RandomState] = None, - prompt_embeds: Optional[np.ndarray] = None, - negative_prompt_embeds: Optional[np.ndarray] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, np.ndarray], None]] = None, - callback_steps: int = 1, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image (`np.ndarray` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image` - will be used as a starting point, adding more noise to it the larger the `strength`. The number of - denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will - be maximum and the denoising process will run for the full number of iterations specified in - `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. 
- num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. This parameter will be modulated by `strength`. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`np.random.RandomState`, *optional*): - A np.random.RandomState to make generation deterministic. - prompt_embeds (`np.ndarray`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`np.ndarray`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - - # check inputs. 
Raise error if not correct - self.check_inputs(prompt, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds) - - # define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}") - - if generator is None: - generator = np.random - - # set timesteps - self.scheduler.set_timesteps(num_inference_steps) - - image = preprocess(image).cpu().numpy() - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - prompt_embeds = self._encode_prompt( - prompt, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - ) - - latents_dtype = prompt_embeds.dtype - image = image.astype(latents_dtype) - # encode the init image into latents and scale the latents - init_latents = self.vae_encoder(sample=image)[0] - init_latents = 0.18215 * init_latents - - if isinstance(prompt, str): - prompt = [prompt] - if len(prompt) > init_latents.shape[0] and len(prompt) % init_latents.shape[0] == 0: - # expand init_latents for batch_size - deprecation_message = ( - f"You have passed {len(prompt)} text prompts (`prompt`), but only {init_latents.shape[0]} initial" - " images (`image`). Initial images are now duplicating to match the number of text prompts. Note" - " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update" - " your script to pass as many initial images as text prompts to suppress this warning." - ) - deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False) - additional_image_per_prompt = len(prompt) // init_latents.shape[0] - init_latents = np.concatenate([init_latents] * additional_image_per_prompt * num_images_per_prompt, axis=0) - elif len(prompt) > init_latents.shape[0] and len(prompt) % init_latents.shape[0] != 0: - raise ValueError( - f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {len(prompt)} text prompts." - ) - else: - init_latents = np.concatenate([init_latents] * num_images_per_prompt, axis=0) - - # get the original timestep using init_timestep - offset = self.scheduler.config.get("steps_offset", 0) - init_timestep = int(num_inference_steps * strength) + offset - init_timestep = min(init_timestep, num_inference_steps) - - timesteps = self.scheduler.timesteps.numpy()[-init_timestep] - timesteps = np.array([timesteps] * batch_size * num_images_per_prompt) - - # add noise to latents using the timesteps - noise = generator.randn(*init_latents.shape).astype(latents_dtype) - init_latents = self.scheduler.add_noise( - torch.from_numpy(init_latents), torch.from_numpy(noise), torch.from_numpy(timesteps) - ) - init_latents = init_latents.numpy() - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - latents = init_latents - - t_start = max(num_inference_steps - init_timestep + offset, 0) - timesteps = self.scheduler.timesteps[t_start:].numpy() - - timestep_dtype = next( - (input.type for input in self.unet.model.get_inputs() if input.name == "timestep"), "tensor(float)" - ) - timestep_dtype = ORT_TO_NP_TYPE[timestep_dtype] - - for i, t in enumerate(self.progress_bar(timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(torch.from_numpy(latent_model_input), t) - latent_model_input = latent_model_input.cpu().numpy() - - # predict the noise residual - timestep = np.array([t], dtype=timestep_dtype) - noise_pred = self.unet(sample=latent_model_input, timestep=timestep, encoder_hidden_states=prompt_embeds)[ - 0 - ] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - scheduler_output = self.scheduler.step( - torch.from_numpy(noise_pred), t, torch.from_numpy(latents), **extra_step_kwargs - ) - latents = scheduler_output.prev_sample.numpy() - - # call the callback, if provided - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - latents = 1 / 0.18215 * latents - # image = self.vae_decoder(latent_sample=latents)[0] - # it seems likes there is a strange result for using half-precision vae decoder if batchsize>1 - image = np.concatenate( - [self.vae_decoder(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])] - ) - - image = np.clip(image / 2 + 0.5, 0, 1) - image = image.transpose((0, 2, 3, 1)) - - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor( - self.numpy_to_pil(image), return_tensors="np" - ).pixel_values.astype(image.dtype) - # safety_checker does not support batched inputs yet - images, has_nsfw_concept = [], [] - for i in range(image.shape[0]): - image_i, has_nsfw_concept_i = self.safety_checker( - clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1] - ) - images.append(image_i) - has_nsfw_concept.append(has_nsfw_concept_i[0]) - image = np.concatenate(images) - else: - has_nsfw_concept = None - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_unet_2d.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_unet_2d.py deleted file mode 100644 index bb5335ca30881f004ee26c7cc9cef020700cd5c7..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_unet_2d.py +++ /dev/null @@ -1,294 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import gc -import math -import unittest - -import torch - -from diffusers import UNet2DModel -from diffusers.utils import floats_tensor, logging, slow, torch_all_close, torch_device -from diffusers.utils.testing_utils import enable_full_determinism - -from .test_modeling_common import ModelTesterMixin, UNetTesterMixin - - -logger = logging.get_logger(__name__) - -enable_full_determinism() - - -class Unet2DModelTests(ModelTesterMixin, UNetTesterMixin, unittest.TestCase): - model_class = UNet2DModel - main_input_name = "sample" - - @property - def dummy_input(self): - batch_size = 4 - num_channels = 3 - sizes = (32, 32) - - noise = floats_tensor((batch_size, num_channels) + sizes).to(torch_device) - time_step = torch.tensor([10]).to(torch_device) - - return {"sample": noise, "timestep": time_step} - - @property - def input_shape(self): - return (3, 32, 32) - - @property - def output_shape(self): - return (3, 32, 32) - - def prepare_init_args_and_inputs_for_common(self): - init_dict = { - "block_out_channels": (32, 64), - "down_block_types": ("DownBlock2D", "AttnDownBlock2D"), - "up_block_types": ("AttnUpBlock2D", "UpBlock2D"), - "attention_head_dim": 3, - "out_channels": 3, - "in_channels": 3, - "layers_per_block": 2, - "sample_size": 32, - } - inputs_dict = self.dummy_input - return init_dict, inputs_dict - - -class UNetLDMModelTests(ModelTesterMixin, UNetTesterMixin, unittest.TestCase): - model_class = UNet2DModel - main_input_name = "sample" - - @property - def dummy_input(self): - batch_size = 4 - num_channels = 4 - sizes = (32, 32) - - noise = floats_tensor((batch_size, num_channels) + sizes).to(torch_device) - time_step = torch.tensor([10]).to(torch_device) - - return {"sample": noise, "timestep": time_step} - - @property - def input_shape(self): - return (4, 32, 32) - - @property - def output_shape(self): - return (4, 32, 32) - - def prepare_init_args_and_inputs_for_common(self): - init_dict = { - "sample_size": 32, - "in_channels": 4, - "out_channels": 4, - "layers_per_block": 2, - "block_out_channels": (32, 64), - "attention_head_dim": 32, - "down_block_types": ("DownBlock2D", "DownBlock2D"), - "up_block_types": ("UpBlock2D", "UpBlock2D"), - } - inputs_dict = self.dummy_input - return init_dict, inputs_dict - - def test_from_pretrained_hub(self): - model, loading_info = UNet2DModel.from_pretrained("fusing/unet-ldm-dummy-update", output_loading_info=True) - - self.assertIsNotNone(model) - self.assertEqual(len(loading_info["missing_keys"]), 0) - - model.to(torch_device) - image = model(**self.dummy_input).sample - - assert image is not None, "Make sure output is not None" - - @unittest.skipIf(torch_device != "cuda", "This test is supposed to run on GPU") - def test_from_pretrained_accelerate(self): - model, _ = UNet2DModel.from_pretrained("fusing/unet-ldm-dummy-update", output_loading_info=True) - model.to(torch_device) - image = model(**self.dummy_input).sample - - assert image is not None, "Make sure output is not None" - - @unittest.skipIf(torch_device != "cuda", "This test is supposed to run on GPU") - def test_from_pretrained_accelerate_wont_change_results(self): - # 
by defautl model loading will use accelerate as `low_cpu_mem_usage=True` - model_accelerate, _ = UNet2DModel.from_pretrained("fusing/unet-ldm-dummy-update", output_loading_info=True) - model_accelerate.to(torch_device) - model_accelerate.eval() - - noise = torch.randn( - 1, - model_accelerate.config.in_channels, - model_accelerate.config.sample_size, - model_accelerate.config.sample_size, - generator=torch.manual_seed(0), - ) - noise = noise.to(torch_device) - time_step = torch.tensor([10] * noise.shape[0]).to(torch_device) - - arr_accelerate = model_accelerate(noise, time_step)["sample"] - - # two models don't need to stay in the device at the same time - del model_accelerate - torch.cuda.empty_cache() - gc.collect() - - model_normal_load, _ = UNet2DModel.from_pretrained( - "fusing/unet-ldm-dummy-update", output_loading_info=True, low_cpu_mem_usage=False - ) - model_normal_load.to(torch_device) - model_normal_load.eval() - arr_normal_load = model_normal_load(noise, time_step)["sample"] - - assert torch_all_close(arr_accelerate, arr_normal_load, rtol=1e-3) - - def test_output_pretrained(self): - model = UNet2DModel.from_pretrained("fusing/unet-ldm-dummy-update") - model.eval() - model.to(torch_device) - - noise = torch.randn( - 1, - model.config.in_channels, - model.config.sample_size, - model.config.sample_size, - generator=torch.manual_seed(0), - ) - noise = noise.to(torch_device) - time_step = torch.tensor([10] * noise.shape[0]).to(torch_device) - - with torch.no_grad(): - output = model(noise, time_step).sample - - output_slice = output[0, -1, -3:, -3:].flatten().cpu() - # fmt: off - expected_output_slice = torch.tensor([-13.3258, -20.1100, -15.9873, -17.6617, -23.0596, -17.9419, -13.3675, -16.1889, -12.3800]) - # fmt: on - - self.assertTrue(torch_all_close(output_slice, expected_output_slice, rtol=1e-3)) - - -class NCSNppModelTests(ModelTesterMixin, UNetTesterMixin, unittest.TestCase): - model_class = UNet2DModel - main_input_name = "sample" - - @property - def dummy_input(self, sizes=(32, 32)): - batch_size = 4 - num_channels = 3 - - noise = floats_tensor((batch_size, num_channels) + sizes).to(torch_device) - time_step = torch.tensor(batch_size * [10]).to(dtype=torch.int32, device=torch_device) - - return {"sample": noise, "timestep": time_step} - - @property - def input_shape(self): - return (3, 32, 32) - - @property - def output_shape(self): - return (3, 32, 32) - - def prepare_init_args_and_inputs_for_common(self): - init_dict = { - "block_out_channels": [32, 64, 64, 64], - "in_channels": 3, - "layers_per_block": 1, - "out_channels": 3, - "time_embedding_type": "fourier", - "norm_eps": 1e-6, - "mid_block_scale_factor": math.sqrt(2.0), - "norm_num_groups": None, - "down_block_types": [ - "SkipDownBlock2D", - "AttnSkipDownBlock2D", - "SkipDownBlock2D", - "SkipDownBlock2D", - ], - "up_block_types": [ - "SkipUpBlock2D", - "SkipUpBlock2D", - "AttnSkipUpBlock2D", - "SkipUpBlock2D", - ], - } - inputs_dict = self.dummy_input - return init_dict, inputs_dict - - @slow - def test_from_pretrained_hub(self): - model, loading_info = UNet2DModel.from_pretrained("google/ncsnpp-celebahq-256", output_loading_info=True) - self.assertIsNotNone(model) - self.assertEqual(len(loading_info["missing_keys"]), 0) - - model.to(torch_device) - inputs = self.dummy_input - noise = floats_tensor((4, 3) + (256, 256)).to(torch_device) - inputs["sample"] = noise - image = model(**inputs) - - assert image is not None, "Make sure output is not None" - - @slow - def test_output_pretrained_ve_mid(self): - model = 
UNet2DModel.from_pretrained("google/ncsnpp-celebahq-256") - model.to(torch_device) - - batch_size = 4 - num_channels = 3 - sizes = (256, 256) - - noise = torch.ones((batch_size, num_channels) + sizes).to(torch_device) - time_step = torch.tensor(batch_size * [1e-4]).to(torch_device) - - with torch.no_grad(): - output = model(noise, time_step).sample - - output_slice = output[0, -3:, -3:, -1].flatten().cpu() - # fmt: off - expected_output_slice = torch.tensor([-4842.8691, -6499.6631, -3800.1953, -7978.2686, -10980.7129, -20028.8535, 8148.2822, 2342.2905, 567.7608]) - # fmt: on - - self.assertTrue(torch_all_close(output_slice, expected_output_slice, rtol=1e-2)) - - def test_output_pretrained_ve_large(self): - model = UNet2DModel.from_pretrained("fusing/ncsnpp-ffhq-ve-dummy-update") - model.to(torch_device) - - batch_size = 4 - num_channels = 3 - sizes = (32, 32) - - noise = torch.ones((batch_size, num_channels) + sizes).to(torch_device) - time_step = torch.tensor(batch_size * [1e-4]).to(torch_device) - - with torch.no_grad(): - output = model(noise, time_step).sample - - output_slice = output[0, -3:, -3:, -1].flatten().cpu() - # fmt: off - expected_output_slice = torch.tensor([-0.0325, -0.0900, -0.0869, -0.0332, -0.0725, -0.0270, -0.0101, 0.0227, 0.0256]) - # fmt: on - - self.assertTrue(torch_all_close(output_slice, expected_output_slice, rtol=1e-2)) - - def test_forward_with_norm_groups(self): - # not required for this model - pass diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/ssd300.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/ssd300.py deleted file mode 100644 index 1b839ad43fd14cd612ceed312758e9ce75a270bc..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/ssd300.py +++ /dev/null @@ -1,50 +0,0 @@ -# model settings -input_size = 300 -model = dict( - type='SingleStageDetector', - pretrained='open-mmlab://vgg16_caffe', - backbone=dict( - type='SSDVGG', - input_size=input_size, - depth=16, - with_last_pool=False, - ceil_mode=True, - out_indices=(3, 4), - out_feature_indices=(22, 34), - l2_norm_scale=20), - neck=None, - bbox_head=dict( - type='SSDHead', - in_channels=(512, 1024, 512, 256, 256, 256), - num_classes=80, - anchor_generator=dict( - type='SSDAnchorGenerator', - scale_major=False, - input_size=input_size, - basesize_ratio_range=(0.15, 0.9), - strides=[8, 16, 32, 64, 100, 300], - ratios=[[2], [2, 3], [2, 3], [2, 3], [2], [2]]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[0.1, 0.1, 0.2, 0.2])), - train_cfg=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0., - ignore_iof_thr=-1, - gt_max_assign_all=False), - smoothl1_beta=1., - allowed_border=-1, - pos_weight=-1, - neg_pos_ratio=3, - debug=False), - test_cfg=dict( - nms_pre=1000, - nms=dict(type='nms', iou_threshold=0.45), - min_bbox_size=0, - score_thr=0.02, - max_per_img=200)) -cudnn_benchmark = True diff --git a/spaces/Andy1621/uniformer_image_detection/configs/reppoints/reppoints_partial_minmax_r50_fpn_gn-neck+head_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/reppoints/reppoints_partial_minmax_r50_fpn_gn-neck+head_1x_coco.py deleted file mode 100644 index 9a63bd0862be6d5f363c5d481bade3e8e2e8433a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/reppoints/reppoints_partial_minmax_r50_fpn_gn-neck+head_1x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = 
'./reppoints_moment_r50_fpn_gn-neck+head_1x_coco.py' -model = dict(bbox_head=dict(transform_method='partial_minmax')) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_mstrain_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_mstrain_2x_coco.py deleted file mode 100644 index 6078bb98cacc04da23dcb7a661047902e0adefb3..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_mstrain_2x_coco.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = './vfnet_r50_fpn_1x_coco.py' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 480), (1333, 960)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/anchor/point_generator.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/anchor/point_generator.py deleted file mode 100644 index e6fbd988c317992c092c68c827dc4c53223b4a4a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/anchor/point_generator.py +++ /dev/null @@ -1,37 +0,0 @@ -import torch - -from .builder import ANCHOR_GENERATORS - - -@ANCHOR_GENERATORS.register_module() -class PointGenerator(object): - - def _meshgrid(self, x, y, row_major=True): - xx = x.repeat(len(y)) - yy = y.view(-1, 1).repeat(1, len(x)).view(-1) - if row_major: - return xx, yy - else: - return yy, xx - - def grid_points(self, featmap_size, stride=16, device='cuda'): - feat_h, feat_w = featmap_size - shift_x = torch.arange(0., feat_w, device=device) * stride - shift_y = torch.arange(0., feat_h, device=device) * stride - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - stride = shift_x.new_full((shift_xx.shape[0], ), stride) - shifts = torch.stack([shift_xx, shift_yy, stride], dim=-1) - all_points = shifts.to(device) - return all_points - - def valid_flags(self, featmap_size, valid_size, device='cuda'): - feat_h, feat_w = featmap_size - valid_h, valid_w = valid_size - assert valid_h <= feat_h and valid_w <= feat_w - valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device) - valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device) - valid_x[:valid_w] = 1 - valid_y[:valid_h] = 1 - valid_xx, valid_yy = self._meshgrid(valid_x, valid_y) - valid = valid_xx & valid_yy - return valid diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_80k_pascal_context_59.py 
b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_80k_pascal_context_59.py deleted file mode 100644 index d4065ec05c5c12e2b24a1433b38580b3c640d6be..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_80k_pascal_context_59.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = [ - '../_base_/models/pspnet_r50-d8.py', - '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_80k.py' -] -model = dict( - decode_head=dict(num_classes=59), - auxiliary_head=dict(num_classes=59), - test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320))) -optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/silero_tts/style.css b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/silero_tts/style.css deleted file mode 100644 index 2ab7aefbbfca19982414f13a76dfdd4324793903..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/silero_tts/style.css +++ /dev/null @@ -1,8 +0,0 @@ -.SDAP .hires_opts input[type="number"] { - width: 6em !important; -} - -/* silero_tts preview */ -.form:has(> #silero_preview_text) { - min-width: 75% -} diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/html_generator.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/html_generator.py deleted file mode 100644 index d3e122fdad2ae3d9cec829bf87c59af6290fc4c1..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/html_generator.py +++ /dev/null @@ -1,308 +0,0 @@ -import html -import os -import re -import time -from pathlib import Path - -import markdown -from PIL import Image, ImageOps - -from modules.utils import get_available_chat_styles - -# This is to store the paths to the thumbnails of the profile pictures -image_cache = {} - -with open(Path(__file__).resolve().parent / '../css/html_readable_style.css', 'r') as f: - readable_css = f.read() -with open(Path(__file__).resolve().parent / '../css/html_4chan_style.css', 'r') as css_f: - _4chan_css = css_f.read() -with open(Path(__file__).resolve().parent / '../css/html_instruct_style.css', 'r') as f: - instruct_css = f.read() - -# Custom chat styles -chat_styles = {} -for k in get_available_chat_styles(): - chat_styles[k] = open(Path(f'css/chat_style-{k}.css'), 'r').read() - -# Handle styles that derive from other styles -for k in chat_styles: - lines = chat_styles[k].split('\n') - input_string = lines[0] - match = re.search(r'chat_style-([a-z\-]*)\.css', input_string) - - if match: - style = match.group(1) - chat_styles[k] = chat_styles.get(style, '') + '\n\n' + '\n'.join(lines[1:]) - - -def fix_newlines(string): - string = string.replace('\n', '\n\n') - string = re.sub(r"\n{3,}", "\n\n", string) - string = string.strip() - return string - - -def replace_blockquote(m): - return m.group().replace('\n', '\n> ').replace('\\begin{blockquote}', '').replace('\\end{blockquote}', '') - - -def convert_to_markdown(string): - - # Blockquote - string = re.sub(r'(^|[\n])>', r'\1>', string) - pattern = re.compile(r'\\begin{blockquote}(.*?)\\end{blockquote}', re.DOTALL) - string = pattern.sub(replace_blockquote, string) - - # Code - string = string.replace('\\begin{code}', '```') - string = string.replace('\\end{code}', '```') - string = re.sub(r"(.)```", r"\1\n```", string) - - 
result = '' - is_code = False - for line in string.split('\n'): - if line.lstrip(' ').startswith('```'): - is_code = not is_code - - result += line - if is_code or line.startswith('|'): # Don't add an extra \n for tables or code - result += '\n' - else: - result += '\n\n' - - result = result.strip() - if is_code: - result += '\n```' # Unfinished code block - - # Unfinished list, like "\n1.". A |delete| string is added and then - # removed to force a
      or