diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download program decodare casetofoane auto Learn how to decode your car stereo with this simple program.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download program decodare casetofoane auto Learn how to decode your car stereo with this simple program.md deleted file mode 100644 index 62cf9786dec36ca2054ef21ac28ac40148c30336..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download program decodare casetofoane auto Learn how to decode your car stereo with this simple program.md +++ /dev/null @@ -1,130 +0,0 @@ - -

Download program decodare casetofoane auto: What is it and why do you need it?

-

If you own a car with a radio or a CD player, you may have encountered a situation where your device asks for a code after you disconnect the battery or change the car. This code is a security feature that prevents unauthorized use of your radio in case of theft or loss. However, if you don't have the code or you forget it, you may not be able to use your radio anymore.

-

This is where a program decodare casetofoane auto comes in handy. This is a software tool that can help you generate or recover the code for your radio based on its serial number or model. By using such a program, you can unlock your radio and enjoy your music again.

-

Download program decodare casetofoane auto


Download Zip ✑ ✑ ✑ https://byltly.com/2uKxc2



-

There are many benefits of using a program decodare casetofoane auto, such as:

- -

However, there are also some challenges or risks of decoding your radio without a program, such as:

- -

Therefore, it is important to use a reliable and safe program decodare casetofoane auto that can help you unlock your radio without any hassle.

-

How to find the best program decodare casetofoane auto for your car model?

-

There are many programs decodare casetofoane auto available online, but not all of them are suitable for your car model. Some programs may only work for certain brands or models, while others may not work at all. To find the best program for your car model, you need to consider some features or factors, such as:

- -

To help you choose a good program decodare casetofoane auto, here are some examples of popular or reliable programs for different car models:

| Car model | Radio type | Program name | Website |
| --- | --- | --- | --- |
| Ford | M-Serial | Ford M-Serial Radio Code Decoder | https://fordradiocode.eu/ |
| Renault | Clio, Megane, Scenic etc. | Renault Radio Code Generator | https://renaultradiocode.com/ |
| Volkswagen | RNS 310 / 315 / 510 etc. | VW Radio Code Calculator | https://www.vw-radio-code.com/ |
| Blaupunkt | Blaupunkt SC202 etc. | Blaupunkt Calculator | https://www.decodari-casetofoane-auto.ro/ |
| Philips | Philips Car 400 etc. | Philips Radio Code Generator | https://philipsradiocode.com/ |
-

Here are some tips or warnings for using a program decodare casetofoane auto safely and effectively:

- -

How to use a program decodare casetofoane auto to unlock your car radio?

-

To use a program decodare casetofoane auto to unlock your car radio, you need to follow these steps:

-
    -
  1. Find out the serial number and model of your radio. You can usually find them on a sticker on the side or back of the device, or on the screen after pressing certain buttons. You may also need to remove the radio from the dashboard to access them.
  2. Select and download a suitable program decodare casetofoane auto for your car model and radio type from a trusted website. Make sure it is compatible with your computer system and has good reviews.
  3. Install and run the program on your computer. Follow the instructions on how to connect your radio to your computer via USB cable, Bluetooth, Wi-Fi, etc. You may need to enter some information about your radio, such as its serial number, model and brand.
  4. Wait for the program to scan your radio and generate or recover the code. This may take from a few seconds to a few minutes, depending on the complexity of the algorithm and the speed of the connection.
  5. Enter the code on your radio using the buttons or knobs. Confirm and test whether it works. If not, try again with another code or another program.
  6. Enjoy your music!
-

If you encounter any problems or errors while using a program decodare casetofoane auto, here are some solutions you can try:

-

How to download software for unlocking car radios
-Best program for decoding car stereo codes
-Download free program to decode car radio serial numbers
-Where to find program for decodare casetofoane auto online
-Program decodare casetofoane auto compatible with Windows 10
-Download program decodare casetofoane auto for Android devices
-Program decodare casetofoane auto with lifetime updates
-How to use program decodare casetofoane auto step by step
-Program decodare casetofoane auto reviews and ratings
-Program decodare casetofoane auto download link and activation code
-Program decodare casetofoane auto for Ford, Renault, Volkswagen, etc.
-Program decodare casetofoane auto supported models and brands
-Program decodare casetofoane auto troubleshooting and error messages
-Program decodare casetofoane auto alternatives and competitors
-Program decodare casetofoane auto discount and coupon codes
-How to install program decodare casetofoane auto on your PC or laptop
-How to transfer program decodare casetofoane auto to your smartphone or tablet
-How to backup and restore program decodare casetofoane auto data and settings
-How to uninstall program decodare casetofoane auto from your device
-How to contact program decodare casetofoane auto customer support and service
-Program decodare casetofoane auto FAQs and tips
-Program decodare casetofoane auto features and benefits
-Program decodare casetofoane auto testimonials and success stories
-Program decodare casetofoane auto demo and trial version
-Program decodare casetofoane auto license and terms of use
-How to update program decodare casetofoane auto to the latest version
-How to customize program decodare casetofoane auto settings and preferences
-How to access program decodare casetofoane auto online dashboard and account
-How to share program decodare casetofoane auto with your friends and family
-How to get program decodare casetofoane auto for free or cheap
-How to solve common problems with program decodare casetofoane auto
-How to verify program decodare casetofoane auto authenticity and security
-How to speed up program decodare casetofoane auto performance and efficiency
-How to integrate program decodare casetofoane auto with other tools and apps
-How to learn more about program decodare casetofoane auto functionality and usage
-How to get the most out of program decodare casetofoane auto for your needs
-How to find the best deal on program decodare casetofoane auto online or offline
-How to compare program decodare casetofoane auto with other similar programs
-How to avoid scams and frauds with program decodare casetofoane auto downloads
-How to report bugs and issues with program decodare casetofoane auto software

- -

If a program decodare casetofoane auto does not work for your car model or radio type, or if you prefer not to use a program at all, here are some other ways to decode your car radio:

- -

Conclusion

-

In conclusion, a program decodare casetofoane auto is a software tool that can help you unlock your car radio by generating or recovering the code based on its serial number or model. It can save you time and money, and allow you to use your radio in any car. However, you need to be careful and choose a reliable and safe program that is compatible with your car model and radio type. You also need to follow the instructions and guidelines provided by the program carefully. If a program does not work or if you prefer not to use one, you can try other alternatives such as contacting your dealer, searching online, or buying a code or a service.

-

We hope this article has helped you understand what a program decodare casetofoane auto is and how to use it. If you have any questions or comments, please feel free to contact us. We would love to hear from you!

-

FAQs

-

Here are some frequently asked questions about program decodare casetofoane auto:

-
    -
  1. Q: Is it legal to use a program decodare casetofoane auto?
     A: It depends on the laws and regulations of your country and the terms of service of your radio manufacturer. In general, it is legal to use such a program if you own the radio and have the right to use it. However, it may be illegal if you do not own the radio, if you have stolen it, or if you intend to sell it.
  2. Q: Is it safe to use a program decodare casetofoane auto?
     A: It depends on the quality and security of the program and its source website. In general, it is safe if you download the program from a trusted website, scan it with antivirus software, back up your data, create a restore point, and follow the instructions carefully. It may be unsafe if you download it from an unknown website, open it without scanning it, install it without backing up your data, run it without creating a restore point, or enter incorrect information.
  3. Q: How much does it cost to use a program decodare casetofoane auto?
     A: It depends on the type and source of the program. Some websites offer free codes or solutions for decoding car radios, while others charge a fee for them.
  4. Q: How long does it take to use a program decodare casetofoane auto?
     A: It depends on the speed and complexity of the program and the connection. It usually takes from a few seconds to a few minutes with a fast, simple program and a good connection, and longer with a slow, complex program or a poor connection.
  5. Q: What are some examples of program decodare casetofoane auto?
     A: Some examples are Ford M-Serial Radio Code Decoder for Ford radios with M-Serial numbers, Renault Radio Code Generator for Renault radios (Clio, Megane, Scenic, etc.), VW Radio Code Calculator for Volkswagen radios (RNS 310 / 315 / 510, etc.), Blaupunkt Calculator for Blaupunkt radios (SC202, etc.), and Philips Radio Code Generator for Philips radios (Car 400, etc.).
-

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Ace Attorney Cracked Ipa Apps Discover the Amazing Story and Characters of the Game Series.md b/spaces/1gistliPinn/ChatGPT4/Examples/Ace Attorney Cracked Ipa Apps Discover the Amazing Story and Characters of the Game Series.md deleted file mode 100644 index 13b87c47b7ed91f215d70cdcbc195cad1829d976..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Ace Attorney Cracked Ipa Apps Discover the Amazing Story and Characters of the Game Series.md +++ /dev/null @@ -1,18 +0,0 @@ -
-


Modded/Hacked App: Ace Attorney Trilogy HD by CAPCOM Co., Ltd
Bundle ID: jp.co.capcom.gyakusaisetus
iTunes Store Link: -attorney-trilogy-hd/id365681816?uo=4&at=1010lce4

-

Wright thought back to the state of the murder weapon and recalled that the Time Keeper had been activated during the reception as part of the First Startup of Love. Wright recalled that this process involved two symbols of the bride and groom's love, and that Wyatt's pendant was supposed to be a component of the Time Keeper. He fitted the pendant into a spot near the bottom, inserted the Key of Love into the pendant, and turned it. Doing so opened the top of the Time Keeper to reveal a miniature scene of the happy couple, protected by a glass covering, which was cracked and stained with blood. Since everyone had been watching Sorin and Wyatt at the time, the only person with the opportunity for murder was Nichody. Panicked, Nichody began frantically operating on his FXR-UPR mech, saying he could repair anything, until he dropped his tools, cursing himself for having listened to Selena when she asked him to save Sorin instead of her. Clutching at his chest, Nichody collapsed.

-

ace attorney cracked ipa apps


Download →→→ https://imgfil.com/2uxZZO



-

Court Is Back In Session. Star as rookie defense attorney Apollo Justice as he visits crime scenes, questions key witnesses and collects vital evidence before stepping into the courtroom to prove his client's innocence.

-

Features:
- All-new high-resolution graphics
- A new touch screen interface
- Interactive forensic testing mini-games that allow players to reveal hidden clues by dusting for prints, testing for traces of blood, and other exciting techniques
- Two distinct gameplay segments:
  - Investigation phase: survey crime scenes, interview witnesses and gather forensic evidence that will be used in court
  - Trial phase: present findings from the investigation to support your case, listen to testimonies and examine witnesses
- An eclectic cast of characters:
  - Apollo Justice: Stepping into the shoes of Phoenix Wright, the new rookie attorney leads the series into an exciting next chapter
  - Klavier Gavin: Lead prosecutor, Apollo's nemesis and rock star legend
  - Kristoph Gavin: The coolest defense attorney on the judicial circuit, and Apollo's mentor
  - Trucy: A mysterious magician and Apollo's assistant
-

While both the SYSTEM_ALERT_WINDOW and the BIND_ACCESSIBILITY_SERVICE Android permissions have been abused individually (e.g., in UI redressing attacks, accessibility attacks), previous attacks based on these permissions failed to completely control the UI feedback loop and thus either rely on vanishing side-channels to time the appearance of overlay UI, cannot respond properly to user input, or make the attacks literally visible. In this work, we demonstrate how combining the capabilities of these permissions leads to complete control of the UI feedback loop and creates devastating and stealthy attacks. In particular, we demonstrate how an app with these two permissions can launch a variety of stealthy, powerful attacks, ranging from stealing user's login credentials and security PIN, to the silent installation of a God-like app with all permissions enabled. To make things even worse, we note that when installing an app targeting a recent Android SDK, the list of its required permissions is not shown to the user and that these attacks can be carried out without needing to lure the user to knowingly enable any permission, thus leaving him completely unsuspecting. In fact, we found that the SYSTEM_ALERT_WINDOW permission is automatically granted for apps installed from the Play Store and, even though the BIND_ACCESSIBILITY_SERVICE is not automatically granted, our experiment shows that it is very easy to lure users to unknowingly grant that permission by abusing capabilities from the SYSTEM_ALERT_WINDOW permission. We also found that it is straightforward to get a proof-of-concept app requiring both permissions accepted on the official store. We evaluated the practicality of these attacks by performing a user study: none of the 20 human subjects that took part of the experiment even suspected they had been attacked. We conclude with a number of observations and best-practices that Google and developers can adopt to secure the Android GUI.

-

Despite all predictions, native Desktop apps are back. After years porting stand-alone apps to the web, we are witnessing an inverse trend. Many companies have started providing native desktop apps built using the same technologies as their web counterparts. In this trend, Github's Electron has become a popular framework to build cross-platform desktop apps with JavaScript, HTML, and CSS. While it seems to be easy, embedding a webapp in a self-contained web environment (Chromium, Node.Js) introduces new security challenges.

In this presentation, we will illustrate Electron's security model and describe current isolation mechanisms to prevent untrusted content from using Node.js primitives. Electron's IPC messaging, preloading and other internals will be comprehensively discussed. BrowserWindow and WebView security-relevant options will be also analyzed, together with design-level weaknesses and implementation bugs in Electron-based applications.

As part of our study of Electron security, we have mapped the overall attack surface and derived a comprehensive checklist of anti-patterns. A new tool (electronegativity) to facilitate testing of Electron-based apps will be released.

-

The law affords unique protections to communications between a lawyer and client, commonly referred to as the "attorney-client privilege." This tool is indispensable because a lawyer can best advocate for a client when the client is free to disclose both the good and the bad. The law affords similar protections to communications between a physician/therapist and patient.

Cybersecurity professionals have no equivalent. This is true despite the fact that cybersecurity professionals are regularly entrusted with more sensitive information (about an individual/company) than what is entrusted to a lawyer or doctor. Security consultants can hold their clients' darkest secrets, or perhaps information that could "bring down" the company. These professionals are asked to find flaws, infiltrate networks, gather sensitive data, and document exactly how it happened, all-the-while contemplating how to use the information to the worst detriment of the target.

Although security consultants have no straightforward legal privilege for protecting client data, they may have the best mechanism of all: White Hat Privilege. By using this term, the speakers submit that a white hat professional is perhaps able to utilize technical savvy to implement technological solutions to the problem of protecting client data while staying within the confines of the law.

In this talk, we will examine the legal landscape for cybersecurity professionals seeking to safeguard a clients' sensitive client data. We will cover issues including contract formation, risk allocation, and other legal issues that arise during formation of services contracts. We will pivot to legal regimes for handling PII, cross-border data transfers, IP rights, and export-control issues. And because security professionals are not static beings, we will also examine border crossings, including authority of TSA/Customs to search and seize devices that might hold client data. While examining these issues, where possible, we will discuss potential technological solutions to legal problems.

-

- Download Ace Attorney: Dual Destinies mod for android phone apk: Click the download button on the Android device that corresponds to your phone's operating system at the top of this page! Here EN.VNMOD.NET commit to bring the file download link ace-attorney-dual-destinies-hack_mod.apk & full other version, the most accurate from the publisher CAPCOM CO., LTD..

-

- Download Ace Attorney: Dual Destinies mod for iphone ios phone: Click the download button to your iphone then follow the instructions to download the file ace-attorney-dual-destinies-hack_mod.ipa for IPhone IOS phone. Install without jailbreak.

-

You play as a young boy who must head out to stay with his aunt, who, as the title implies, happens to be a powerful witch. Unfortunately, his aunt isn't all that she is cracked up to be. While she is powerful, she is also somewhat of an outcast. Together, the two, and her cat, must explore various lands while solving puzzles and figuring out why darkness hangs over the kingdom.

-

-

Ace Attorney Investigations - Miles Edgeworth APK is definitely a great Adventure app for Android, and has been already downloaded 15341 times here at Sbenny.com! You'll love its gameplay for sure and we hope you'll enjoy it! If you have some spare moments, please scroll down and review this app by giving a valuable feedback and sharing your experience about Ace Attorney Investigations - Miles Edgeworth APK, to help people from all around the world to know what you think about it.
If you love Adventure apps for Android like we do, you'll be glad to know we have thousand of similar games and apps, just click here to find our full collection of Adventure apps for Android!

-

Always run more like a family business than a blue-chip corporate empire, the private Trump Organization has operated free from the oversight of independent board members or pesky shareholders. But now that secrecy has cracked.

-

Yet here were are, three years later, and 3DS is doing quite well for itself. While it may not command the enormous marketshare of its esteemed predecessor, 3DS's current success does at least give Nintendo a much-needed fallback position as it struggles to make a viable business of the Wii U. Oh, and also, it makes for a ton of great games: First-party hits, third-party creations from Japan, classic games via Virtual Console, and a healthy selection of entertaining independent software. To mark the occasion, USgamer's biggest 3DS fans have reflected on their feelings about the system... and cracked open our Activity Logs to confess our most-played titles.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/FULLIObitDriverBoosterPro7426810Crack.md b/spaces/1gistliPinn/ChatGPT4/Examples/FULLIObitDriverBoosterPro7426810Crack.md deleted file mode 100644 index 0935f36d8430de55ed505db9e40b9b58741da14d..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/FULLIObitDriverBoosterPro7426810Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

FULLIObitDriverBoosterPro7426810Crack


Download Zip ··· https://imgfil.com/2uxZkI



-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Free Download Ekahau Site Survey.md b/spaces/1gistliPinn/ChatGPT4/Examples/Free Download Ekahau Site Survey.md deleted file mode 100644 index 25df815e9bf54321bd58f684e9fb9b16883bdbf4..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Free Download Ekahau Site Survey.md +++ /dev/null @@ -1,29 +0,0 @@ - -

Sorry, but Ekahau Site Survey for Mac does not have a direct download. Using the link below and downloading the required application from the developer's site was possible when we last checked. We cannot confirm if there is a free download of this app available. FDMLib cannot ensure the security of software that is hosted on external sites.

-

Free download ekahau site survey


DOWNLOAD ✏ ✏ ✏ https://imgfil.com/2uy1Tj



-

About the download, Ekahau HeatMapper is a program that takes up less free space than many programs in the Networking Software category. It is frequently downloaded in the United States, Hungary, and China.

-

Ekahau HeatMap is the latest "must have" Internet Marketing Tool for all the new breed of website owners who want to carve a niche for themselves and expand their market share. Ekahau has made it very easy to use their free website meter, which you can find at the link below, but what is this "must have" tool that everyone is raving about, and why do so many people think that it will help you increase your profits? For starters, it is extremely easy to use and understand. All you need to do is open the link below this article, then just follow the instructions on screen, and within minutes you are ready to begin your own website meter viewing, which can bring you great potential traffic, and thereby increased sales, for your business. And if you are looking for a great "make money online" tool, this is the tool for you!

-

The Ekahau HeatMapper: Website Metering for FREE Wireless Networks. Website Metering is a free wi-fi site survey tool to help website owners to determine the best place to connect their internet connections through various wireless networks. It helps determine the best internet connection for your site, based on where your visitors are coming from, what they are typing in to see your site, what time of day they visit, what their computer language is, and what browser type they use, among other factors. Website meter is an interactive tool to help you determine what the optimum location for you is, and how to connect with the closest hotspots. The site survey tool also helps you determine the most cost effective route for connecting to users in your local area.

-

In addition, the Ekahau HeatMapper can be used by webmasters who want to know which areas of their site need the most improvement. For example, if you have a restaurant that receives a lot of traffic, but you notice that the majority of these visitors have poor wi-fi coverage, you can run a site survey to find out what areas in your location are getting the best coverage. By knowing which areas of your site get the best reception of internet traffic, you can improve your marketing strategies and expand your target market. Another great thing about the Ekahau HeatMapper is that it is completely free to use. Users just need to download the software, and then they can start to measure the performance of their wireless networks immediately. All you need to do is log in with your user ID and password to begin!

-

-

With the Wifi HeatMap from Solarwinds, you can create custom heatmaps. Start with a manual site survey of the wireless signals so that the tool can record it on the surface of a blueprint or layout. NPM with Wifi Heatmaps will poll the signal strength from clients and APs and generate dynamic Wifi heatmaps.

-

NetSpot is a comprehensive wireless site survey, Wifi analysis, and troubleshooting application. It is designed for a wide variety of scenarios, from small home WLAN users to large-scale wireless deployments. NetSpot can help you improve the wireless security standpoint by running an advanced analysis. It can also help optimize the Wifi signal strength via heatmaps.

-

To start a site survey with NetSpot, you need to upload your office plan or area layout. Indicate your location on the office plan, and the tool will begin to calculate the wireless coverage. As you move around with your site survey device, Netspot will continue to record data of the signals received in the area, and finally, create a heatmap.

-

VisiWave Site Survey is an advanced WLAN site survey solution with data collection and visualization capabilities. It is designed for indoor/outdoor and metropolitan hotspots surveys. VisiWave allows you to analyze your WLAN, generate coverage heat maps automatically, and display radio waves.

-

Although the Wifi heatmap software comes at different sizes and prices, they all have the same basic functionality, to collect data and generate a heatmap. All the software showed in this list, have this ability and some others improve it with amazing features. Most of the software have free trial downloads, which is an excellent way to start testing your Wifi networks.

-

We cannot guarantee that the program is safe to download as it will be downloaded from the developer's website. Before launching the program, check it with any free antivirus software. The program lies within Internet & Network Tools, more precisely Network Tools.

-

HeatMapper is the free version of a more powerful Wi-Fi surveying tool called Ekahau Site Survey. HeatMapper lets you do surveys for only 15 minutes at a time; Site Survey gives you unlimited time, along with additional features. Pricing varies according to the size and complexity of your network.

-

If you're looking for the simplest and most basic test of your Wi-Fi speed, then Ookla Speedtest is the way to go. You don't need to download any software (which means this particular app works just fine for Macs as well). Just head to the site, click "Begin Test" and the site tests your upload and download speeds. It's a great tool for getting quick-and-dirty information about your network's throughput.

-

CyberGhost is simple to use: Just download and install the client. (Note: In order to download the free version of the software, click the Free Download link on the upper-right hand of the CyberGhost home page.) You won't even need to create an account; after you install the client, you're ready to go. There are clients for Windows, Mac, iOS, Linux and Android.

-

However, there are a few hurdles you'll need to clear first. To begin with, the Connectify home page is a bit confusing when it comes to figuring out how to download the free (Lite) version. At the top of the page are two buttons: Buy Now and Download. Click the Download button and install the app; you can then choose the Lite option during installation, which will let you share Wi-Fi hotspots.

-

TamoGraph is a powerful and user-friendly wireless site survey software tool for collecting, visualizing, and analyzing 802.11 a/b/g/n/ac/ax Wi-Fi data. Wireless network deployment and maintenance requires the use of a professional RF site survey tool that facilitates otherwise time-consuming and very complex tasks, such as ongoing analysis and reporting of signal strength, noise and interference, channel allocation, data rates, etc.

-

In a word, wireless site surveys are necessary because radio wave propagation is difficult to predict, especially in non-open space environments. It is virtually impossible to consider all the variables that might affect the health and performance of your WLAN. Changing conditions, even something as seemingly minor as a notebook equipped with a legacy 802.11g adapter that your new employee connected to the office wireless network, might seriously affect the WLAN performance. In addition, considering the wide proliferation of wireless infrastructure, factors such as interference from nearby WLANs play a very important role. This is why regular site surveys that are conducted with a professional tool are important.

-

Ekahau Site Survey & Planner allows you to design and maintain Wi-Fi networks to ensure your requirements are met.
Main features:
- Wi-Fi site survey tool allows simple yet ultra-comprehensive validation of Wi-Fi network coverage and performance.
- Eliminate coverage holes, interference issues, roaming problems and all performance bottlenecks.

-

A wireless site survey analyzes the radio frequency environment of an area where a Wi-Fi network is deployed. Network teams use site surveys when planning a new network to determine where to install access points (APs).

-

Teams should also conduct periodic site surveys while the network is operating. Changes to office floor plans and layouts, such as the movement or addition of desks or file cabinets, might require changes to AP locations. New equipment might need a high level of signal strength in a location that doesn't provide the required signal levels. New applications or increased use of an existing application might also require improved performance. A site survey can determine any necessary changes to the network.

-

In a predictive site survey, the blueprint shows virtual APs, and the survey software determines signal strength based on information about how signals propagate through walls and around cabinets and desks. The software also factors in the types of applications used in the area -- e.g., heavy video use requires high throughput, while VoIP calls don't require high throughput, but do require tight limits on latency and delay.

-

The goal of a passive survey is to report on all signals at each location, including the installed network and signals from neighboring sites or other devices that generate noise at wireless frequencies.

-

Teams should perform passive surveys periodically after they build the site, install equipment and activate the network. These surveys report information on APs and their characteristics, signal strength, signal-to-noise ratios and interference. They might reveal marginal performance changes before users notice.

-

Active surveys focus on a specific signal or set of specific signals and produce an extensive list of measurements for each AP that generates a studied signal. These measurements include signal strength, throughput, round-trip time, packet loss and retransmission rate throughout the area where the signal is used. Active site surveys also measure upstream and downstream data rates and might result in teams moving an AP or adding or removing an unneeded AP. Teams should perform active surveys when investigating performance problems.

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Buy and Sell Anything with Aladdin.az the Leading E-commerce Platform in Azerbaijan.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Buy and Sell Anything with Aladdin.az the Leading E-commerce Platform in Azerbaijan.md deleted file mode 100644 index 3f468a3a9e29d4e2d91ed9129fc18417066ac8c9..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Buy and Sell Anything with Aladdin.az the Leading E-commerce Platform in Azerbaijan.md +++ /dev/null @@ -1,207 +0,0 @@ - -


How to Create a Table in HTML and Write a Conversational Article

-

Have you ever wanted to create a table in HTML and write a conversational article? If so, you're not alone. I was in the same boat when I first started learning HTML. I wanted to create a simple table to display some data, but I had no idea how to do it. I also wanted to write an engaging and friendly article that would appeal to my readers, but I didn't know how to use the right tone and style. That's why I decided to write this article for you.

-

aladdin.az


Download 🔗 https://urlin.us/2uT0W1



-

In this article, I will show you how to create a table in HTML and write a conversational article. You will learn what HTML tags are and how to use them, how to create a simple and complex table in HTML, and how to write an article that uses a friendly and informal tone to engage your reader. By the end of this article, you will be able to create your own tables in HTML and write your own conversational articles with ease.

-

How to Create a Table in HTML

-

HTML stands for HyperText Markup Language, and it is the standard language for creating web pages. HTML tags are special words or letters surrounded by angle brackets, < and >, that tell the browser how to display the content of a web page. For example, the `<p>` tag defines a paragraph, the `<h1>` tag defines a heading level 1, and the `<table>` tag defines a table.
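To make that concrete, here is a minimal sketch of a page that uses these three tags; the heading, paragraph text, and table contents are just placeholders.

```html
<!DOCTYPE html>
<html>
  <body>
    <!-- <h1> renders a top-level heading -->
    <h1>My First Page</h1>
    <!-- <p> renders a paragraph of text -->
    <p>This paragraph is defined with the p tag.</p>
    <!-- <table> starts a table; rows and cells go inside it -->
    <table>
      <tr><th>Column</th></tr>
      <tr><td>Value</td></tr>
    </table>
  </body>
</html>
```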

-

To create a table in HTML, you need to use the following tags:

- The `<table>` tag defines the start and end of a table.
- The `<tr>` tag defines a table row.
- The `<th>` tag defines a table header cell.
- The `<td>` tag defines a table data cell.

You can also use other tags to add more features to your table, such as captions, column groups, headers, footers, etc. You can find more details and examples at [HTML Tables - W3Schools].

    -

    aladdin.az marketplace
    -aladdin.az instagram
    -aladdin.az baku
    -aladdin.az online shopping
    -aladdin.az e-commerce
    -aladdin.az technology
    -aladdin.az reviews
    -aladdin.az products
    -aladdin.az delivery
    -aladdin.az customer service
    -aladdin.az coupons
    -aladdin.az discounts
    -aladdin.az careers
    -aladdin.az jobs
    -aladdin.az contact
    -aladdin.az phone number
    -aladdin.az email address
    -aladdin.az app
    -aladdin.az download
    -aladdin.az login
    -aladdin.az register
    -aladdin.az account
    -aladdin.az orders
    -aladdin.az returns
    -aladdin.az refund policy
    -aladdin.az payment methods
    -aladdin.az credit card
    -aladdin.az gift card
    -aladdin.az loyalty program
    -aladdin.az rewards
    -aladdin.az points
    -aladdin.az cashback
    -aladdin.az affiliates
    -aladdin.az partners
    -aladdin.az sellers
    -aladdin.az vendors
    -aladdin.az categories
    -aladdin.az fashion
    -aladdin.az electronics
    -aladdin.az home and garden
    -aladdin.az beauty and health
    -aladdin.az sports and outdoors
    -aladdin.az books and media
    -aladdin.az toys and games
    -aladdin.az pets and animals
    -aladdin.az travel and leisure
    -aladdin.az automotive and motorcycles
    -aladdin.az food and beverages

    -

    Here is an example of a simple HTML table:

    <table>
      <tr>
        <th>Name</th>
        <th>Age</th>
        <th>Country</th>
      </tr>
      <tr>
        <td>Alice</td>
        <td>25</td>
        <td>USA</td>
      </tr>
      <tr>
        <td>Bob</td>
        <td>32</td>
        <td>UK</td>
      </tr>
      <tr>
        <td>Charlie</td>
        <td>28</td>
        <td>Australia</td>
      </tr>
    </table>
    -

    This table will look like this in the browser:

| Name | Age | Country |
| --- | --- | --- |
| Alice | 25 | USA |
| Bob | 32 | UK |
| Charlie | 28 | Australia |

    If you want to create a more complex table in HTML, you can use some additional tags and attributes. For example, you can use the colspan and rowspan attributes to merge cells horizontally or vertically. You can also use the border, cellpadding, cellspacing, width, height, align, valign, bgcolor, etc., attributes to style your table. You can find more details and examples at [HTML Table Advanced Features - W3Schools](^2^).

    -

    Here is an example of a more complex HTML table:

    <table border="1">
      <caption>Monthly Savings</caption>
      <tr>
        <th>Name</th>
        <th>Age</th>
        <th>Country</th>
      </tr>
      <tr>
        <td>Alice</td>
        <td rowspan="3">25</td>
        <td>USA</td>
      </tr>
      <tr>
        <td>Bob</td>
        <td>UK</td>
      </tr>
      <tr>
        <td>Charlie</td>
        <td>Australia</td>
      </tr>
      <tr>
        <td colspan="3">This table shows some data about people and their savings.</td>
      </tr>
    </table>
    -

    This table will look like this in the browser:

| Name | Age | Country |
| --- | --- | --- |
| Alice | 25 | USA |
| Bob | | UK |
| Charlie | | Australia |

    This table shows some data about people and their savings.

    -

    How to Write a Conversational Article

    -

    A conversational article is an article that uses a friendly and informal tone to engage the reader and make them feel like they are having a chat with the writer. A conversational article should also be clear, concise, and well-structured, using headings, subheadings, paragraphs, lists, examples, questions, and other elements to organize the content and make it easy to read and understand.

    -

    To write a conversational article, you should follow these tips:

    -

    Speak to one person and personalize your writing

    -

    Imagine your ideal reader and write as if you are talking to them directly. Use the word "you" to address them and make them feel involved. For example: "In this article, I will show you how to create a table in HTML and write a conversational article."

    -

    Open with a story

    -

    A story can capture the reader's attention and interest from the start. It can also help you establish rapport and credibility with the reader. You can use a personal story, an anecdote, a case study, or any other relevant story that relates to your topic and purpose. For example: "When I first started learning HTML, I was confused by all the different tags and how to use them. I wanted to create a simple table to display some data, but I had no idea how to do it. That's why I decided to write this article for you."

    -

    Break grammar rules

    -

    Conversational writing does not have to follow all the formal rules of grammar and punctuation. You can use contractions, slang, colloquialisms, fragments, run-on sentences, etc., as long as they make sense and do not affect the clarity and readability of your writing. For example: "Don't worry if you're not an expert in HTML. It's not that hard once you get the hang of it."

    -

    Ask questions

    -

    Questions can help you engage the reader and make them think about your topic. They can also help you transition from one point to another or introduce a new idea or example. You can use rhetorical questions or direct questions that invite the reader to respond or take action. For example: "Do you want to learn how to create a table in HTML? Then keep reading."

    -

    Use examples

    -

    Examples can help you illustrate your points and make them more concrete and relatable for the reader. They can also help you explain complex or abstract concepts in simpler terms. You can use real-life examples, hypothetical scenarios, analogies, metaphors, etc., as long as they are relevant and accurate. For example: "Creating a table in HTML is like building a Lego structure. You need different pieces (tags) that fit together (nest) in a certain way (syntax) to form the shape (table) you want."

    -

    Use humor and emotion

    -

Humor and emotion can help you connect with the reader and make your writing more enjoyable and memorable. You can use jokes, puns, sarcasm, irony, exaggeration, etc., as long as they are appropriate and tasteful for your audience and topic. You can also use emoticons, emojis, gifs, etc., to add some fun and personality to your writing. For example: "HTML tables are awesome. They can help you organize and present your data in a neat and attractive way. But they can also be tricky. You need to know how to use the right tags and attributes to create the table you want."
    -

    Use transitions and summaries

    -

    Transitions and summaries can help you guide the reader through your article and keep them on track. They can also help you emphasize your main points and remind the reader of what they have learned or what they need to do next. You can use words or phrases such as "however", "therefore", "in conclusion", "to sum up", etc., to create smooth and logical connections between your paragraphs and sections. For example: "In conclusion, creating a table in HTML is not as hard as it may seem. You just need to follow some basic steps and use some simple tags and attributes. Let's recap what we have learned so far."

    -

    Conclusion

    -

    In this article, I have shown you how to create a table in HTML and write a conversational article. You have learned what HTML tags are and how to use them, how to create a simple and complex table in HTML, and how to write an article that uses a friendly and informal tone to engage your reader.

    -

    Creating a table in HTML can help you organize and present your data in a neat and attractive way. Writing a conversational article can help you connect with your reader and make your writing more enjoyable and memorable. By following the tips and examples I have given you, you will be able to create your own tables in HTML and write your own conversational articles with ease.

    -

    I hope you have found this article helpful and informative. If you have any questions or comments, please feel free to leave them below. I would love to hear from you. And if you want to learn more about HTML or conversational writing, you can check out these resources:

    -
      -
- [HTML Tutorial - W3Schools]
- [How to Write Conversational Content - Neil Patel]
    -

    Thank you for reading and happy coding!

    -

    FAQs

    -

    What is HTML?

    -

    HTML stands for HyperText Markup Language, and it is the standard language for creating web pages. HTML tags are special words or letters surrounded by angle brackets, < and >, that tell the browser how to display the content of a web page.

    -

    What is a conversational article?

    -

    A conversational article is an article that uses a friendly and informal tone to engage the reader and make them feel like they are having a chat with the writer. A conversational article should also be clear, concise, and well-structured, using headings, subheadings, paragraphs, lists, examples, questions, and other elements to organize the content and make it easy to read and understand.

    -

    How do I create a table in HTML?

    -

    To create a table in HTML, you need to use the following tags:

    -
      -
  - The `<table>` tag defines the start and end of a table.
  - The `<tr>` tag defines a table row.
  - The `<th>` tag defines a table header cell.
  - The `<td>` tag defines a table data cell.

      You can also use other tags to add more features to your table, such as captions, column groups, headers, footers, etc.
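As an illustration, a small sketch of a table that uses some of those extra tags (caption, column group, header, body, and footer) might look like this; the data is just an invented example:

```html
<table>
  <caption>Monthly Savings</caption>
  <colgroup>
    <col span="2">
  </colgroup>
  <thead>
    <tr><th>Month</th><th>Savings</th></tr>
  </thead>
  <tbody>
    <tr><td>January</td><td>$100</td></tr>
    <tr><td>February</td><td>$80</td></tr>
  </tbody>
  <tfoot>
    <tr><td>Total</td><td>$180</td></tr>
  </tfoot>
</table>
```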

      -

      How do I write a conversational article?

      -

      To write a conversational article, you should follow these tips:

      -
        -
  - Speak to one person and personalize your writing.
  - Open with a story.
  - Break grammar rules.
  - Ask questions.
  - Use examples.
  - Use humor and emotion.
  - Use transitions and summaries.
      -

      Where can I learn more about HTML or conversational writing?

      -

      You can learn more about HTML or conversational writing by checking out these resources:

      -
        -
- [HTML Tutorial - W3Schools]
- [How to Write Conversational Content - Neil Patel]

      -
      -
      \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash of Clans and Join the Epic Clan Wars.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash of Clans and Join the Epic Clan Wars.md deleted file mode 100644 index 5327063dd918bb01d76bbca0087b9ff88114129e..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash of Clans and Join the Epic Clan Wars.md +++ /dev/null @@ -1,176 +0,0 @@ - -

      How to Download Clash of Clans

      -

      Clash of Clans is one of the most popular and addictive mobile games in the world. It is a strategy game where you build your own village, raise a clan, and compete in epic clan wars with millions of other players. In this article, we will show you how to download clash of clans on your Android or iOS device and start playing right away.

      -

      how to download clash of clans


      Download Zip >> https://urlin.us/2uSYbK



      -

      What is Clash of Clans?

      -

      A brief introduction to the game and its features

      -

      Clash of Clans is a free-to-play game developed by Supercell, a Finnish company that also created other hit games like Clash Royale, Brawl Stars, Boom Beach, and Hay Day. The game was launched in 2012 and has since become one of the highest-grossing and most-downloaded apps on both Google Play Store and App Store .

      -

      The game is set in a fantasy world where you can create your own village with various buildings, such as town hall, barracks, gold mines, elixir collectors, defenses, walls, and more. You can also train different types of troops, such as barbarians, archers, wizards, giants, dragons, and more. You can use these troops to attack other players' villages or defend your own from enemy raids.

      -

      One of the main features of the game is the clan system. You can join or create a clan with other players from around the world. You can chat with your clanmates, donate and receive troops, and participate in clan wars. Clan wars are special events where two clans face each other in a series of attacks. The clan with the most stars (earned by destroying enemy buildings) at the end of the war wins.

      -

      Why should you play Clash of Clans?

      -

      The benefits of playing the game, such as fun, strategy, social interaction, and competition

      -

      There are many reasons why you should play Clash of Clans. Here are some of them:

      -

      How to download clash of clans on Android devices
      -How to download clash of clans on iOS devices
      -How to download clash of clans on PC or Mac
      -How to download clash of clans on Windows 10
      -How to download clash of clans on Chromebook
      -How to download clash of clans without Google Play
      -How to download clash of clans without App Store
      -How to download clash of clans APK file
      -How to download clash of clans latest version
      -How to download clash of clans update
      -How to download clash of clans mod APK
      -How to download clash of clans hack version
      -How to download clash of clans private server
      -How to download clash of clans offline mode
      -How to download clash of clans for free
      -How to install clash of clans after downloading
      -How to play clash of clans after downloading
      -How to create an account for clash of clans after downloading
      -How to join a clan in clash of clans after downloading
      -How to start a clan war in clash of clans after downloading
      -How to upgrade your village in clash of clans after downloading
      -How to unlock new troops in clash of clans after downloading
      -How to use spells in clash of clans after downloading
      -How to use heroes in clash of clans after downloading
      -How to use siege machines in clash of clans after downloading
      -How to switch between villages in clash of clans after downloading
      -How to access the builder base in clash of clans after downloading
      -How to earn gems in clash of clans after downloading
      -How to spend gems wisely in clash of clans after downloading
      -How to get free gems in clash of clans after downloading
      -How to get unlimited gems in clash of clans after downloading
      -How to get magic items in clash of clans after downloading
      -How to use magic items effectively in clash of clans after downloading
      -How to get free magic items in clash of clans after downloading
      -How to participate in clan games in clash of clans after downloading
      -How to complete clan games challenges in clash of clans after downloading
      -How to earn clan games rewards in clash of clans after downloading
      -How to join the clan war leagues in clash of clans after downloading
      -How to win the clan war leagues in clash of clans after downloading
      -How to earn league medals in clan war leagues in clash of clans after downloading
      -How to redeem league medals for rewards in clan war leagues in clash of clans after downloading
      -How to join the clan capital leagues in clash of clans after downloading
      -How to win the clan capital leagues in clash of clans after downloading
      -How to earn capital trophies in clan capital leagues in clash of clans after downloading
      -How to customize your player house in clan capital leagues in clash of clans after downloading
      -How to get hero skins in clash of clans after downloading
      -How to change hero skins in clash of cl

      • It is fun. The game offers endless hours of entertainment and challenge. You can design your own village, experiment with different troop combinations, and discover new strategies. You can also enjoy the colorful graphics, sound effects, and animations.
      • It is strategic. The game requires you to think carefully about how to build your village, how to attack your enemies, and how to cooperate with your clanmates. You have to balance your resources, plan your upgrades, and choose your targets wisely. You also have to adapt to changing situations and learn from your mistakes.
      • It is social. The game allows you to interact with other players from different countries and cultures. You can chat with them, share tips and tricks, and make friends. You can also join or create a clan that suits your play style and preferences. You can support each other in times of need and celebrate your victories together.
      • It is competitive. The game gives you the opportunity to test your skills against other players from around the world. You can climb the leaderboards, earn trophies, and prove yourself as the best. You can also challenge yourself in various game modes, such as clan war leagues, friendly wars, clan games, and special events.

      How to download Clash of Clans

      How to download Clash of Clans on Android devices

      -

      The steps to download the game from Google Play Store

      -

      If you have an Android device, such as a smartphone or a tablet, you can download Clash of Clans from Google Play Store. Here are the steps to follow:

      -
      1. Open the Google Play Store app on your device. You can find it on your home screen or in your app drawer.
      2. Tap on the search bar and type "Clash of Clans". You can also use the voice search feature by tapping on the microphone icon.
      3. Select the game from the list of results. You can recognize it by its icon, which is a red shield with a yellow lion.
      4. Tap on the green "Install" button. This will start downloading the game to your device. You may need to accept some permissions and terms of service before proceeding.
      5. Wait for the download and installation to finish. You can check the progress on the notification bar or on the app page.
      6. Once the game is installed, you can tap on the "Open" button to launch it. You can also find it on your home screen or in your app drawer.

      Requirements and compatibility

      -

      To play Clash of Clans on your Android device, you need to meet some minimum requirements. These are:

      -
      • An Android version of 4.4 or higher
      • A device with at least 1 GB of RAM
      • A stable internet connection (Wi-Fi or mobile data)
      • At least 200 MB of free storage space

      You can check your device's specifications by going to Settings > About phone or tablet. You can also use third-party apps like CPU-Z or Droid Hardware Info to get more details.
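      If you are comfortable with a command line, you can also read the same details over adb (Android Debug Bridge) from a computer. This is only an optional, illustrative sketch: it assumes adb is installed on your computer and USB debugging is enabled on your device, and the exact output format can vary a little between devices.

```python
import subprocess

def adb_shell(*args) -> str:
    # Run a command on the connected device via adb and return its output as text.
    return subprocess.run(["adb", "shell", *args],
                          capture_output=True, text=True, check=True).stdout.strip()

print("Android version:", adb_shell("getprop", "ro.build.version.release"))
print("API level:", adb_shell("getprop", "ro.build.version.sdk"))

# Total RAM is the first line of /proc/meminfo (reported in kB).
print("Memory:", adb_shell("cat", "/proc/meminfo").splitlines()[0])

# Free space on the data partition, where apps are installed.
print(adb_shell("df", "/data"))
```

      If the version printed here is below Android 4.4, or the free space on /data is under roughly 200 MB, your device is below the requirements listed above.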

      -

      If your device does not meet these requirements, you may not be able to download or play the game properly. You may experience crashes, glitches, or lagging issues. In that case, you may need to upgrade your device or look for alternative ways to play the game, such as using an emulator or a cloud gaming service.

      -

      Installation and updates

      -

      Once you have downloaded and installed Clash of Clans on your Android device, you can start playing right away. However, you may need to update the game from time to time to get new features, bug fixes, and security patches. You can update the game manually or automatically, depending on your preferences.

      -

      To update the game manually, you need to follow these steps:

      -
      1. Open the Google Play Store app on your device.
      2. Tap on the menu icon (three horizontal lines) on the top left corner.
      3. Select "My apps & games" from the menu.
      4. Find Clash of Clans from the list of installed apps and tap on it.
      5. If there is an update available, you will see an "Update" button. Tap on it to start downloading and installing the update.
      6. Wait for the update to finish and then tap on "Open" to launch the game.

      To update the game automatically, you need to follow these steps:

      -
      1. Open the Google Play Store app on your device.
      2. Tap on the menu icon (three horizontal lines) on the top left corner.
      3. Select "Settings" from the menu.
      4. Tap on "Auto-update apps" under General settings.
      5. Select "Over Wi-Fi only" or "Over any network" depending on your preference. This will enable automatic updates for all your apps, including Clash of Clans.
      6. You can also choose to update individual apps by going to their app pages and tapping on the menu icon (three vertical dots) on the top right corner. Then select "Enable auto-update" from the menu.

      Permissions and settings

      -

      To play Clash of Clans on your Android device, you need to grant some permissions to the game. These are:

      -
      • Access to photos, media, and files: This allows the game to save data and settings on your device's storage.
      • Access to Wi-Fi connection information: This allows the game to check your internet connection status and quality.
      • Access to device ID and call information: This allows the game to identify your device and prevent fraud and abuse.

      You can manage these permissions by going to Settings > Apps > Clash of Clans > Permissions. You can also revoke these permissions at any time, but this may affect your gameplay experience or functionality.
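      The same permissions can also be inspected from a computer with adb, which can be handy if you manage several devices. Treat the sketch below as a hedged illustration only: it assumes adb is set up, it looks the package name up at runtime rather than hard-coding it (the article does not state the exact identifier), and "pm revoke" only applies to runtime permissions the app actually declares.

```python
import subprocess

def adb_shell(*args) -> str:
    return subprocess.run(["adb", "shell", *args],
                          capture_output=True, text=True, check=True).stdout

# Find installed packages whose name mentions "clash".
matches = [line.split(":", 1)[1].strip()
           for line in adb_shell("pm", "list", "packages").splitlines()
           if "clash" in line.lower()]
print("Matching packages:", matches)

if matches:
    pkg = matches[0]
    # dumpsys prints the app's declared permissions and their grant state;
    # keep only the permission-related lines to avoid a wall of output.
    info = adb_shell("dumpsys", "package", pkg)
    print("\n".join(ln for ln in info.splitlines() if "permission" in ln.lower()))

    # Example of revoking a runtime permission (left commented out on purpose):
    # adb_shell("pm", "revoke", pkg, "android.permission.READ_EXTERNAL_STORAGE")
```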

      -

      In addition to these permissions, you can also customize some settings to enhance your gameplay experience. These include:
      - Sound and music: You can adjust the volume and mute or unmute the sound effects and background music of the game. You can also enable or disable notifications and vibration alerts.
      - Language: You can change the language of the game to your preferred one. The game supports over 20 languages, including English, Spanish, French, German, Chinese, Japanese, Korean, and more.
      - Graphics: You can change the graphics quality of the game to suit your device's performance and battery life. You can choose between low, medium, high, or ultra settings.
      - Account: You can link your game account to your Google Play Games or Supercell ID account. This will allow you to save your progress and access your game from different devices. You can also switch between multiple accounts if you have more than one.
      You can access these settings by tapping on the gear icon on the top right corner of the game screen. You can also find more information and help by tapping on the question mark icon next to it.

      How to download Clash of Clans on iOS devices

      -

      The steps to download the game from App Store

      -

      If you have an iOS device, such as an iPhone or an iPad, you can download Clash of Clans from App Store. Here are the steps to follow:

      -
      1. Open the App Store app on your device. You can find it on your home screen or in your app library.
      2. Tap on the search icon (a magnifying glass) on the bottom right corner.
      3. Type "Clash of Clans" in the search bar and tap on the search button. You can also use the voice search feature by tapping on the microphone icon.
      4. Select the game from the list of results. You can recognize it by its icon, which is a red shield with a yellow lion.
      5. Tap on the blue "Get" button. This will start downloading the game to your device. You may need to enter your Apple ID and password or use Touch ID or Face ID before proceeding.
      6. Wait for the download and installation to finish. You can check the progress on the app page or on your home screen.
      7. Once the game is installed, you can tap on it to launch it. You can also find it in your app library.

      Requirements and compatibility

      -

      To play Clash of Clans on your iOS device, you need to meet some minimum requirements. These are:

      -
      • An iOS version of 10 or higher
      • A device with at least 1 GB of RAM
      • A stable internet connection (Wi-Fi or mobile data)
      • At least 200 MB of free storage space

      You can check your device's specifications by going to Settings > General > About. You can also use third-party apps like System Status or Battery Life to get more details.

      -

      If your device does not meet these requirements, you may not be able to download or play the game properly. You may experience crashes, glitches, or lagging issues. In that case, you may need to upgrade your device or look for alternative ways to play the game, such as using an emulator or a cloud gaming service.

      -

      Installation and updates

      -

      Once you have downloaded and installed Clash of Clans on your iOS device, you can start playing right away. However, you may need to update the game from time to time to get new features, bug fixes, and security patches. You can update the game manually or automatically, depending on your preferences.

      -

      To update the game manually, you need to follow these steps:

      -
      1. Open the App Store app on your device.
      2. Tap on your profile picture on the top right corner.
      3. Scroll down to see a list of apps that have updates available.
      4. Find Clash of Clans from the list and tap on "Update" next to it. This will start downloading and installing the update.
      5. Wait for the update to finish and then tap on "Open" to launch the game.

      To update the game automatically, you need to follow these steps:

      -
      1. Open the Settings app on your device.
      2. Tap on "App Store" in the list of settings.
      3. Toggle on "App Updates" under Automatic Downloads. This will enable automatic updates for all your apps, including Clash of Clans.
      4. You can also choose to update individual apps by going to their app pages in the App Store, tapping on "More" (three horizontal dots) on the top right corner, and selecting "Automatic Updates" from the menu.

      Permissions and settings

      -

      To play Clash of Clans on your iOS device, you need to grant some permissions to the game. These are:

      -
      • Access to photos: This allows the game to save screenshots and videos of your gameplay on your device's photo library.
      • Access to microphone: This allows the game to record your voice and use it for voice chat with your clanmates or other players.
      • Access to notifications: This allows the game to send you alerts and reminders about your game progress, events, and offers.

      You can manage these permissions by going to Settings > Clash of Clans. You can also revoke these permissions at any time, but this may affect your gameplay experience or functionality.

      -

      In addition to these permissions, you can also customize some settings to enhance your gameplay experience. These include:

      - Sound and music: You can adjust the volume and mute or unmute the sound effects and background music of the game. You can also enable or disable notifications and vibration alerts.
      - Language: You can change the language of the game to your preferred one. The game supports over 20 languages, including English, Spanish, French, German, Chinese, Japanese, Korean, and more.
      - Graphics: You can change the graphics quality of the game to suit your device's performance and battery life. You can choose between low, medium, high, or ultra settings.
      - Account: You can link your game account to your Game Center or Supercell ID account. This will allow you to save your progress and access your game from different devices. You can also switch between multiple accounts if you have more than one.
      You can access these settings by tapping on the gear icon on the top right corner of the game screen. You can also find more information and help by tapping on the question mark icon next to it.

      How to start playing Clash of Clans

      -

      The basics of the game, such as building your village, joining a clan, and fighting in clan wars

      -

      Now that you have downloaded and installed Clash of Clans on your device, you are ready to start playing. Here are some of the basics of the game that you need to know:

      - Building your village: Your village is your base of operations and your main source of resources. You can build various buildings, such as town hall, barracks, gold mines, elixir collectors, defenses, walls, and more. You can also upgrade these buildings to improve their functions and appearance. You can use gold and elixir as the main currencies to build and upgrade your buildings. You can also use gems as a premium currency to speed up the process or buy special items.
      - Joining a clan: A clan is a group of players who share a common interest and goal in the game. You can join or create a clan with other players from around the world. You can chat with your clanmates, donate and receive troops, and participate in clan wars. Clan wars are special events where two clans face each other in a series of attacks. The clan with the most stars (earned by destroying enemy buildings) at the end of the war wins.
      - Fighting in clan wars: Clan wars are one of the most exciting and rewarding aspects of the game. They allow you to test your skills against other players from around the world. You can participate in clan wars once you have a town hall level 4 or higher and join a clan that has at least 10 members. To start a clan war, you need to have a leader or co-leader in your clan who can initiate the war search. Once a match is found, you will enter a preparation day where you can scout your enemy's village and donate troops to your clan's war base. Then you will enter a battle day where you can attack your enemy's village twice and earn stars for your clan. The clan with the most stars at the end of the battle day wins the war.

      Conclusion

      -

      A summary of the main points and a call to action

      -

      Clash of Clans is a fun, strategic, social, and competitive game that you can play on your Android or iOS device. It is easy to download and install from Google Play Store or App Store. All you need is a compatible device, a stable internet connection, and some free storage space. You can also update the game regularly to get new features, bug fixes, and security patches.

      -

      Once you have downloaded and installed Clash of Clans on your device, you can start building your own village, raising a clan, and competing in epic clan wars with millions of other players. You can also customize some settings to enhance your gameplay experience.

      -

      If you are looking for a game that offers endless hours of entertainment and challenge, then Clash of Clans is the perfect choice for you. Download it today and join the millions of players who are already enjoying this amazing game. You won't regret it!

      -

      FAQs

      -

      Some common questions and answers about Clash of Clans

      -

      Here are some of the frequently asked questions and answers about Clash of Clans that you may find helpful:

      - Q: How can I save my game progress and access it from different devices?
        A: You can save your game progress and access it from different devices by linking your game account to your Google Play Games or Supercell ID account. You can do this by going to Settings > Account in the game. You can also switch between multiple accounts if you have more than one.
      - Q: How can I get more gems in the game?
        A: Gems are the premium currency in the game that you can use to speed up the process or buy special items. You can get more gems by completing achievements, removing obstacles, participating in events, or purchasing them with real money.
      - Q: How can I contact the support team if I have any issues or questions about the game?
        A: You can contact the support team by going to Settings > Help and Support in the game. You can also visit the official website, forum, or social media pages of the game for more information and help.
      - Q: How can I report a bug, a glitch, or a hacker in the game?
        A: You can report a bug, a glitch, or a hacker by going to Settings > Help and Support > Report an Issue in the game. You can also send a screenshot or a video of the problem to the support team for further investigation.
      - Q: How can I join or create a clan in the game?
        A: You can join or create a clan in the game by tapping on the clan icon on the bottom left corner of the game screen. You can then search for a clan that suits your play style and preferences, or create your own clan with your own rules and requirements.

      \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Car Driving School Simulator APK Drive Around the World in 9 Different Maps.md b/spaces/1phancelerku/anime-remove-background/Car Driving School Simulator APK Drive Around the World in 9 Different Maps.md deleted file mode 100644 index 8e1c6b5eacc76f12be3bc09d96844c91f1faadf0..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Car Driving School Simulator APK Drive Around the World in 9 Different Maps.md +++ /dev/null @@ -1,81 +0,0 @@ -
      -

      Car Driving School Simulator Apk Dayı: A Fun and Useful Game for Learning to Drive

      -

      Do you want to learn how to drive in a realistic and safe way? Do you want to have fun while doing so? If yes, then you should try car driving school simulator apk dayı, a popular driving simulation game that has been on the market since 2017. This game will test your driving skills in various scenarios and teach you useful traffic rules along the way. In this article, we will tell you more about this game and why you should download it today.

      -

      car driving school simulator apk dayı


      Download Zip ::: https://jinyurl.com/2uNOcd



      -

      Features of Car Driving School Simulator Apk Dayı

      -

      Car driving school simulator apk dayı is a feature-packed game that offers a lot of content and variety for the players. Here are some of the main features of the game:

      -
      • Huge car collection: You can choose from over 39 awesome cars in different categories, such as sedans, pickup trucks, muscle cars, 4x4s, buses, and even a supercar.
      • Multiple varied maps: You can drive around 9 different locations around the world, such as California, Canada, Aspen, Las Vegas, New York, Miami, Tokyo, and Norway.
      • Realistic traffic: You have to deal with real traffic AI that will react to your actions and follow the traffic rules.
      • Dynamic weather: You have to adapt to the changes on the road due to different weather conditions, such as rain, snow, fog, or night.
      • Online multiplayer: You can compete with other players online in free roaming mode or seasonal events.
      • Seasonal events: You can participate in special challenges that will surprise you with different themes and rewards.

      Tips and Tricks for Playing Car Driving School Simulator Apk Dayı

      -

      If you want to improve your driving skills and enjoy the game more, here are some tips and tricks for playing car driving school simulator apk dayı:

      -
      • Practice vehicle control tasks until you can do them automatically: You need to master the basics of handling the steering wheel, using the gas and brake pedals, shifting gears, visually scanning the road, and using the turn signal.
      • Test various braking scenarios to learn how to safely stop your car: You need to practice braking for different reasons, such as stop signs, red lights, pedestrians, speed limits, or sharp turns.
      • Use traffic sign scenarios to practice responding to different signs: You need to learn how to obey different traffic signs, such as stop signs, yield signs, speed limit signs, or turn signs.
      • Practice turning and changing lanes using your turn signal: You need to learn how to maneuver your car in different situations, such as turning left or right, changing lanes, overtaking, or merging with traffic.
      • Use the first person mode to increase your immersion: You can switch to the optional first person camera view to get a more realistic driving experience.

      Review of Car Driving School Simulator Apk Dayı

      -

      Car driving school simulator apk dayı is one of the best driving simulators available on the market. It has received positive reviews from players who praised its realistic graphics, sound effects, physics, and gameplay. The game is also constantly updated with new features, improvements, and bug fixes. Here are some of the pros and cons of the game:

      Pros:
      - Realistic graphics, sound effects, and physics
      - Huge car collection and multiple varied maps
      - Realistic traffic and dynamic weather
      - Online multiplayer and seasonal events
      - Educational and fun gameplay
      Cons:
      - Requires a lot of storage space and a good device
      - Some bugs and glitches may occur
      - Some ads and in-app purchases may be annoying
      - Some traffic rules may differ from real life
      - Some features may require internet connection

      Conclusion

      -

      Car driving school simulator apk dayı is a game that will not only entertain you, but also teach you valuable driving skills. You can choose from a variety of cars and locations, and experience realistic traffic and weather conditions. You can also compete with other players online and participate in seasonal events. The game is well-designed, realistic, and fun to play. If you are looking for a driving simulator that will challenge you and help you learn, you should download car driving school simulator apk dayı today.

      -

      FAQs

      -

      Here are some frequently asked questions about the game:

      -


      -
      1. How can I download car driving school simulator apk dayı?
         You can download the game from the Google Play Store or the App Store for free. You can also download the apk file from various websites, such as [Apk Dayı] or [Apk Pure].
      2. How can I unlock more cars and maps?
         You can unlock more cars and maps by earning coins and XP in the game. You can also buy them with real money or watch ads to get them for free.
      3. How can I play online multiplayer?
         You can play online multiplayer by tapping on the multiplayer icon on the main menu. You can then choose a map and a mode to join or create a room. You can also invite your friends to play with you.
      4. How can I participate in seasonal events?
         You can participate in seasonal events by tapping on the event icon on the main menu. You can then see the current event theme, duration, rewards, and challenges. You can complete the challenges to earn points and rank up on the leaderboard.
      5. How can I contact the developers of the game?
         You can contact the developers of the game by sending an email to support@boombit.com or by visiting their website at [BoomBit Games]. You can also follow them on social media platforms, such as [Facebook], [Twitter], or [Instagram].

      \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Christopher Martin - Let Her Go (Lyrics Video) - MP3 Download.md b/spaces/1phancelerku/anime-remove-background/Christopher Martin - Let Her Go (Lyrics Video) - MP3 Download.md deleted file mode 100644 index 6ea20d22ff39bba9471715031cece6d2d696260a..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Christopher Martin - Let Her Go (Lyrics Video) - MP3 Download.md +++ /dev/null @@ -1,101 +0,0 @@ -
      -

      Let Her Go MP3 Download by Christopher Martin

      -

      If you are a fan of reggae and dancehall music, you might have heard of Christopher Martin, a talented singer and songwriter from Jamaica. He is best known for his cover version of Passenger's hit song "Let Her Go", which has over 48 million views on YouTube. In this article, we will tell you more about Christopher Martin, his rendition of "Let Her Go", and how you can download it as an MP3 file legally and easily.

      -

      Who is Christopher Martin?

      -

      Christopher Martin is a reggae/dancehall artist who was born on February 14, 1987, in St. Catherine, Jamaica. He rose to fame after winning the Digicel Rising Stars competition in 2005, which is the Jamaican version of American Idol. Since then, he has released several albums and singles, such as "Cheaters Prayer", "I'm a Big Deal", "Is It Love", and "Dreams of Brighter Days". He has also collaborated with other prominent artists in the genre, such as Busy Signal, Romain Virgo, Cecile, and Konshens.

      -

      let her go mp3 download by christopher martin


      Download Ziphttps://jinyurl.com/2uNTuo



      -

      His background and achievements

      -

      Martin grew up in a musical family and developed a passion for singing at an early age. He attended the Watermount All-Age School and later graduated from St. Jago High School, where he excelled in academics and sports. He also participated in various drama and music festivals, winning several awards and accolades. After winning the Digicel Rising Stars contest, he signed a contract with VP Records, one of the largest independent reggae labels in the world. He has since toured extensively across the globe, performing at major events such as One Night with Michael Bolton, Air Jamaica Jazz Festival, Reggae Sumfest, and Rebel Salute. He has also received numerous nominations and honors for his work, such as the Excellence in Music and Entertainment Award, the Youth View Award, the International Reggae and World Music Award, and the MOBO Award.

      -

      His musical style and influences

      -

      Martin's music is a blend of reggae, dancehall, pop, R&B, and soul. He has a smooth and versatile voice that can deliver both romantic ballads and upbeat anthems. He writes most of his songs himself, drawing inspiration from his personal experiences, social issues, spirituality, and love. Some of his musical influences include Bob Marley, Dennis Brown, Beres Hammond, Luther Vandross, Boyz II Men, Usher, and R. Kelly.

      -

      What is Let Her Go?

      -

      "Let Her Go" is a song that was originally written and recorded by Passenger, a British singer-songwriter. It was released in July 2012 as the second single from his third album, All the Little Lights. It became a huge international success, topping the charts in over 20 countries and selling over six million copies worldwide. It was also nominated for a Brit Award for British Single of the Year in 2014.

      -

      The original song by Passenger

      -

      The song is about realizing the value of someone when they are already gone. It emphasizes how waiting too long to tell someone how you feel may be too late and that they may have moved on. The chorus says: "Well you only need the light when it's burning low / Only miss the sun when it starts to snow / Only know you love her when you let her go / Only know you've been high when you're feeling low / Only hate the road when you're missing home / Only know you love her when you let her go".

      The cover version by Christopher Martin

      -

      In 2014, Christopher Martin released his own version of "Let Her Go" as a single. He changed the lyrics slightly to suit his reggae style and added some Jamaican slang and expressions. He also gave the song a more upbeat and cheerful vibe, contrasting with the melancholic tone of the original. His cover was well-received by fans and critics alike, who praised his vocal delivery and interpretation of the song. It became one of his most popular and requested songs, especially in the Caribbean and Africa.

      -

      The meaning and message of the song

      -

      According to Martin, he chose to cover "Let Her Go" because he liked the message and the melody of the song. He said that he could relate to the song because he had experienced losing someone he loved before. He also said that he wanted to share the song with his fans who might be going through a similar situation. He explained that the song is about learning from your mistakes and appreciating what you have before it's gone. He said that the song is also about hope and optimism, because even if you let someone go, you can still find happiness and love again.

      -

      -

      How to download Let Her Go MP3 by Christopher Martin?

      -

      If you want to enjoy "Let Her Go" by Christopher Martin on your device, you might be wondering how to download it as an MP3 file. There are many ways to do so, but not all of them are legal and ethical. In this section, we will show you the best and safest ways to download the song without breaking any laws or harming the artist.

      -

      The legal and ethical ways

      -

      The most legal and ethical way to download "Let Her Go" by Christopher Martin is to buy it from an authorized online store or platform, such as iTunes, Amazon Music, or Google Play Music. By doing so, you will support the artist financially and help him continue making music. You will also get a high-quality MP3 file that you can play on any device. Buying the song usually costs less than a dollar, which is a fair price for a great song.

      -

      Another legal and ethical way to download "Let Her Go" by Christopher Martin is to stream it from a licensed online service or app, such as Spotify, YouTube Music, or Deezer. By doing so, you will also support the artist indirectly, as he will receive royalties from the streaming platforms based on the number of plays and views. You will also get access to a large library of music that you can listen to anytime and anywhere. Streaming the song is usually free or very cheap, depending on the service or app you use.

      -

      The best sources and platforms

      -

      Among the online stores and platforms that sell "Let Her Go" by Christopher Martin, we recommend iTunes as the best option. This is because iTunes has a user-friendly interface, a fast and secure payment system, and a wide compatibility with various devices. You can easily buy the song from iTunes using your Apple ID and download it to your computer or mobile device. You can also sync it with your iCloud account and access it from any Apple device.

      -

      Among the online services and apps that stream "Let Her Go" by Christopher Martin, we recommend Spotify as the best option. This is because Spotify has a huge catalog of music, a smart algorithm that suggests songs based on your preferences, and a social feature that lets you share your playlists with your friends. You can easily stream the song from Spotify using your account and listen to it online or offline. You can also create your own radio station based on the song or discover similar songs by other artists.

      -

      The tips and tricks for a smooth download

      -

      To ensure a smooth download of "Let Her Go" by Christopher Martin, here are some tips and tricks that you should follow:

      -
      • Make sure you have a stable internet connection and enough storage space on your device.
      • Choose a reputable and reliable source or platform that offers high-quality MP3 files.
      • Check the reviews and ratings of the source or platform before buying or streaming the song.
      • Use a VPN or proxy service if you encounter any geo-restrictions or censorship issues.
      • Scan your device for viruses or malware after downloading or streaming the song.

      Conclusion

      -

      "Let Her Go" by Christopher Martin is a beautiful reggae cover of Passenger's hit song. It showcases Martin's talent and versatility as a singer and songwriter. It also conveys a powerful message about love, loss, and hope. If you want to download it as an MP3 file, you can do so legally and ethically by buying it from an authorized online store or platform, such as iTunes, or streaming it from a licensed online service or app, such as Spotify. You can also follow some tips and tricks to ensure a smooth download and enjoy the song on your device. We hope you found this article helpful and informative. If you have any questions or comments, feel free to leave them below.

      -

      FAQs

      -

      Here are some frequently asked questions about "Let Her Go" by Christopher Martin:

      -
      1. When did Christopher Martin release his cover of "Let Her Go"?
         He released it in 2014 as a single.
      2. What album is "Let Her Go" by Christopher Martin from?
         It is not from any album, but it is included in some of his compilations, such as Reggae Gold 2014 and Strictly the Best Vol. 50.
      3. How long is "Let Her Go" by Christopher Martin?
         It is 4 minutes and 25 seconds long.
      4. Who produced "Let Her Go" by Christopher Martin?
         It was produced by Frankie Music, a Jamaican record label and production company.
      5. Where can I watch the video of "Let Her Go" by Christopher Martin?
         You can watch it on YouTube or on his official website.

      \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Dr. Driving for PC How to Download and Play the Best Racing Game on Your Computer.md b/spaces/1phancelerku/anime-remove-background/Dr. Driving for PC How to Download and Play the Best Racing Game on Your Computer.md deleted file mode 100644 index d73be347089a1ad632162f27fc247a503fec416b..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Dr. Driving for PC How to Download and Play the Best Racing Game on Your Computer.md +++ /dev/null @@ -1,131 +0,0 @@ -
      -

      How to Download and Play Dr. Driving on PC

      -

      Do you love racing games but want to experience a more realistic driving simulation? Do you want to test your driving skills in various missions and modes? Do you want to challenge other players online and compete for the top spot on the leaderboards? If you answered yes to any of these questions, then you should try Dr. Driving, a popular racing game developed by SUD Inc.

      -

      dr driving pc download apk


      DOWNLOAD ->->->-> https://jinyurl.com/2uNSbT



      -

      Dr. Driving is a game that drives you crazy with its fast-paced and visually stunning gameplay. You can choose from different cars, modes, and missions, and drive through the streets with realistic physics and graphics. You can also sign in with your Google account to play online multiplayer and see how you rank among other drivers.

      -

      But what if you want to play Dr. Driving on a bigger screen with better controls? What if you want to access thousands of other Android games and apps on your PC? Well, there is a way to do that, and it's called an emulator. An emulator is a software that lets you run Android applications on your PC or laptop, giving you the best of both worlds.

      -

      In this article, we will show you how to download and install Dr. Driving on PC using three different emulators: BlueStacks, LDPlayer, and NoxPlayer. We will also give you some tips and tricks for playing Dr. Driving on PC more efficiently and enjoyably.

      -

      What is Dr. Driving?

      -

      Dr. Driving is a racing game that is not like any other racing game you have played before. It is not about speed or adrenaline, but about skill and precision. It is a game that simulates real driving scenarios, such as parking, lane changing, traffic rules, etc.

      -

      A racing game with realistic driving physics and graphics

      -

      Dr. Driving features realistic driving physics that make you feel like you are behind the wheel of a real car. You can feel the weight, acceleration, braking, steering, and suspension of your car as you drive through the streets with different weather and traffic conditions. You can also see the details of your car, such as the speedometer, the fuel gauge, the brake pedal, etc.

      -

      A game that challenges you to complete various missions and modes

      -

      Dr. Driving is not just about driving around aimlessly. It is a game that tests your driving skills in various missions and modes. You can choose from different game modes, such as Highway, Drift, Speed, Fuel Efficiency, VIP Escort, Parking, Broken Brake, and more. Each mode has its own objectives and difficulties, such as reaching a certain speed, drifting for a certain distance, saving fuel, avoiding collisions, parking accurately, etc. You can also earn coins and gold by completing missions and modes, which you can use to buy or upgrade your cars.

      -

      -

      A game that supports online multiplayer and leaderboards

      -

      Dr. Driving is not just a solo game. It is also a game that lets you compete with other players online. You can sign in with your Google account to play online multiplayer and see how you rank among other drivers. You can challenge your friends or random players in different modes, such as Speed Parking, Drift Race, Fuel Battle, etc. You can also see your stats and achievements on the leaderboards and compare them with others.

      -

      Why play Dr. Driving on PC?

      -

      Dr. Driving is a fun and addictive game that you can play on your Android device. But what if you want to enjoy it on a bigger screen with better controls? What if you want to access thousands of other Android games and apps on your PC? Well, there is a way to do that, and it's called an emulator. An emulator is a software that lets you run Android applications on your PC or laptop, giving you the best of both worlds.

      -

      Enjoy a larger screen and better controls with an emulator

      -

      One of the main advantages of playing Dr. Driving on PC using an emulator is that you can enjoy a larger screen and better controls. You can see the graphics and details of the game more clearly and vividly on your PC monitor. You can also use your keyboard, mouse, or gamepad to control your car more smoothly and accurately. You can customize your controls using the emulator settings and choose the layout that suits you best.

      -

      Choose from different emulators such as BlueStacks, LDPlayer, Nox, etc.

      -

      Another advantage of playing Dr. Driving on PC using an emulator is that you can choose from different emulators according to your preferences and needs. There are many emulators available for PC, such as BlueStacks, LDPlayer, NoxPlayer, etc. Each emulator has its own features and benefits, such as performance, compatibility, stability, user interface, etc. You can compare and try different emulators to find the one that works best for you.

      -

      Access thousands of other Android games and apps on your PC

      -

      A third advantage of playing Dr. Driving on PC using an emulator is that you can access thousands of other Android games and apps on your PC. You can download and install any Android game or app from the Google Play Store or other sources using the emulator. You can also switch between different games and apps easily using the emulator's multitasking feature. You can enjoy a variety of Android games and apps on your PC without any hassle.

      -

      How to download and install Dr. Driving on PC using BlueStacks

      -

      One of the most popular and widely used emulators for PC is BlueStacks. BlueStacks is a powerful and reliable emulator that lets you run Android games and apps on your PC smoothly and efficiently. Here are the steps to download and install Dr. Driving on PC using BlueStacks:

      -

      Download and install BlueStacks from its official website

      -

      The first step is to download and install BlueStacks from its official website . You can choose the version that matches your PC's operating system (Windows or Mac). The download process may take some time depending on your internet speed. Once the download is complete, you can run the installer file and follow the instructions to install BlueStacks on your PC.

      -

      Launch BlueStacks and sign in with your Google account

      -

      The second step is to launch BlueStacks and sign in with your Google account . You can use your existing Google account or create a new one if you don't have one yet. Signing in with your Google account will allow you to access the Google Play Store and other Google services on BlueStacks.

      -

      Search for Dr. Driving in the Play Store and install it

      -

      The third step is to search for Dr. Driving in the Play Store and install it . You can use the search bar on the top right corner of the BlueStacks home screen to look for Dr. Driving. You can also browse the categories or recommendations to find the game. Once you find the game, you can click on it and then click on the Install button to download and install it on BlueStacks.

      -

      Click the Dr. Driving icon on the home screen and start playing

      -

      The fourth and final step is to click the Dr. Driving icon on the home screen and start playing . You can also find the game in the My Games tab on BlueStacks. You can now enjoy Dr. Driving on your PC with a larger screen and better controls.

      -

      How to download and install Dr. Driving on PC using LDPlayer

      -

      Another popular and widely used emulator for PC is LDPlayer. LDPlayer is a fast and lightweight emulator that lets you run Android games and apps on your PC smoothly and efficiently. Here are the steps to download and install Dr. Driving on PC using LDPlayer:

      -

      Download and install LDPlayer from its official website

      -

      The first step is to download and install LDPlayer from its official website . You can choose the version that matches your PC's operating system (Windows or Mac). The download process may take some time depending on your internet speed. Once the download is complete, you can run the installer file and follow the instructions to install LDPlayer on your PC.

      -

      Launch LDPlayer and sign in with your Google account

      -

      The second step is to launch LDPlayer and sign in with your Google account . You can use your existing Google account or create a new one if you don't have one yet. Signing in with your Google account will allow you to access the LD Store and other Google services on LDPlayer.

      -

      Search for Dr. Driving in the LD Store and install it

      -

      The third step is to search for Dr. Driving in the LD Store and install it . You can use the search bar on the top right corner of the LDPlayer home screen to look for Dr. Driving. You can also browse the categories or recommendations to find the game. Once you find the game, you can click on it and then click on the Install button to download and install it on LDPlayer.

      -

      Click the Dr. Driving icon on the desktop and start playing

      -

      The fourth and final step is to click the Dr. Driving icon on the desktop and start playing . You can also find the game in the My Games tab on LDPlayer. You can now enjoy Dr. Driving on your PC with a larger screen and better controls.

      -

      How to download and install Dr. Driving on PC using NoxPlayer

      -

      A third popular and widely used emulator for PC is NoxPlayer. NoxPlayer is a powerful and stable emulator that lets you run Android games and apps on your PC smoothly and efficiently. Here are the steps to download and install Dr. Driving on PC using NoxPlayer:

      -

      Download and install NoxPlayer from its official website

      -

      The first step is to download and install NoxPlayer from its official website . You can choose the version that matches your PC's operating system (Windows or Mac). The download process may take some time depending on your internet speed. Once the download is complete, you can run the installer file and follow the instructions to install NoxPlayer on your PC.

      -

      Launch NoxPlayer and sign in with your Google account

      -

      The second step is to launch NoxPlayer and sign in with your Google account . You can use your existing Google account or create a new one if you don't have one yet. Signing in with your Google account will allow you to access the Google Play Store and other Google services on NoxPlayer.

      -

      Drag and drop the APK/XAPK file of Dr. Driving onto the NoxPlayer window

      -

      The third step is to drag and drop the APK/XAPK file of Dr. Driving onto the NoxPlayer window . You can download the APK/XAPK file of Dr. Driving from any reliable source, such as APKPure, APKMirror, etc. Once you drag and drop the file onto the NoxPlayer window, it will automatically install the game on NoxPlayer.
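      As an alternative to dragging the file, most Android emulators, NoxPlayer included, also expose an adb endpoint, so you can install the APK from a terminal or a small script. The snippet below is only a rough sketch under a few assumptions: adb is installed on your PC, 127.0.0.1:62001 is the address NoxPlayer is commonly reported to listen on (check your emulator's documentation if it differs), and the file name is a placeholder for whatever you actually downloaded. Note that this works for plain .apk files; an .xapk bundle needs to be unpacked or installed through the emulator's own installer.

```python
import subprocess

APK_PATH = "dr-driving.apk"        # placeholder: path to the APK you downloaded
EMULATOR = "127.0.0.1:62001"       # assumption: NoxPlayer's commonly cited adb address

# Attach adb to the running emulator instance.
subprocess.run(["adb", "connect", EMULATOR], check=True)

# Install the APK (-r reinstalls/updates if it is already present).
subprocess.run(["adb", "-s", EMULATOR, "install", "-r", APK_PATH], check=True)

# Confirm the install by listing packages whose name mentions "driving".
out = subprocess.run(["adb", "-s", EMULATOR, "shell", "pm", "list", "packages"],
                     capture_output=True, text=True, check=True).stdout
print([p for p in out.splitlines() if "driving" in p.lower()])
```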

      -

      Click the Dr. Driving icon on the home screen and start playing

      -

      The fourth and final step is to click the Dr. Driving icon on the home screen and start playing . You can also find the game in the My Games tab on NoxPlayer. You can now enjoy Dr. Driving on your PC with a larger screen and better controls.

      -

      Tips and tricks for playing Dr. Driving on PC

      -

      Now that you know how to download and install Dr. Driving on PC using different emulators, you might want to know some tips and tricks for playing the game more efficiently and enjoyably. Here are some of them:

      -

      Customize your controls using the keyboard, mouse, or gamepad

      -

      One of the benefits of playing Dr. Driving on PC using an emulator is that you can customize your controls using the keyboard, mouse, or gamepad . You can use the emulator settings to change the key mapping, sensitivity, layout, etc. of your controls. You can also use the default controls or choose from different presets. You can find the best controls that suit your style and preference.

      -

      Use the overview mode to see your car and the road better

      -

      Another tip for playing Dr. Driving on PC using an emulator is to use the overview mode to see your car and the road better . You can use the overview mode by pressing the O key on your keyboard or clicking the O button on your emulator screen. The overview mode will zoom out your view and show you a wider angle of your car and the road. This will help you see your surroundings better and avoid crashing into other vehicles or obstacles.

      -

      Follow the instructions and avoid crashing into other vehicles or obstacles

      -

      A third tip for playing Dr. Driving on PC using an emulator is to follow the instructions and avoid crashing into other vehicles or obstacles . You can see the instructions for each mode and mission on the top left corner of your screen. You can also see the indicators for your speed, fuel, damage, etc. on the bottom right corner of your screen. You should follow the instructions carefully and complete the objectives as fast as possible. You should also avoid crashing into other vehicles or obstacles, as this will damage your car and reduce your score.

      -

      Earn coins and gold by completing missions and modes

      -

      A fourth tip for playing Dr. Driving on PC using an emulator is to earn coins and gold by completing missions and modes . You can earn coins and gold by finishing each mode and mission with a high score and rank. You can also earn coins and gold by signing in with your Google account and playing online multiplayer. You can use coins and gold to buy or upgrade your cars in the shop.

      -

      Upgrade your car or buy new ones with different features and stats

      -

      A fifth tip for playing Dr. Driving on PC using an emulator is to upgrade your car or buy new ones with different features and stats . You can upgrade your car's engine, brake, tire, suspension, etc. in the shop using coins. You can also buy new cars with different features and stats, such as speed, handling, fuel efficiency, etc. using gold. You can choose from different cars, such as sedan, truck, bus, sports car, etc. You can also customize your car's color and appearance.

      -

      Conclusion

      -

      Dr. Driving is a racing game that drives you crazy with its realistic driving physics and graphics. You can choose from different cars, modes, and missions, and drive through the streets with skill and precision. You can also sign in with your Google account to play online multiplayer and compete with other drivers.

      -

      If you want to play Dr. Driving on PC, you can use an emulator to run the game on your PC or laptop. You can choose from different emulators, such as BlueStacks, LDPlayer, and NoxPlayer, and follow the steps to download and install the game on your PC. You can also customize your controls using the keyboard, mouse, or gamepad, and enjoy a larger screen and better performance.

      -

      We hope this article has helped you learn how to download and play Dr. Driving on PC using different emulators. We also hope you have enjoyed some tips and tricks for playing the game more efficiently and enjoyably. If you have any questions or feedback, please feel free to leave a comment below.

      -

      FAQs

      -

      Here are some frequently asked questions about Dr. Driving and playing it on PC:

      -

      Is Dr. Driving free to play?

      -

      Yes, Dr. Driving is free to play on Android devices. You can download and install it from the Google Play Store or other sources without paying any money. However, the game may contain ads and in-app purchases that require real money.

      -

      Is Dr. Driving safe to play?

      -

      Yes, Dr. Driving is safe to play on Android devices. The game does not contain any harmful or malicious content that may harm your device or data. However, you should always download and install the game from trusted sources and avoid any modded or hacked versions that may contain viruses or malware.

      -

      Can I play Dr. Driving offline?

      -

      Yes, you can play Dr. Driving offline on Android devices. You can play most of the modes and missions without an internet connection. However, you will need an internet connection to play online multiplayer and access some of the Google services, such as leaderboards and achievements.

      -

      Can I play Dr. Driving with my friends?

      -

      Yes, you can play Dr. Driving with your friends on Android devices. You can sign in with your Google account and challenge your friends or random players in different modes online. You can also see your friends' stats and rankings on the leaderboards.

      -

      Can I transfer my progress from Android to PC or vice versa?

      -

      Yes, you can transfer your progress from Android to PC or vice versa using an emulator. You can use the same Google account to sign in on both devices and sync your data across them. However, you may need to reinstall the game on the new device if you switch between different emulators.

      -
      -
      \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Fly Anywhere in the World with Flight Simulator Download.md b/spaces/1phancelerku/anime-remove-background/Fly Anywhere in the World with Flight Simulator Download.md deleted file mode 100644 index f77bcef188c12fbc79282b4d7c5f32aff919efd0..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Fly Anywhere in the World with Flight Simulator Download.md +++ /dev/null @@ -1,155 +0,0 @@ - -

      Flight Simulator Download: How to Get the Best Flight Simulation Experience

      -

      If you are a fan of aviation, you might have dreamed of flying a plane yourself. But unless you have a pilot license, a lot of money, and access to an airport, this dream might seem impossible. Fortunately, there is a way to experience the thrill and challenge of flying without leaving your home: flight simulation.

      -

      flight simulator download


      Download > https://jinyurl.com/2uNRug



      -

      Introduction

      -

      What is flight simulation and why is it popular?

      -

      Flight simulation is a computer-based activity that simulates the operation and control of an aircraft in a virtual environment. It allows you to fly various types of planes, from light aircraft to wide-body jets, in different locations, weather conditions, and scenarios. You can also interact with other pilots, air traffic controllers, and ground crew online.

      -

      Flight simulation is popular among aviation enthusiasts, hobbyists, gamers, and aspiring pilots. It offers a realistic and immersive way to learn about aviation, practice your skills, explore the world, and have fun. It can also help you prepare for real-world flying situations, such as emergencies, navigation, communication, and instrument procedures.

      -

      What are the benefits of flight simulation?

      -

      Flight simulation has many benefits for different types of users. Some of the benefits are:

      -
        -
      • It improves your knowledge of aviation theory, terminology, regulations, and procedures.
      • -
      • It enhances your spatial awareness, hand-eye coordination, decision making, and problem solving skills.
      • -
      • It boosts your confidence and self-esteem as you master new challenges and achieve your goals.
      • -
      • It reduces your stress and anxiety by providing a relaxing and enjoyable activity.
      • -
      • It saves you time and money by allowing you to fly anytime and anywhere without spending on fuel, maintenance, or fees.
      • -
      -

      What are the features of a good flight simulator?

      -

      Not all flight simulators are created equal. Some are more realistic, detailed, and comprehensive than others. Some are more user-friendly, customizable, and compatible than others. Some are more affordable, accessible, and supported than others. So how do you choose the best flight simulator for your needs?

      -

      Here are some of the features that you should look for in a good flight simulator:

      -
        -
      • A large selection of aircraft models with accurate physics, aerodynamics, performance, and appearance.
      • -
      • A vast and diverse world with high-resolution scenery, landmarks, airports, runways, and terrain.
      • -
      • A dynamic weather system with realistic clouds, wind, precipitation, visibility, temperature, and pressure.
      • -
      • A live traffic system with real-time flights, schedules, routes, frequencies, and callsigns.
      • -
      • A multiplayer mode with voice chat, shared cockpit, formation flying, and online events.
      • -
      • A user interface that is easy to navigate, configure, and customize.
      • -
      • A tutorial mode that guides you through the basics of flying and using the simulator.
      • -
      • A mission mode that challenges you with various objectives and scenarios.
      • -
      • A community that provides support, feedback, tips, mods, add-ons, and updates.
      • -
      -

      Microsoft Flight Simulator: The Next Generation of Flight Simulation

      -

      What is Microsoft Flight Simulator and how does it work?

      -


      Microsoft Flight Simulator is the next generation of one of the most beloved and longest-running flight simulation franchises in history. It is developed by Asobo Studio and published by Xbox Game Studios, and it was released in August 2020 for Windows 10, with an Xbox Series X/S version following in 2021.

      -

      Microsoft Flight Simulator uses cutting-edge technology to create a stunning and realistic simulation of the entire planet. It uses data from Bing Maps, Azure cloud computing, and artificial intelligence to render over 37,000 airports, 2 million cities, 1.5 billion buildings, and 2 trillion trees. It also uses real-time data from sources like FlightAware, Meteoblue, and Navblue to simulate live weather, traffic, and navigation.

      -

      To play Microsoft Flight Simulator, you need a device that meets the minimum or recommended system requirements, a stable internet connection, and a Microsoft account. You can also use various peripherals, such as a keyboard, mouse, joystick, yoke, throttle, rudder pedals, headset, or VR headset, to enhance your experience. You can purchase the game from the Microsoft Store or Steam, or you can subscribe to Xbox Game Pass for PC or Xbox Game Pass Ultimate.

      -

      What are the main features and highlights of Microsoft Flight Simulator?

      -

      Microsoft Flight Simulator offers a wide range of features and highlights that make it one of the most impressive and immersive flight simulators ever made. Some of the main features and highlights are:

      -


      -
        -
      • A huge variety of aircraft to choose from, including light planes, airliners, helicopters, jets, gliders, and more. Each aircraft has a detailed and functional cockpit with realistic instruments and controls.
      • -
      • A beautiful and diverse world to explore, with stunning graphics and lighting effects that change depending on the time of day and weather conditions. You can fly over mountains, oceans, forests, deserts, cities, landmarks, and more.
      • -
      • A dynamic weather system that affects the flight performance and visuals of your aircraft. You can experience rain, snow, fog, clouds, wind, turbulence, lightning, thunderstorms, hurricanes, and more. You can also adjust the weather settings to your liking or match them with the real-world conditions.
      • -
      • A live traffic system that populates the skies and airports with real-world flights and aircraft. You can see and interact with other planes and vehicles on the ground and in the air. You can also tune in to various radio frequencies and communicate with air traffic controllers and other pilots.
      • -
      • A multiplayer mode that lets you fly with or against other players online. You can join or create a group session with your friends or strangers, or you can join an event or activity with specific rules and objectives. You can also chat with other players using text or voice.
      • -
      • A user interface that is easy to use and customize. You can access various menus and options from the main screen or during your flight. You can also adjust your graphics settings, sound settings, camera settings, control settings, difficulty settings, and more.
      • -
      • A tutorial mode that teaches you how to fly and use the simulator. You can learn the basics of flight physics, aerodynamics, controls, instruments, procedures, and more. You can also get feedback and tips from your instructor.

        -
      • A mission mode that challenges you with various tasks and scenarios. You can test your skills and knowledge in different situations, such as landing, takeoff, navigation, emergency, combat, and more. You can also earn rewards and achievements for completing the missions.
      • -
      • A community that provides support, feedback, tips, mods, add-ons, and updates. You can access various forums, blogs, videos, guides, reviews, and more from other users and developers. You can also download and install various mods and add-ons that enhance or modify the simulator.
      • -
      -

      Tips and Tricks for Getting the Most Out of Microsoft Flight Simulator

      -

      How to customize your settings and preferences?

      -

      One of the best things about Microsoft Flight Simulator is that you can customize it to suit your preferences and needs. Here are some of the things that you can do to make the simulator more enjoyable and comfortable for you:

      -
        -
      • Choose the right edition for you. Microsoft Flight Simulator comes in three editions: Standard, Deluxe, and Premium Deluxe. Each edition has a different number of aircraft and airports included. You can compare the editions and their prices on the official website or store.
      • -
      • Optimize your graphics settings. Microsoft Flight Simulator is a very demanding game that requires a lot of processing power and memory. To ensure a smooth and stable performance, you should adjust your graphics settings according to your device's capabilities. You can use the preset options (low, medium, high, ultra) or customize them individually.
      • -
      • Select your control device and scheme. Microsoft Flight Simulator supports various types of control devices, such as keyboards, mice, joysticks, yokes, throttles, rudders, pedals, headsets, and VR headsets. You can choose the device that you prefer or have available, and then select the corresponding control scheme from the menu. You can also customize the key bindings and sensitivity of each device (a small key-binding sketch follows this list).
      • -
      • Set your difficulty level and assistance options. Microsoft Flight Simulator allows you to choose how realistic and challenging you want your flight experience to be. You can select from three difficulty levels (easy, medium, hard) or customize them individually. You can also enable or disable various assistance options that help you with different aspects of flying, such as checklists, navigation aids, copilot assistance, failure simulation, and more.

        -
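      To make the idea of custom key bindings a bit more concrete, here is a minimal, hypothetical Python sketch. The action names and keys below are invented for illustration and are not Microsoft Flight Simulator's actual defaults or configuration format; the point is simply that a binding profile is a mapping from actions to keys, and that it is worth checking for keys accidentally assigned to more than one action.

```python
# Hypothetical key-binding profile; these actions and keys are illustrative only
# and do not correspond to Microsoft Flight Simulator's real defaults.
bindings = {
    "throttle_up": "F3",
    "throttle_down": "F2",
    "flaps_down": "F7",
    "flaps_up": "F6",
    "landing_gear_toggle": "G",
    "parking_brake": "Ctrl+Num Del",
}

def find_conflicts(profile: dict) -> dict:
    """Return a mapping of each key that is bound to more than one action."""
    by_key: dict = {}
    for action, key in profile.items():
        by_key.setdefault(key, []).append(action)
    return {key: actions for key, actions in by_key.items() if len(actions) > 1}

if __name__ == "__main__":
    conflicts = find_conflicts(bindings)
    print(conflicts if conflicts else "no conflicting key assignments")
```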

        How to create and follow your flight plan?

        -

        A flight plan is a document that specifies the details of your flight, such as the departure and arrival airports, the route, the altitude, the speed, the fuel, and the weather. It is important to create and follow a flight plan to ensure a safe and efficient flight.

        -

        Here are some of the steps that you can take to create and follow your flight plan in Microsoft Flight Simulator:

        -
          -
        • Choose your aircraft and airport. You can select from the available aircraft and airports in the simulator, or you can search for a specific one by name, code, or location. You can also choose whether you want to start from a parking spot, a runway, or in the air.
        • -
        • Plan your route. You can use the world map to plan your route by clicking on the waypoints, airways, or airports that you want to fly through. You can also enter the route manually using the ICAO codes or the GPS coordinates. You can also use various tools and websites, such as SkyVector, SimBrief, or Little Navmap, to generate and import your route (a rough distance and time sketch follows this list).
        • -
        • Set your altitude and speed. You can set your cruising altitude and speed according to your aircraft's capabilities and the airspace regulations. You can also adjust them during your flight as needed. You can use the autopilot or the manual controls to maintain your altitude and speed.
        • -
        • Check the weather and traffic. You can check the current and forecasted weather and traffic conditions for your flight using the simulator's menu or external sources, such as Meteoblue, FlightAware, or Navblue. You can also change the weather and traffic settings to your liking or match them with the real-world conditions.
        • -
        • Follow your flight plan. You can use the navigation instruments and aids in your cockpit, such as the GPS, the VOR, the NDB, the DME, the ILS, or the ATC, to follow your flight plan. You can also use the simulator's menu or external sources, such as Navigraph Charts, FltPlan Go, or ForeFlight, to view your flight plan on a map.
        • -
        -
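        As a rough illustration of the route-planning arithmetic above, here is a minimal Python sketch that estimates the great-circle distance and time en route between a departure and an arrival airport. The airports, coordinates, and cruise speed are illustrative assumptions, and the sketch is not part of Microsoft Flight Simulator or of any of the planning tools mentioned; real flight planning also has to account for winds, routing restrictions, climb and descent, and fuel reserves.

```python
import math

# Hypothetical departure and arrival airports (ICAO code, latitude, longitude in
# decimal degrees); in practice you would take these from your planning tool.
DEPARTURE = ("KJFK", 40.6413, -73.7781)
ARRIVAL = ("KBOS", 42.3656, -71.0096)

def great_circle_nm(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in nautical miles using the haversine formula."""
    r_nm = 3440.065  # mean Earth radius in nautical miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

if __name__ == "__main__":
    distance = great_circle_nm(DEPARTURE[1], DEPARTURE[2], ARRIVAL[1], ARRIVAL[2])
    cruise_kts = 120  # assumed cruise speed for a light aircraft, in knots
    print(f"{DEPARTURE[0]} -> {ARRIVAL[0]}: {distance:.0f} nm, "
          f"about {distance / cruise_kts:.1f} h at {cruise_kts} kt")
```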

        How to use the interactive cockpit and instruments?

        -

        One of the most realistic and immersive features of Microsoft Flight Simulator is the interactive cockpit and instruments. Each aircraft has a detailed and functional cockpit with realistic instruments and controls that you can interact with using your mouse, keyboard, or peripheral device.

        -

        Here are some of the things that you can do to use the interactive cockpit and instruments in Microsoft Flight Simulator:

        -
          -
        • Familiarize yourself with the cockpit layout and functions. You can use the cockpit camera to look around and zoom in on different parts of the cockpit. You can also use the tooltips to see the name and function of each instrument and control. You can also access various checklists and manuals that explain how to operate each aircraft.
        • -
        • Interact with the instruments and controls using your mouse. You can use your mouse cursor to click on buttons, switches, knobs, levers, dials, screens, and more. You can also use your mouse wheel to rotate knobs and dials. You can also use your right mouse button to drag or pan certain instruments or controls.
        • -
        • Interact with the instruments and controls using your keyboard. You can use various keyboard shortcuts to activate or deactivate certain instruments or controls. You can also use the arrow keys, the page up and page down keys, the home and end keys, and the enter key to adjust certain instruments or controls. You can also customize your keyboard bindings in the settings menu.
        • -
        • Interact with the instruments and controls using your peripheral device. You can use various peripheral devices, such as joysticks, yokes, throttles, rudders, pedals, headsets, or VR headsets, to interact with the instruments and controls. You can also use the buttons, switches, knobs, levers, dials, screens, and more on your device to control certain instruments or controls. You can also customize your device settings and bindings in the settings menu.
        • -
        -

        How to deal with realistic weather and traffic conditions?

        -

        Another realistic and immersive feature of Microsoft Flight Simulator is the realistic weather and traffic conditions. The simulator uses real-time data from various sources to simulate the current and forecasted weather and traffic conditions for your flight. You can also change the weather and traffic settings to your liking or match them with the real-world conditions.

        -

        Here are some of the things that you can do to deal with realistic weather and traffic conditions in Microsoft Flight Simulator:

        -
          -
        • Check the weather and traffic information before and during your flight. You can use the simulator's menu or external sources, such as Meteoblue, FlightAware, or Navblue, to check the current and forecasted weather and traffic conditions for your flight. You can also see the weather and traffic information on your navigation instruments and aids, such as the GPS, the ATIS, or the ATC.
        • -
        • Adjust your flight plan and performance according to the weather and traffic conditions. You can use the simulator's menu or external sources, such as SkyVector, SimBrief, or Little Navmap, to plan or modify your route, altitude, speed, fuel, and more according to the weather and traffic conditions. You can also adjust your aircraft's performance settings, such as the engine power, the flaps, the trim, the landing gear, and more according to the weather and traffic conditions.
        • -
        • Follow the rules and procedures for flying in different weather and traffic conditions. You can use various sources, such as checklists, manuals, guides, videos, or online courses, to learn and follow these rules and procedures. You can also use the simulator's tutorial mode, mission mode, or assistance options to help you with them.

          -
        • React to the changing weather and traffic conditions during your flight. You can use your instruments, controls, communication, and judgment to react to the changing weather and traffic conditions during your flight. You can also use the simulator's menu or external sources, such as Meteoblue, FlightAware, or Navblue, to update the weather and traffic information during your flight. You can also pause, save, or restart your flight if needed.
        • -
        -

        Conclusion

        -

        Flight simulation is a great way to experience the thrill and challenge of flying without leaving your home. It has many benefits for different types of users, such as improving your knowledge, skills, confidence, and enjoyment of aviation. However, not all flight simulators are the same. You need to choose the best flight simulator for your needs and preferences.

        -

        Microsoft Flight Simulator is one of the best flight simulators available today. It offers a stunning and realistic simulation of the entire planet with various types of aircraft, scenery, weather, traffic, and more. It also offers a user-friendly and customizable interface with various modes, settings, options, and features. It also has a supportive and active community that provides support, feedback, tips, mods, add-ons, and updates.

        -

        To get the best flight simulation experience with Microsoft Flight Simulator, you need to download and install it on your device, customize your settings and preferences, create and follow your flight plan, use the interactive cockpit and instruments, and deal with realistic weather and traffic conditions. You also need to follow the rules and procedures for flying in different situations, and react to the changing conditions during your flight. You can also use various sources and tools to help you with your flight simulation, such as tutorials, missions, checklists, manuals, guides, videos, websites, apps, and more.

        -

        Flight simulation is a fun and rewarding activity that can enrich your life in many ways. Whether you are a beginner or an expert, a casual or a serious user, a gamer or a learner, you can find something that suits your taste and style in Microsoft Flight Simulator. So what are you waiting for? Download Microsoft Flight Simulator today and start your flight simulation adventure!

        -

        FAQs

        -

        Here are some of the frequently asked questions about flight simulation and Microsoft Flight Simulator:

        -
        1. Q: How realistic is Microsoft Flight Simulator?
           A: Microsoft Flight Simulator is one of the most realistic flight simulators ever made. It uses advanced technology and data to create a lifelike simulation of the entire planet with various types of aircraft, scenery, weather, traffic, and more. It also simulates the physics, aerodynamics, performance, and appearance of each aircraft with high accuracy and detail.
        2. Q: How much does Microsoft Flight Simulator cost?
           A: Microsoft Flight Simulator comes in three editions: Standard ($59.99), Deluxe ($89.99), and Premium Deluxe ($119.99). Each edition has a different number of aircraft and airports included. You can also subscribe to Xbox Game Pass for PC or Xbox Game Pass Ultimate to access the Standard edition of the game.
        3. Q: What are the system requirements for Microsoft Flight Simulator?
           A: Microsoft Flight Simulator is a very demanding game that requires a lot of processing power and memory. The minimum system requirements are: Windows 10 64-bit, Intel Core i5-4460 or AMD Ryzen 3 1200 processor, NVIDIA GTX 770 or AMD Radeon RX 570 graphics card, 8 GB RAM, 150 GB storage space, and 5 Mbps internet speed. The recommended system requirements are: Windows 10 64-bit, Intel Core i5-8400 or AMD Ryzen 5 1500X processor, NVIDIA GTX 970 or AMD Radeon RX 590 graphics card, 16 GB RAM, 150 GB storage space, and 20 Mbps internet speed.
        4. Q: How do I download and install Microsoft Flight Simulator?
           A: You can download and install Microsoft Flight Simulator from the Microsoft Store or Steam. You need to have a Microsoft account and a stable internet connection. You also need to have enough storage space on your device. The download size is about 100 GB for the Standard edition, 125 GB for the Deluxe edition, and 150 GB for the Premium Deluxe edition (a rough free-space check is sketched after this FAQ).
        5. Q: Where can I find more information and help about Microsoft Flight Simulator?
           A: You can find more information and help about Microsoft Flight Simulator on the official website (https://www.flightsimulator.com/), the official forums (https://forums.flightsimulator.com/), the official support page (https://flightsimulator.zendesk.com/hc/en-us), the official YouTube channel (https://www.youtube.com/channel/UCqONzeACDBaF6FfKjh7ndAQ), or various other sources, such as blogs, videos, guides, reviews, and more from other users and developers.
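        As a rough, hypothetical companion to the download-size figures quoted in this FAQ, here is a minimal Python sketch (standard library only) that checks whether the drive you plan to install on has enough free space for a given edition. The 10 GB headroom and the default install path are arbitrary assumptions for illustration, not anything the game itself requires.

```python
import shutil

# Approximate download sizes (in GB) as quoted in this article for each edition.
DOWNLOAD_SIZE_GB = {
    "Standard": 100,
    "Deluxe": 125,
    "Premium Deluxe": 150,
}

def has_room_for(edition: str, install_path: str = ".", headroom_gb: float = 10.0) -> bool:
    """True if the drive holding install_path has enough free space for the
    chosen edition plus some headroom for updates (the headroom is an assumption)."""
    free_gb = shutil.disk_usage(install_path).free / 1024 ** 3
    return free_gb >= DOWNLOAD_SIZE_GB[edition] + headroom_gb

if __name__ == "__main__":
    for edition in DOWNLOAD_SIZE_GB:
        status = "enough free space" if has_room_for(edition) else "not enough free space"
        print(f"{edition}: {status}")
```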

        I hope this article has helped you learn more about flight simulation and Microsoft Flight Simulator. If you have any questions or comments, please feel free to share them below. Happy flying!

        -
        -
        \ No newline at end of file diff --git a/spaces/801artistry/RVC801/configs/config.py b/spaces/801artistry/RVC801/configs/config.py deleted file mode 100644 index e3b0205a1f0d62f674b9c3de2c5ab7ee90464945..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/configs/config.py +++ /dev/null @@ -1,265 +0,0 @@ -import argparse -import os -import sys -import json -from multiprocessing import cpu_count - -import torch - -try: - import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import - if torch.xpu.is_available(): - from infer.modules.ipex import ipex_init - ipex_init() -except Exception: - pass - -import logging - -logger = logging.getLogger(__name__) - - -version_config_list = [ - "v1/32k.json", - "v1/40k.json", - "v1/48k.json", - "v2/48k.json", - "v2/32k.json", -] - - -def singleton_variable(func): - def wrapper(*args, **kwargs): - if not wrapper.instance: - wrapper.instance = func(*args, **kwargs) - return wrapper.instance - - wrapper.instance = None - return wrapper - - -@singleton_variable -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.json_config = self.load_config_json() - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.iscolab, - self.noparallel, - self.noautoopen, - self.paperspace, - self.is_cli, - self.grtheme, - self.dml, - ) = self.arg_parse() - self.instead = "" - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def load_config_json() -> dict: - d = {} - for config_file in version_config_list: - with open(f"configs/{config_file}", "r") as f: - d[config_file] = json.load(f) - return d - - @staticmethod - def arg_parse() -> tuple: - exe = sys.executable or "python" - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument("--pycmd", type=str, default=exe, help="Python command") - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - parser.add_argument( - "--paperspace", - action="store_true", - help="Note that this argument just shares a gradio link for the web UI. Thus can be used on other non-local CLI systems.", - ) - parser.add_argument( - "--is_cli", - action="store_true", - help="Use the CLI instead of setting up a gradio UI. This flag will launch an RVC text interface where you can execute functions from infer-web.py!", - ) - - parser.add_argument( - "-t", - "--theme", - help = "Theme for Gradio. Format - `JohnSmith9982/small_and_pretty` (no backticks)", - default = "JohnSmith9982/small_and_pretty", - type = str - ) - - parser.add_argument( - "--dml", - action="store_true", - help="Use DirectML backend instead of CUDA." - ) - - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - cmd_opts.paperspace, - cmd_opts.is_cli, - cmd_opts.theme, - cmd_opts.dml, - ) - - # has_mps is only available in nightly pytorch (for now) and MasOS 12.3+. 
- # check `getattr` and try it for compatibility - @staticmethod - def has_mps() -> bool: - if not torch.backends.mps.is_available(): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - @staticmethod - def has_xpu() -> bool: - if hasattr(torch, "xpu") and torch.xpu.is_available(): - return True - else: - return False - - def use_fp32_config(self): - for config_file in version_config_list: - self.json_config[config_file]["train"]["fp16_run"] = False - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - if self.has_xpu(): - self.device = self.instead = "xpu:0" - self.is_half = True - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "P10" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - logger.info("Found GPU %s, force to fp32", self.gpu_name) - self.is_half = False - self.use_fp32_config() - else: - logger.info("Found GPU %s", self.gpu_name) - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - if self.gpu_mem <= 4: - with open("infer/modules/train/preprocess.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("infer/modules/train/preprocess.py", "w") as f: - f.write(strr) - elif self.has_mps(): - logger.info("No supported Nvidia GPU found") - self.device = self.instead = "mps" - self.is_half = False - self.use_fp32_config() - else: - logger.info("No supported Nvidia GPU found") - self.device = self.instead = "cpu" - self.is_half = False - self.use_fp32_config() - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem is not None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - if self.dml: - logger.info("Use DirectML instead") - if ( - os.path.exists( - "runtime\Lib\site-packages\onnxruntime\capi\DirectML.dll" - ) - == False - ): - try: - os.rename( - "runtime\Lib\site-packages\onnxruntime", - "runtime\Lib\site-packages\onnxruntime-cuda", - ) - except: - pass - try: - os.rename( - "runtime\Lib\site-packages\onnxruntime-dml", - "runtime\Lib\site-packages\onnxruntime", - ) - except: - pass - # if self.device != "cpu": - import torch_directml - - self.device = torch_directml.device(torch_directml.default_device()) - self.is_half = False - else: - if self.instead: - logger.info(f"Use {self.instead} instead") - if ( - os.path.exists( - "runtime\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll" - ) - == False - ): - try: - os.rename( - "runtime\Lib\site-packages\onnxruntime", - "runtime\Lib\site-packages\onnxruntime-dml", - ) - except: - pass - try: - os.rename( - "runtime\Lib\site-packages\onnxruntime-cuda", - "runtime\Lib\site-packages\onnxruntime", - ) - except: - pass - return x_pad, x_query, x_center, x_max diff --git a/spaces/A666sxr/Genshin_TTS/text/english.py b/spaces/A666sxr/Genshin_TTS/text/english.py deleted file mode 100644 index 4de565e8ebc68cdb5db680f6c02ef8167c4d1688..0000000000000000000000000000000000000000 --- a/spaces/A666sxr/Genshin_TTS/text/english.py +++ /dev/null @@ -1,171 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are 
transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - - -# Regular expression matching whitespace: - - -import re -import inflect -from unidecode import unidecode -import eng_to_ipa as ipa -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -# List of (ipa, lazy ipa) pairs: -_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('æ', 'e'), - ('ɑ', 'a'), - ('ɔ', 'o'), - ('ð', 'z'), - ('θ', 's'), - ('ɛ', 'e'), - ('ɪ', 'i'), - ('ʊ', 'u'), - ('ʒ', 'ʥ'), - ('ʤ', 'ʥ'), - ('ˈ', '↓'), -]] - -# List of (ipa, ipa2) pairs -_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ʤ', 'dʒ'), - ('ʧ', 'tʃ') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def collapse_whitespace(text): - return re.sub(r'\s+', ' ', text) - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - 
return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text - - -def mark_dark_l(text): - return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text) - - -def english_to_ipa(text): - text = unidecode(text).lower() - text = expand_abbreviations(text) - text = normalize_numbers(text) - phonemes = ipa.convert(text) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_to_lazy_ipa(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def english_to_ipa2(text): - text = english_to_ipa(text) - text = mark_dark_l(text) - for regex, replacement in _ipa_to_ipa2: - text = re.sub(regex, replacement, text) - return text.replace('...', '…') diff --git a/spaces/AIConsultant/MusicGen/audiocraft/environment.py b/spaces/AIConsultant/MusicGen/audiocraft/environment.py deleted file mode 100644 index adc7819305758bb50a9984928bfa7f13eabef5f5..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/environment.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Provides cluster and tools configuration across clusters (slurm, dora, utilities). -""" - -import logging -import os -from pathlib import Path -import re -import typing as tp - -import omegaconf - -from .utils.cluster import _guess_cluster_type - - -logger = logging.getLogger(__name__) - - -class AudioCraftEnvironment: - """Environment configuration for teams and clusters. - - AudioCraftEnvironment picks compute cluster settings (slurm, dora) from the current running environment - or declared variable and the loaded team configuration. Additionally, the AudioCraftEnvironment - provides pointers to a reference folder resolved automatically across clusters that is shared across team members, - allowing to share sigs or other files to run jobs. Finally, it provides dataset mappers to automatically - map dataset file paths to new locations across clusters, allowing to use the same manifest of files across cluters. - - The cluster type is identified automatically and base configuration file is read from config/teams.yaml. - Use the following environment variables to specify the cluster, team or configuration: - - AUDIOCRAFT_CLUSTER (optional): Cluster type to enforce. Useful if the cluster type - cannot be inferred automatically. - AUDIOCRAFT_CONFIG (optional): Path to yaml config holding the teams configuration. - If not set, configuration is read from config/teams.yaml. - AUDIOCRAFT_TEAM (optional): Name of the team. Recommended to set to your own team. - Cluster configuration are shared across teams to match compute allocation, - specify your cluster configuration in the configuration file under a key mapping - your team name. 
- """ - _instance = None - DEFAULT_TEAM = "default" - - def __init__(self) -> None: - """Loads configuration.""" - self.team: str = os.getenv("AUDIOCRAFT_TEAM", self.DEFAULT_TEAM) - cluster_type = _guess_cluster_type() - cluster = os.getenv( - "AUDIOCRAFT_CLUSTER", cluster_type.value - ) - logger.info("Detecting cluster type %s", cluster_type) - - self.cluster: str = cluster - - config_path = os.getenv( - "AUDIOCRAFT_CONFIG", - Path(__file__) - .parent.parent.joinpath("config/teams", self.team) - .with_suffix(".yaml"), - ) - self.config = omegaconf.OmegaConf.load(config_path) - self._dataset_mappers = [] - cluster_config = self._get_cluster_config() - if "dataset_mappers" in cluster_config: - for pattern, repl in cluster_config["dataset_mappers"].items(): - regex = re.compile(pattern) - self._dataset_mappers.append((regex, repl)) - - def _get_cluster_config(self) -> omegaconf.DictConfig: - assert isinstance(self.config, omegaconf.DictConfig) - return self.config[self.cluster] - - @classmethod - def instance(cls): - if cls._instance is None: - cls._instance = cls() - return cls._instance - - @classmethod - def reset(cls): - """Clears the environment and forces a reload on next invocation.""" - cls._instance = None - - @classmethod - def get_team(cls) -> str: - """Gets the selected team as dictated by the AUDIOCRAFT_TEAM env var. - If not defined, defaults to "labs". - """ - return cls.instance().team - - @classmethod - def get_cluster(cls) -> str: - """Gets the detected cluster. - This value can be overridden by the AUDIOCRAFT_CLUSTER env var. - """ - return cls.instance().cluster - - @classmethod - def get_dora_dir(cls) -> Path: - """Gets the path to the dora directory for the current team and cluster. - Value is overridden by the AUDIOCRAFT_DORA_DIR env var. - """ - cluster_config = cls.instance()._get_cluster_config() - dora_dir = os.getenv("AUDIOCRAFT_DORA_DIR", cluster_config["dora_dir"]) - logger.warning(f"Dora directory: {dora_dir}") - return Path(dora_dir) - - @classmethod - def get_reference_dir(cls) -> Path: - """Gets the path to the reference directory for the current team and cluster. - Value is overridden by the AUDIOCRAFT_REFERENCE_DIR env var. - """ - cluster_config = cls.instance()._get_cluster_config() - return Path(os.getenv("AUDIOCRAFT_REFERENCE_DIR", cluster_config["reference_dir"])) - - @classmethod - def get_slurm_exclude(cls) -> tp.Optional[str]: - """Get the list of nodes to exclude for that cluster.""" - cluster_config = cls.instance()._get_cluster_config() - return cluster_config.get("slurm_exclude") - - @classmethod - def get_slurm_partitions(cls, partition_types: tp.Optional[tp.List[str]] = None) -> str: - """Gets the requested partitions for the current team and cluster as a comma-separated string. - - Args: - partition_types (list[str], optional): partition types to retrieve. Values must be - from ['global', 'team']. If not provided, the global partition is returned. - """ - if not partition_types: - partition_types = ["global"] - - cluster_config = cls.instance()._get_cluster_config() - partitions = [ - cluster_config["partitions"][partition_type] - for partition_type in partition_types - ] - return ",".join(partitions) - - @classmethod - def resolve_reference_path(cls, path: tp.Union[str, Path]) -> Path: - """Converts reference placeholder in path with configured reference dir to resolve paths. - - Args: - path (str or Path): Path to resolve. - Returns: - Path: Resolved path. 
- """ - path = str(path) - - if path.startswith("//reference"): - reference_dir = cls.get_reference_dir() - logger.warn(f"Reference directory: {reference_dir}") - assert ( - reference_dir.exists() and reference_dir.is_dir() - ), f"Reference directory does not exist: {reference_dir}." - path = re.sub("^//reference", str(reference_dir), path) - - return Path(path) - - @classmethod - def apply_dataset_mappers(cls, path: str) -> str: - """Applies dataset mapping regex rules as defined in the configuration. - If no rules are defined, the path is returned as-is. - """ - instance = cls.instance() - - for pattern, repl in instance._dataset_mappers: - path = pattern.sub(repl, path) - - return path diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/transformer.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/transformer.py deleted file mode 100644 index 9234b7928450974ae39843aa6c870400a5d31b1c..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/commons/transformer.py +++ /dev/null @@ -1,747 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import Parameter, Linear -from modules.commons.common_layers import LayerNorm, Embedding -from utils.tts_utils import get_incremental_state, set_incremental_state, softmax, make_positions -import torch.nn.functional as F - -DEFAULT_MAX_SOURCE_POSITIONS = 2000 -DEFAULT_MAX_TARGET_POSITIONS = 2000 - - -class SinusoidalPositionalEmbedding(nn.Module): - """This module produces sinusoidal positional embeddings of any length. - - Padding symbols are ignored. - """ - - def __init__(self, embedding_dim, padding_idx, init_size=1024): - super().__init__() - self.embedding_dim = embedding_dim - self.padding_idx = padding_idx - self.weights = SinusoidalPositionalEmbedding.get_embedding( - init_size, - embedding_dim, - padding_idx, - ) - self.register_buffer('_float_tensor', torch.FloatTensor(1)) - - @staticmethod - def get_embedding(num_embeddings, embedding_dim, padding_idx=None): - """Build sinusoidal embeddings. - - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". 
- """ - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float) * -emb) - emb = torch.arange(num_embeddings, dtype=torch.float).unsqueeze(1) * emb.unsqueeze(0) - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1).view(num_embeddings, -1) - if embedding_dim % 2 == 1: - # zero pad - emb = torch.cat([emb, torch.zeros(num_embeddings, 1)], dim=1) - if padding_idx is not None: - emb[padding_idx, :] = 0 - return emb - - def forward(self, input, incremental_state=None, timestep=None, positions=None, **kwargs): - """Input is expected to be of size [bsz x seqlen].""" - bsz, seq_len = input.shape[:2] - max_pos = self.padding_idx + 1 + seq_len - if self.weights is None or max_pos > self.weights.size(0): - # recompute/expand embeddings if needed - self.weights = SinusoidalPositionalEmbedding.get_embedding( - max_pos, - self.embedding_dim, - self.padding_idx, - ) - self.weights = self.weights.to(self._float_tensor) - - if incremental_state is not None: - # positions is the same for every token when decoding a single step - pos = timestep.view(-1)[0] + 1 if timestep is not None else seq_len - return self.weights[self.padding_idx + pos, :].expand(bsz, 1, -1) - - positions = make_positions(input, self.padding_idx) if positions is None else positions - return self.weights.index_select(0, positions.view(-1)).view(bsz, seq_len, -1).detach() - - def max_positions(self): - """Maximum number of supported positions.""" - return int(1e5) # an arbitrary large number - - -class TransformerFFNLayer(nn.Module): - def __init__(self, hidden_size, filter_size, padding="SAME", kernel_size=1, dropout=0., act='gelu'): - super().__init__() - self.kernel_size = kernel_size - self.dropout = dropout - self.act = act - if padding == 'SAME': - self.ffn_1 = nn.Conv1d(hidden_size, filter_size, kernel_size, padding=kernel_size // 2) - elif padding == 'LEFT': - self.ffn_1 = nn.Sequential( - nn.ConstantPad1d((kernel_size - 1, 0), 0.0), - nn.Conv1d(hidden_size, filter_size, kernel_size) - ) - self.ffn_2 = Linear(filter_size, hidden_size) - - def forward(self, x, incremental_state=None): - # x: T x B x C - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if 'prev_input' in saved_state: - prev_input = saved_state['prev_input'] - x = torch.cat((prev_input, x), dim=0) - x = x[-self.kernel_size:] - saved_state['prev_input'] = x - self._set_input_buffer(incremental_state, saved_state) - - x = self.ffn_1(x.permute(1, 2, 0)).permute(2, 0, 1) - x = x * self.kernel_size ** -0.5 - - if incremental_state is not None: - x = x[-1:] - if self.act == 'gelu': - x = F.gelu(x) - if self.act == 'relu': - x = F.relu(x) - x = F.dropout(x, self.dropout, training=self.training) - x = self.ffn_2(x) - return x - - def _get_input_buffer(self, incremental_state): - return get_incremental_state( - self, - incremental_state, - 'f', - ) or {} - - def _set_input_buffer(self, incremental_state, buffer): - set_incremental_state( - self, - incremental_state, - 'f', - buffer, - ) - - def clear_buffer(self, incremental_state): - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if 'prev_input' in saved_state: - del saved_state['prev_input'] - self._set_input_buffer(incremental_state, saved_state) - - -class MultiheadAttention(nn.Module): - def __init__(self, embed_dim, num_heads, kdim=None, vdim=None, dropout=0., bias=True, - add_bias_kv=False, add_zero_attn=False, self_attention=False, - 
encoder_decoder_attention=False): - super().__init__() - self.embed_dim = embed_dim - self.kdim = kdim if kdim is not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim - - self.num_heads = num_heads - self.dropout = dropout - self.head_dim = embed_dim // num_heads - assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads" - self.scaling = self.head_dim ** -0.5 - - self.self_attention = self_attention - self.encoder_decoder_attention = encoder_decoder_attention - - assert not self.self_attention or self.qkv_same_dim, 'Self-attention requires query, key and ' \ - 'value to be of the same size' - - if self.qkv_same_dim: - self.in_proj_weight = Parameter(torch.Tensor(3 * embed_dim, embed_dim)) - else: - self.k_proj_weight = Parameter(torch.Tensor(embed_dim, self.kdim)) - self.v_proj_weight = Parameter(torch.Tensor(embed_dim, self.vdim)) - self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim)) - - if bias: - self.in_proj_bias = Parameter(torch.Tensor(3 * embed_dim)) - else: - self.register_parameter('in_proj_bias', None) - - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - - if add_bias_kv: - self.bias_k = Parameter(torch.Tensor(1, 1, embed_dim)) - self.bias_v = Parameter(torch.Tensor(1, 1, embed_dim)) - else: - self.bias_k = self.bias_v = None - - self.add_zero_attn = add_zero_attn - - self.reset_parameters() - - self.enable_torch_version = False - if hasattr(F, "multi_head_attention_forward"): - self.enable_torch_version = True - else: - self.enable_torch_version = False - self.last_attn_probs = None - - def reset_parameters(self): - if self.qkv_same_dim: - nn.init.xavier_uniform_(self.in_proj_weight) - else: - nn.init.xavier_uniform_(self.k_proj_weight) - nn.init.xavier_uniform_(self.v_proj_weight) - nn.init.xavier_uniform_(self.q_proj_weight) - - nn.init.xavier_uniform_(self.out_proj.weight) - if self.in_proj_bias is not None: - nn.init.constant_(self.in_proj_bias, 0.) - nn.init.constant_(self.out_proj.bias, 0.) - if self.bias_k is not None: - nn.init.xavier_normal_(self.bias_k) - if self.bias_v is not None: - nn.init.xavier_normal_(self.bias_v) - - def forward( - self, - query, key, value, - key_padding_mask=None, - incremental_state=None, - need_weights=True, - static_kv=False, - attn_mask=None, - before_softmax=False, - need_head_weights=False, - enc_dec_attn_constraint_mask=None, - reset_attn_weight=None - ): - """Input shape: Time x Batch x Channel - - Args: - key_padding_mask (ByteTensor, optional): mask to exclude - keys that are pads, of shape `(batch, src_len)`, where - padding elements are indicated by 1s. - need_weights (bool, optional): return the attention weights, - averaged over heads (default: False). - attn_mask (ByteTensor, optional): typically used to - implement causal attention, where the mask prevents the - attention from looking forward in time (default: None). - before_softmax (bool, optional): return the raw attention - weights and values before the attention softmax. - need_head_weights (bool, optional): return the attention - weights for each head. Implies *need_weights*. Default: - return the average attention weights over all heads. 
- """ - if need_head_weights: - need_weights = True - - tgt_len, bsz, embed_dim = query.size() - assert embed_dim == self.embed_dim - assert list(query.size()) == [tgt_len, bsz, embed_dim] - if self.enable_torch_version and incremental_state is None and not static_kv and reset_attn_weight is None: - if self.qkv_same_dim: - return F.multi_head_attention_forward(query, key, value, - self.embed_dim, self.num_heads, - self.in_proj_weight, - self.in_proj_bias, self.bias_k, self.bias_v, - self.add_zero_attn, self.dropout, - self.out_proj.weight, self.out_proj.bias, - self.training, key_padding_mask, need_weights, - attn_mask) - else: - return F.multi_head_attention_forward(query, key, value, - self.embed_dim, self.num_heads, - torch.empty([0]), - self.in_proj_bias, self.bias_k, self.bias_v, - self.add_zero_attn, self.dropout, - self.out_proj.weight, self.out_proj.bias, - self.training, key_padding_mask, need_weights, - attn_mask, use_separate_proj_weight=True, - q_proj_weight=self.q_proj_weight, - k_proj_weight=self.k_proj_weight, - v_proj_weight=self.v_proj_weight) - - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if 'prev_key' in saved_state: - # previous time steps are cached - no need to recompute - # key and value if they are static - if static_kv: - assert self.encoder_decoder_attention and not self.self_attention - key = value = None - else: - saved_state = None - - if self.self_attention: - # self-attention - q, k, v = self.in_proj_qkv(query) - elif self.encoder_decoder_attention: - # encoder-decoder attention - q = self.in_proj_q(query) - if key is None: - assert value is None - k = v = None - else: - k = self.in_proj_k(key) - v = self.in_proj_v(key) - - else: - q = self.in_proj_q(query) - k = self.in_proj_k(key) - v = self.in_proj_v(value) - q *= self.scaling - - if self.bias_k is not None: - assert self.bias_v is not None - k = torch.cat([k, self.bias_k.repeat(1, bsz, 1)]) - v = torch.cat([v, self.bias_v.repeat(1, bsz, 1)]) - if attn_mask is not None: - attn_mask = torch.cat([attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [key_padding_mask, key_padding_mask.new_zeros(key_padding_mask.size(0), 1)], dim=1) - - q = q.contiguous().view(tgt_len, bsz * self.num_heads, self.head_dim).transpose(0, 1) - if k is not None: - k = k.contiguous().view(-1, bsz * self.num_heads, self.head_dim).transpose(0, 1) - if v is not None: - v = v.contiguous().view(-1, bsz * self.num_heads, self.head_dim).transpose(0, 1) - - if saved_state is not None: - # saved states are stored with shape (bsz, num_heads, seq_len, head_dim) - if 'prev_key' in saved_state: - prev_key = saved_state['prev_key'].view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - k = prev_key - else: - k = torch.cat((prev_key, k), dim=1) - if 'prev_value' in saved_state: - prev_value = saved_state['prev_value'].view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - v = prev_value - else: - v = torch.cat((prev_value, v), dim=1) - if 'prev_key_padding_mask' in saved_state and saved_state['prev_key_padding_mask'] is not None: - prev_key_padding_mask = saved_state['prev_key_padding_mask'] - if static_kv: - key_padding_mask = prev_key_padding_mask - else: - key_padding_mask = torch.cat((prev_key_padding_mask, key_padding_mask), dim=1) - - saved_state['prev_key'] = k.view(bsz, self.num_heads, -1, self.head_dim) - saved_state['prev_value'] = v.view(bsz, self.num_heads, -1, self.head_dim) - 
saved_state['prev_key_padding_mask'] = key_padding_mask - - self._set_input_buffer(incremental_state, saved_state) - - src_len = k.size(1) - - # This is part of a workaround to get around fork/join parallelism - # not supporting Optional types. - if key_padding_mask is not None and key_padding_mask.shape == torch.Size([]): - key_padding_mask = None - - if key_padding_mask is not None: - assert key_padding_mask.size(0) == bsz - assert key_padding_mask.size(1) == src_len - - if self.add_zero_attn: - src_len += 1 - k = torch.cat([k, k.new_zeros((k.size(0), 1) + k.size()[2:])], dim=1) - v = torch.cat([v, v.new_zeros((v.size(0), 1) + v.size()[2:])], dim=1) - if attn_mask is not None: - attn_mask = torch.cat([attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [key_padding_mask, torch.zeros(key_padding_mask.size(0), 1).type_as(key_padding_mask)], dim=1) - - attn_weights = torch.bmm(q, k.transpose(1, 2)) - attn_weights = self.apply_sparse_mask(attn_weights, tgt_len, src_len, bsz) - - assert list(attn_weights.size()) == [bsz * self.num_heads, tgt_len, src_len] - - if attn_mask is not None: - if len(attn_mask.shape) == 2: - attn_mask = attn_mask.unsqueeze(0) - elif len(attn_mask.shape) == 3: - attn_mask = attn_mask[:, None].repeat([1, self.num_heads, 1, 1]).reshape( - bsz * self.num_heads, tgt_len, src_len) - attn_weights = attn_weights + attn_mask - - if enc_dec_attn_constraint_mask is not None: # bs x head x L_kv - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.masked_fill( - enc_dec_attn_constraint_mask.unsqueeze(2).bool(), - -1e8, - ) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - if key_padding_mask is not None: - # don't attend to padding symbols - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2), - -1e8, - ) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - attn_logits = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - - if before_softmax: - return attn_weights, v - - attn_weights_float = softmax(attn_weights, dim=-1) - attn_weights = attn_weights_float.type_as(attn_weights) - attn_probs = F.dropout(attn_weights_float.type_as(attn_weights), p=self.dropout, training=self.training) - - if reset_attn_weight is not None: - if reset_attn_weight: - self.last_attn_probs = attn_probs.detach() - else: - assert self.last_attn_probs is not None - attn_probs = self.last_attn_probs - attn = torch.bmm(attn_probs, v) - assert list(attn.size()) == [bsz * self.num_heads, tgt_len, self.head_dim] - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - attn = self.out_proj(attn) - - if need_weights: - attn_weights = attn_weights_float.view(bsz, self.num_heads, tgt_len, src_len).transpose(1, 0) - if not need_head_weights: - # average attention weights over heads - attn_weights = attn_weights.mean(dim=0) - else: - attn_weights = None - - return attn, (attn_weights, attn_logits) - - def in_proj_qkv(self, query): - return self._in_proj(query).chunk(3, dim=-1) - - def in_proj_q(self, query): - if self.qkv_same_dim: - return self._in_proj(query, end=self.embed_dim) - else: - bias = self.in_proj_bias - if bias is not None: - bias = bias[:self.embed_dim] - return F.linear(query, self.q_proj_weight, bias) - - def in_proj_k(self, key): - if self.qkv_same_dim: - return self._in_proj(key, 
start=self.embed_dim, end=2 * self.embed_dim) - else: - weight = self.k_proj_weight - bias = self.in_proj_bias - if bias is not None: - bias = bias[self.embed_dim:2 * self.embed_dim] - return F.linear(key, weight, bias) - - def in_proj_v(self, value): - if self.qkv_same_dim: - return self._in_proj(value, start=2 * self.embed_dim) - else: - weight = self.v_proj_weight - bias = self.in_proj_bias - if bias is not None: - bias = bias[2 * self.embed_dim:] - return F.linear(value, weight, bias) - - def _in_proj(self, input, start=0, end=None): - weight = self.in_proj_weight - bias = self.in_proj_bias - weight = weight[start:end, :] - if bias is not None: - bias = bias[start:end] - return F.linear(input, weight, bias) - - def _get_input_buffer(self, incremental_state): - return get_incremental_state( - self, - incremental_state, - 'attn_state', - ) or {} - - def _set_input_buffer(self, incremental_state, buffer): - set_incremental_state( - self, - incremental_state, - 'attn_state', - buffer, - ) - - def apply_sparse_mask(self, attn_weights, tgt_len, src_len, bsz): - return attn_weights - - def clear_buffer(self, incremental_state=None): - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if 'prev_key' in saved_state: - del saved_state['prev_key'] - if 'prev_value' in saved_state: - del saved_state['prev_value'] - self._set_input_buffer(incremental_state, saved_state) - - -class EncSALayer(nn.Module): - def __init__(self, c, num_heads, dropout, attention_dropout=0.1, - relu_dropout=0.1, kernel_size=9, padding='SAME', act='gelu'): - super().__init__() - self.c = c - self.dropout = dropout - self.num_heads = num_heads - if num_heads > 0: - self.layer_norm1 = LayerNorm(c) - self.self_attn = MultiheadAttention( - self.c, num_heads, self_attention=True, dropout=attention_dropout, bias=False) - self.layer_norm2 = LayerNorm(c) - self.ffn = TransformerFFNLayer( - c, 4 * c, kernel_size=kernel_size, dropout=relu_dropout, padding=padding, act=act) - - def forward(self, x, encoder_padding_mask=None, **kwargs): - layer_norm_training = kwargs.get('layer_norm_training', None) - if layer_norm_training is not None: - self.layer_norm1.training = layer_norm_training - self.layer_norm2.training = layer_norm_training - if self.num_heads > 0: - residual = x - x = self.layer_norm1(x) - x, _, = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=encoder_padding_mask - ) - x = F.dropout(x, self.dropout, training=self.training) - x = residual + x - x = x * (1 - encoder_padding_mask.float()).transpose(0, 1)[..., None] - - residual = x - x = self.layer_norm2(x) - x = self.ffn(x) - x = F.dropout(x, self.dropout, training=self.training) - x = residual + x - x = x * (1 - encoder_padding_mask.float()).transpose(0, 1)[..., None] - return x - - -class DecSALayer(nn.Module): - def __init__(self, c, num_heads, dropout, attention_dropout=0.1, relu_dropout=0.1, - kernel_size=9, act='gelu'): - super().__init__() - self.c = c - self.dropout = dropout - self.layer_norm1 = LayerNorm(c) - self.self_attn = MultiheadAttention( - c, num_heads, self_attention=True, dropout=attention_dropout, bias=False - ) - self.layer_norm2 = LayerNorm(c) - self.encoder_attn = MultiheadAttention( - c, num_heads, encoder_decoder_attention=True, dropout=attention_dropout, bias=False, - ) - self.layer_norm3 = LayerNorm(c) - self.ffn = TransformerFFNLayer( - c, 4 * c, padding='LEFT', kernel_size=kernel_size, dropout=relu_dropout, act=act) - - def forward( - self, - x, - encoder_out=None, - 
encoder_padding_mask=None, - incremental_state=None, - self_attn_mask=None, - self_attn_padding_mask=None, - attn_out=None, - reset_attn_weight=None, - **kwargs, - ): - layer_norm_training = kwargs.get('layer_norm_training', None) - if layer_norm_training is not None: - self.layer_norm1.training = layer_norm_training - self.layer_norm2.training = layer_norm_training - self.layer_norm3.training = layer_norm_training - residual = x - x = self.layer_norm1(x) - x, _ = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - attn_mask=self_attn_mask - ) - x = F.dropout(x, self.dropout, training=self.training) - x = residual + x - - attn_logits = None - if encoder_out is not None or attn_out is not None: - residual = x - x = self.layer_norm2(x) - if encoder_out is not None: - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - enc_dec_attn_constraint_mask=get_incremental_state(self, incremental_state, - 'enc_dec_attn_constraint_mask'), - reset_attn_weight=reset_attn_weight - ) - attn_logits = attn[1] - elif attn_out is not None: - x = self.encoder_attn.in_proj_v(attn_out) - if encoder_out is not None or attn_out is not None: - x = F.dropout(x, self.dropout, training=self.training) - x = residual + x - - residual = x - x = self.layer_norm3(x) - x = self.ffn(x, incremental_state=incremental_state) - x = F.dropout(x, self.dropout, training=self.training) - x = residual + x - return x, attn_logits - - def clear_buffer(self, input, encoder_out=None, encoder_padding_mask=None, incremental_state=None): - self.encoder_attn.clear_buffer(incremental_state) - self.ffn.clear_buffer(incremental_state) - - def set_buffer(self, name, tensor, incremental_state): - return set_incremental_state(self, incremental_state, name, tensor) - - -class TransformerEncoderLayer(nn.Module): - def __init__(self, hidden_size, dropout, kernel_size=9, num_heads=2): - super().__init__() - self.hidden_size = hidden_size - self.dropout = dropout - self.num_heads = num_heads - self.op = EncSALayer( - hidden_size, num_heads, dropout=dropout, - attention_dropout=0.0, relu_dropout=dropout, - kernel_size=kernel_size) - - def forward(self, x, **kwargs): - return self.op(x, **kwargs) - - -class TransformerDecoderLayer(nn.Module): - def __init__(self, hidden_size, dropout, kernel_size=9, num_heads=2): - super().__init__() - self.hidden_size = hidden_size - self.dropout = dropout - self.num_heads = num_heads - self.op = DecSALayer( - hidden_size, num_heads, dropout=dropout, - attention_dropout=0.0, relu_dropout=dropout, - kernel_size=kernel_size) - - def forward(self, x, **kwargs): - return self.op(x, **kwargs) - - def clear_buffer(self, *args): - return self.op.clear_buffer(*args) - - def set_buffer(self, *args): - return self.op.set_buffer(*args) - - -class FFTBlocks(nn.Module): - def __init__(self, hidden_size, num_layers, ffn_kernel_size=9, dropout=0.0, - num_heads=2, use_pos_embed=True, use_last_norm=True, - use_pos_embed_alpha=True): - super().__init__() - self.num_layers = num_layers - embed_dim = self.hidden_size = hidden_size - self.dropout = dropout - self.use_pos_embed = use_pos_embed - self.use_last_norm = use_last_norm - if use_pos_embed: - self.max_source_positions = DEFAULT_MAX_TARGET_POSITIONS - self.padding_idx = 0 - self.pos_embed_alpha = nn.Parameter(torch.Tensor([1])) if use_pos_embed_alpha else 1 - self.embed_positions = 
SinusoidalPositionalEmbedding( - embed_dim, self.padding_idx, init_size=DEFAULT_MAX_TARGET_POSITIONS, - ) - - self.layers = nn.ModuleList([]) - self.layers.extend([ - TransformerEncoderLayer(self.hidden_size, self.dropout, - kernel_size=ffn_kernel_size, num_heads=num_heads) - for _ in range(self.num_layers) - ]) - if self.use_last_norm: - self.layer_norm = nn.LayerNorm(embed_dim) - else: - self.layer_norm = None - - def forward(self, x, padding_mask=None, attn_mask=None, return_hiddens=False): - """ - :param x: [B, T, C] - :param padding_mask: [B, T] - :return: [B, T, C] or [L, B, T, C] - """ - padding_mask = x.abs().sum(-1).eq(0).data if padding_mask is None else padding_mask - nonpadding_mask_TB = 1 - padding_mask.transpose(0, 1).float()[:, :, None] # [T, B, 1] - if self.use_pos_embed: - positions = self.pos_embed_alpha * self.embed_positions(x[..., 0]) - x = x + positions - x = F.dropout(x, p=self.dropout, training=self.training) - # B x T x C -> T x B x C - x = x.transpose(0, 1) * nonpadding_mask_TB - hiddens = [] - for layer in self.layers: - x = layer(x, encoder_padding_mask=padding_mask, attn_mask=attn_mask) * nonpadding_mask_TB - hiddens.append(x) - if self.use_last_norm: - x = self.layer_norm(x) * nonpadding_mask_TB - if return_hiddens: - x = torch.stack(hiddens, 0) # [L, T, B, C] - x = x.transpose(1, 2) # [L, B, T, C] - else: - x = x.transpose(0, 1) # [B, T, C] - return x - - -class FastSpeechEncoder(FFTBlocks): - def __init__(self, dict_size, hidden_size=256, num_layers=4, kernel_size=9, num_heads=2, - dropout=0.0): - super().__init__(hidden_size, num_layers, kernel_size, num_heads=num_heads, - use_pos_embed=False, dropout=dropout) # use_pos_embed_alpha for compatibility - self.embed_tokens = Embedding(dict_size, hidden_size, 0) - self.embed_scale = math.sqrt(hidden_size) - self.padding_idx = 0 - self.embed_positions = SinusoidalPositionalEmbedding( - hidden_size, self.padding_idx, init_size=DEFAULT_MAX_TARGET_POSITIONS, - ) - - def forward(self, txt_tokens, attn_mask=None): - """ - - :param txt_tokens: [B, T] - :return: { - 'encoder_out': [B x T x C] - } - """ - encoder_padding_mask = txt_tokens.eq(self.padding_idx).data - x = self.forward_embedding(txt_tokens) # [B, T, H] - if self.num_layers > 0: - x = super(FastSpeechEncoder, self).forward(x, encoder_padding_mask, attn_mask=attn_mask) - return x - - def forward_embedding(self, txt_tokens): - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(txt_tokens) - if self.use_pos_embed: - positions = self.embed_positions(txt_tokens) - x = x + positions - x = F.dropout(x, p=self.dropout, training=self.training) - return x - - -class FastSpeechDecoder(FFTBlocks): - def __init__(self, hidden_size=256, num_layers=4, kernel_size=9, num_heads=2): - super().__init__(hidden_size, num_layers, kernel_size, num_heads=num_heads) diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/multiprocess_utils.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/multiprocess_utils.py deleted file mode 100644 index e7ed3c91aeee1103213fb573806423a9e1aef097..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/multiprocess_utils.py +++ /dev/null @@ -1,54 +0,0 @@ -import os -import traceback -from multiprocessing import Queue, Process - - -def chunked_worker(worker_id, map_func, args, results_queue=None, init_ctx_func=None): - ctx = init_ctx_func(worker_id) if init_ctx_func is not None else None - for job_idx, arg in args: - try: - if ctx is not None: - res = map_func(*arg, ctx=ctx) - else: - res = 
map_func(*arg) - results_queue.put((job_idx, res)) - except: - traceback.print_exc() - results_queue.put((job_idx, None)) - -def chunked_multiprocess_run(map_func, args, num_workers=None, ordered=True, init_ctx_func=None, q_max_size=1000): - args = zip(range(len(args)), args) - args = list(args) - n_jobs = len(args) - if num_workers is None: - num_workers = int(os.getenv('N_PROC', os.cpu_count())) - results_queues = [] - if ordered: - for i in range(num_workers): - results_queues.append(Queue(maxsize=q_max_size // num_workers)) - else: - results_queue = Queue(maxsize=q_max_size) - for i in range(num_workers): - results_queues.append(results_queue) - workers = [] - for i in range(num_workers): - args_worker = args[i::num_workers] - p = Process(target=chunked_worker, args=( - i, map_func, args_worker, results_queues[i], init_ctx_func), daemon=True) - workers.append(p) - p.start() - for n_finished in range(n_jobs): - results_queue = results_queues[n_finished % num_workers] - job_idx, res = results_queue.get() - assert job_idx == n_finished or not ordered, (job_idx, n_finished) - yield res - for w in workers: - w.join() - w.close() - -def multiprocess_run_tqdm(map_func, args, num_workers=None, ordered=True, init_ctx_func=None, - multithread=False, desc=None): - for i, res in tqdm(enumerate( - multiprocess_run(map_func, args, num_workers, ordered, init_ctx_func, multithread)), - total=len(args), desc=desc): - yield i, res \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/tts/diffspeech/net.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/tts/diffspeech/net.py deleted file mode 100644 index 764020f28add5e4ee387a9d081ab6d548fc0f201..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/tts/diffspeech/net.py +++ /dev/null @@ -1,110 +0,0 @@ -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from math import sqrt - -Linear = nn.Linear -ConvTranspose2d = nn.ConvTranspose2d - - -class Mish(nn.Module): - def forward(self, x): - return x * torch.tanh(F.softplus(x)) - - -class SinusoidalPosEmb(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - - def forward(self, x): - device = x.device - half_dim = self.dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, device=device) * -emb) - emb = x[:, None] * emb[None, :] - emb = torch.cat((emb.sin(), emb.cos()), dim=-1) - return emb - - -def Conv1d(*args, **kwargs): - layer = nn.Conv1d(*args, **kwargs) - nn.init.kaiming_normal_(layer.weight) - return layer - - -class ResidualBlock(nn.Module): - def __init__(self, encoder_hidden, residual_channels, dilation): - super().__init__() - self.dilated_conv = Conv1d(residual_channels, 2 * residual_channels, 3, padding=dilation, dilation=dilation) - self.diffusion_projection = Linear(residual_channels, residual_channels) - self.conditioner_projection = Conv1d(encoder_hidden, 2 * residual_channels, 1) - self.output_projection = Conv1d(residual_channels, 2 * residual_channels, 1) - - def forward(self, x, conditioner, diffusion_step): - diffusion_step = self.diffusion_projection(diffusion_step).unsqueeze(-1) - conditioner = self.conditioner_projection(conditioner) - y = x + diffusion_step - - y = self.dilated_conv(y) + conditioner - - gate, filter = torch.chunk(y, 2, dim=1) - y = torch.sigmoid(gate) * torch.tanh(filter) - - y = self.output_projection(y) - residual, skip = torch.chunk(y, 2, dim=1) - return (x + residual) / sqrt(2.0), 
skip - - -class DiffNet(nn.Module): - def __init__(self, hparams): - super().__init__() - in_dims = hparams['audio_num_mel_bins'] - self.encoder_hidden = hparams['hidden_size'] - self.residual_layers = hparams['residual_layers'] - self.residual_channels = hparams['residual_channels'] - self.dilation_cycle_length = hparams['dilation_cycle_length'] - - self.input_projection = Conv1d(in_dims, self.residual_channels, 1) - self.diffusion_embedding = SinusoidalPosEmb(self.residual_channels) - dim = self.residual_channels - self.mlp = nn.Sequential( - nn.Linear(dim, dim * 4), - Mish(), - nn.Linear(dim * 4, dim) - ) - self.residual_layers = nn.ModuleList([ - ResidualBlock(self.encoder_hidden, self.residual_channels, 2 ** (i % self.dilation_cycle_length)) - for i in range(self.residual_layers) - ]) - self.skip_projection = Conv1d(self.residual_channels, self.residual_channels, 1) - self.output_projection = Conv1d(self.residual_channels, in_dims, 1) - nn.init.zeros_(self.output_projection.weight) - - def forward(self, spec, diffusion_step, cond): - """ - - :param spec: [B, 1, M, T] - :param diffusion_step: [B, 1] - :param cond: [B, M, T] - :return: - """ - x = spec[:, 0] - x = self.input_projection(x) # x [B, residual_channel, T] - - x = F.relu(x) - diffusion_step = self.diffusion_embedding(diffusion_step) - diffusion_step = self.mlp(diffusion_step) - skip = [] - for layer_id, layer in enumerate(self.residual_layers): - x, skip_connection = layer(x, cond, diffusion_step) - skip.append(skip_connection) - - x = torch.sum(torch.stack(skip), dim=0) / sqrt(len(self.residual_layers)) - x = self.skip_projection(x) - x = F.relu(x) - x = self.output_projection(x) # [B, 80, T] - return x[:, None, :, :] diff --git a/spaces/AIGText/GlyphControl/ldm/models/diffusion/dpm_solver/sampler.py b/spaces/AIGText/GlyphControl/ldm/models/diffusion/dpm_solver/sampler.py deleted file mode 100644 index 7d137b8cf36718c1c58faa09f9dd919e5fb2977b..0000000000000000000000000000000000000000 --- a/spaces/AIGText/GlyphControl/ldm/models/diffusion/dpm_solver/sampler.py +++ /dev/null @@ -1,87 +0,0 @@ -"""SAMPLING ONLY.""" -import torch - -from .dpm_solver import NoiseScheduleVP, model_wrapper, DPM_Solver - - -MODEL_TYPES = { - "eps": "noise", - "v": "v" -} - - -class DPMSolverSampler(object): - def __init__(self, model, **kwargs): - super().__init__() - self.model = model - to_torch = lambda x: x.clone().detach().to(torch.float32).to(model.device) - self.register_buffer('alphas_cumprod', to_torch(model.alphas_cumprod)) - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
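- # NOTE: several of the DDIM-style arguments above (eta, mask, x0, temperature,
- # noise_dropout, score_corrector, ...) are accepted for interface compatibility
- # but are not used by the DPM-Solver sampling path below.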
- **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - - print(f'Data shape for DPM-Solver sampling is {size}, sampling steps {S}') - - device = self.model.betas.device - if x_T is None: - img = torch.randn(size, device=device) - else: - img = x_T - - ns = NoiseScheduleVP('discrete', alphas_cumprod=self.alphas_cumprod) - - model_fn = model_wrapper( - lambda x, t, c: self.model.apply_model(x, t, c), - ns, - model_type=MODEL_TYPES[self.model.parameterization], - guidance_type="classifier-free", - condition=conditioning, - unconditional_condition=unconditional_conditioning, - guidance_scale=unconditional_guidance_scale, - ) - - dpm_solver = DPM_Solver(model_fn, ns, predict_x0=True, thresholding=False) - x = dpm_solver.sample(img, steps=S, skip_type="time_uniform", method="multistep", order=2, lower_order_final=True) - - return x.to(device), None \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/base.py b/spaces/AgentVerse/agentVerse/agentverse/environments/base.py deleted file mode 100644 index 7767dc248d825aff6a2c7d76532136dcbc23dfeb..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/environments/base.py +++ /dev/null @@ -1,58 +0,0 @@ -from __future__ import annotations -from agentverse.logging import logger - -from abc import abstractmethod -from typing import TYPE_CHECKING, Any, Dict, List - -from pydantic import BaseModel - -# from agentverse.agents.agent import Agent - -if TYPE_CHECKING: - from agentverse.agents.base import BaseAgent - from agentverse.message import Message - - -class BaseRule(BaseModel): - pass - - -class BaseEnvironment(BaseModel): - """ - Base class for environment. 
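- Concrete subclasses implement the abstract `step` and `reset` methods;
- `report_metrics` logs the total spend accumulated by all agents.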
- - Args: - agents: List of agents - rule: Rule for the environment - max_turns: Maximum number of turns - cnt_turn: Current turn number - last_messages: Messages from last turn - rule_params: Variables set by the rule - """ - - agents: List[BaseAgent] - rule: BaseRule - max_turns: int = 10 - cnt_turn: int = 0 - last_messages: List[Message] = [] - rule_params: Dict = {} - - @abstractmethod - async def step(self) -> List[Message]: - """Run one step of the environment""" - pass - - @abstractmethod - def reset(self) -> None: - """Reset the environment""" - pass - - def report_metrics(self) -> None: - """Report useful metrics""" - total_spent = sum([agent.get_spend() for agent in self.agents]) - logger.info(f"Total spent: ${total_spent}") - pass - - def is_done(self) -> bool: - """Check if the environment is done""" - return self.cnt_turn >= self.max_turns diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/dots/Dots.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/dots/Dots.js deleted file mode 100644 index ee48586a6fcf6087eeb64dca8ef6f98cbda08c44..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/dots/Dots.js +++ /dev/null @@ -1,54 +0,0 @@ -import Base from '../base/Base.js'; -import { Circle } from '../utils/Geoms.js'; -import Yoyo from '../utils/Yoyo.js'; - - -const Linear = Phaser.Math.Linear; - -class Dots extends Base { - constructor(scene, config) { - super(scene, config); - this.type = 'rexSpinnerDots'; - } - - buildShapes() { - var cnt = 3; - for (var i = 0; i < cnt; i++) { - var dot = new Circle(); - this.addShape(dot); - - var offset = Yoyo(i / (cnt - 1)) / 2; - dot.setData('offset', offset); - } - } - - updateShapes() { - var centerX = this.centerX; - var centerY = this.centerY; - var radius = this.radius; - var leftBound = centerX - radius; - - var shapes = this.getShapes(), - cnt = shapes.length; - var cellWidth = (radius * 2) / cnt; - var maxDotRadius = cellWidth / 2; - - for (var i = 0; i < cnt; i++) { - var dot = shapes[i]; - var t = (this.value + dot.getData('offset')) % 1; - t = Yoyo(t); - - var dotAlpha = Linear(0.25, 1, t); - var dotRadius = Linear(0.5, 1, t) * maxDotRadius; - dot - .fillStyle(this.color, dotAlpha) - .setRadius(dotRadius) - .setCenterPosition( - leftBound + (cellWidth * (i + 0.5)), - centerY - ) - } - } -} - -export default Dots; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/GetNearestChildIndex.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/GetNearestChildIndex.js deleted file mode 100644 index e76113c0c4622afdbc1e4cedc4f06187c2eaccca..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/GetNearestChildIndex.js +++ /dev/null @@ -1,41 +0,0 @@ -const DistanceBetween = Phaser.Math.Distance.Between; - -var GetNearestChildIndex = function (x, y) { - var children = this.sizerChildren; - if (children.length === 0) { - return -1; - } - - var nearestIndex = -1, - minDistance = Infinity; - for (var i = 0, cnt = children.length; i < cnt; i++) { - var child = children[i]; - // position is not at this line - if (Math.abs(child.centerY - y) > (child.height / 2)) { - continue; - } - - // Check left bound - var distance = DistanceBetween(child.left, child.centerY, x, y); - if (minDistance > distance) { - minDistance = distance; - nearestIndex = 
i; - } - - // Is last child of this line - var nextChild = children[i + 1]; - if (nextChild && (nextChild.y === child.y)) { - continue; - } - - var distance = DistanceBetween(child.right, child.centerY, x, y); - if (minDistance > distance) { - minDistance = distance; - nearestIndex = i + 1; - } - } - - return nearestIndex; -} - -export default GetNearestChildIndex; \ No newline at end of file diff --git a/spaces/Alven/background-remover/app.py b/spaces/Alven/background-remover/app.py deleted file mode 100644 index c53c42ae3e0e6ec108301bc6f7dbce2c36684e95..0000000000000000000000000000000000000000 --- a/spaces/Alven/background-remover/app.py +++ /dev/null @@ -1,127 +0,0 @@ -import cv2 -import gradio as gr -import numpy as np -import onnxruntime -import requests -from huggingface_hub import hf_hub_download -from PIL import Image - - -# Get x_scale_factor & y_scale_factor to resize image -def get_scale_factor(im_h, im_w, ref_size=512): - - if max(im_h, im_w) < ref_size or min(im_h, im_w) > ref_size: - if im_w >= im_h: - im_rh = ref_size - im_rw = int(im_w / im_h * ref_size) - elif im_w < im_h: - im_rw = ref_size - im_rh = int(im_h / im_w * ref_size) - else: - im_rh = im_h - im_rw = im_w - - im_rw = im_rw - im_rw % 32 - im_rh = im_rh - im_rh % 32 - - x_scale_factor = im_rw / im_w - y_scale_factor = im_rh / im_h - - return x_scale_factor, y_scale_factor - - -MODEL_PATH = hf_hub_download('nateraw/background-remover-files', 'modnet.onnx', repo_type='dataset') - - -def main(image_path, threshold): - - # read image - im = cv2.imread(image_path) - im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB) - - # unify image channels to 3 - if len(im.shape) == 2: - im = im[:, :, None] - if im.shape[2] == 1: - im = np.repeat(im, 3, axis=2) - elif im.shape[2] == 4: - im = im[:, :, 0:3] - - # normalize values to scale it between -1 to 1 - im = (im - 127.5) / 127.5 - - im_h, im_w, im_c = im.shape - x, y = get_scale_factor(im_h, im_w) - - # resize image - im = cv2.resize(im, None, fx=x, fy=y, interpolation=cv2.INTER_AREA) - - # prepare input shape - im = np.transpose(im) - im = np.swapaxes(im, 1, 2) - im = np.expand_dims(im, axis=0).astype('float32') - - # Initialize session and get prediction - session = onnxruntime.InferenceSession(MODEL_PATH, None) - input_name = session.get_inputs()[0].name - output_name = session.get_outputs()[0].name - result = session.run([output_name], {input_name: im}) - - # refine matte - matte = (np.squeeze(result[0]) * 255).astype('uint8') - matte = cv2.resize(matte, dsize=(im_w, im_h), interpolation=cv2.INTER_AREA) - - # HACK - Could probably just convert this to PIL instead of writing - cv2.imwrite('out.png', matte) - - image = Image.open(image_path) - matte = Image.open('out.png') - - # obtain predicted foreground - image = np.asarray(image) - if len(image.shape) == 2: - image = image[:, :, None] - if image.shape[2] == 1: - image = np.repeat(image, 3, axis=2) - elif image.shape[2] == 4: - image = image[:, :, 0:3] - - b, g, r = cv2.split(image) - - mask = np.asarray(matte) - a = np.ones(mask.shape, dtype='uint8') * 255 - alpha_im = cv2.merge([b, g, r, a], 4) - bg = np.zeros(alpha_im.shape) - new_mask = np.stack([mask, mask, mask, mask], axis=2) - foreground = np.where(new_mask > threshold, alpha_im, bg).astype(np.uint8) - - return Image.fromarray(foreground) - - -title = "MODNet Background Remover" -description = "Gradio demo for MODNet, a model that can remove the background from a given image. To use it, simply upload your image, or click one of the examples to load them. 
Read more at the links below." -article = "" - -url = "https://huggingface.co/datasets/nateraw/background-remover-files/resolve/main/twitter_profile_pic.jpeg" -image = Image.open(requests.get(url, stream=True).raw) -image.save('twitter_profile_pic.jpg') - -url = "https://upload.wikimedia.org/wikipedia/commons/8/8d/President_Barack_Obama.jpg" -image = Image.open(requests.get(url, stream=True).raw) -image.save('obama.jpg') - -interface = gr.Interface( - fn=main, - inputs=[ - gr.inputs.Image(type='filepath'), - gr.inputs.Slider(minimum=0, maximum=250, default=100, step=5, label='Mask Cutoff Threshold'), - ], - outputs='image', - examples=[['twitter_profile_pic.jpg', 120], ['obama.jpg', 155]], - title=title, - description=description, - article=article, -) - -if __name__ == '__main__': - interface.launch(debug=True) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/attnprocessor.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/attnprocessor.md deleted file mode 100644 index 7a4812e0961e28887875e4ea715545da007ae421..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/attnprocessor.md +++ /dev/null @@ -1,42 +0,0 @@ -# Attention Processor - -An attention processor is a class for applying different types of attention mechanisms. - -## AttnProcessor -[[autodoc]] models.attention_processor.AttnProcessor - -## AttnProcessor2_0 -[[autodoc]] models.attention_processor.AttnProcessor2_0 - -## LoRAAttnProcessor -[[autodoc]] models.attention_processor.LoRAAttnProcessor - -## LoRAAttnProcessor2_0 -[[autodoc]] models.attention_processor.LoRAAttnProcessor2_0 - -## CustomDiffusionAttnProcessor -[[autodoc]] models.attention_processor.CustomDiffusionAttnProcessor - -## AttnAddedKVProcessor -[[autodoc]] models.attention_processor.AttnAddedKVProcessor - -## AttnAddedKVProcessor2_0 -[[autodoc]] models.attention_processor.AttnAddedKVProcessor2_0 - -## LoRAAttnAddedKVProcessor -[[autodoc]] models.attention_processor.LoRAAttnAddedKVProcessor - -## XFormersAttnProcessor -[[autodoc]] models.attention_processor.XFormersAttnProcessor - -## LoRAXFormersAttnProcessor -[[autodoc]] models.attention_processor.LoRAXFormersAttnProcessor - -## CustomDiffusionXFormersAttnProcessor -[[autodoc]] models.attention_processor.CustomDiffusionXFormersAttnProcessor - -## SlicedAttnProcessor -[[autodoc]] models.attention_processor.SlicedAttnProcessor - -## SlicedAttnAddedKVProcessor -[[autodoc]] models.attention_processor.SlicedAttnAddedKVProcessor \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/pipeline_overview.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/pipeline_overview.md deleted file mode 100644 index ca98fc3f4b63fa344f232690a0536028d668c875..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/pipeline_overview.md +++ /dev/null @@ -1,17 +0,0 @@ - - -# Overview - -A pipeline is an end-to-end class that provides a quick and easy way to use a diffusion system for inference by bundling independently trained models and schedulers together. Certain combinations of models and schedulers define specific pipeline types, like [`StableDiffusionPipeline`] or [`StableDiffusionControlNetPipeline`], with specific capabilities. 
All pipeline types inherit from the base [`DiffusionPipeline`] class; pass it any checkpoint, and it'll automatically detect the pipeline type and load the necessary components. - -This section introduces you to some of the tasks supported by our pipelines such as unconditional image generation and different techniques and variations of text-to-image generation. You'll also learn how to gain more control over the generation process by setting a seed for reproducibility and weighting prompts to adjust the influence certain words in the prompt has over the output. Finally, you'll see how you can create a community pipeline for a custom task like generating images from speech. \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/write_own_pipeline.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/write_own_pipeline.md deleted file mode 100644 index 50128f24fcdf7d008a6dcf8086e1aef314808e9a..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/write_own_pipeline.md +++ /dev/null @@ -1,290 +0,0 @@ - - -# 파이프라인, 모델 및 스케줄러 이해하기 - -[[colab에서 열기]] - -🧨 Diffusers는 사용자 친화적이며 유연한 도구 상자로, 사용사례에 맞게 diffusion 시스템을 구축 할 수 있도록 설계되었습니다. 이 도구 상자의 핵심은 모델과 스케줄러입니다. [`DiffusionPipeline`]은 편의를 위해 이러한 구성 요소를 번들로 제공하지만, 파이프라인을 분리하고 모델과 스케줄러를 개별적으로 사용해 새로운 diffusion 시스템을 만들 수도 있습니다. - -이 튜토리얼에서는 기본 파이프라인부터 시작해 Stable Diffusion 파이프라인까지 진행하며 모델과 스케줄러를 사용해 추론을 위한 diffusion 시스템을 조립하는 방법을 배웁니다. - -## 기본 파이프라인 해체하기 - -파이프라인은 추론을 위해 모델을 실행하는 빠르고 쉬운 방법으로, 이미지를 생성하는 데 코드가 4줄 이상 필요하지 않습니다: - -```py ->>> from diffusers import DDPMPipeline - ->>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to("cuda") ->>> image = ddpm(num_inference_steps=25).images[0] ->>> image -``` - -
        - Image of cat created from DDPMPipeline -
        - -정말 쉽습니다. 그런데 파이프라인은 어떻게 이렇게 할 수 있었을까요? 파이프라인을 세분화하여 내부에서 어떤 일이 일어나고 있는지 살펴보겠습니다. - -위 예시에서 파이프라인에는 [`UNet2DModel`] 모델과 [`DDPMScheduler`]가 포함되어 있습니다. 파이프라인은 원하는 출력 크기의 랜덤 노이즈를 받아 모델을 여러번 통과시켜 이미지의 노이즈를 제거합니다. 각 timestep에서 모델은 *noise residual*을 예측하고 스케줄러는 이를 사용하여 노이즈가 적은 이미지를 예측합니다. 파이프라인은 지정된 추론 스텝수에 도달할 때까지 이 과정을 반복합니다. - -모델과 스케줄러를 별도로 사용하여 파이프라인을 다시 생성하기 위해 자체적인 노이즈 제거 프로세스를 작성해 보겠습니다. - -1. 모델과 스케줄러를 불러옵니다: - - ```py - >>> from diffusers import DDPMScheduler, UNet2DModel - - >>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256") - >>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda") - ``` - -2. 노이즈 제거 프로세스를 실행할 timestep 수를 설정합니다: - - ```py - >>> scheduler.set_timesteps(50) - ``` - -3. 스케줄러의 timestep을 설정하면 균등한 간격의 구성 요소를 가진 텐서가 생성됩니다.(이 예시에서는 50개) 각 요소는 모델이 이미지의 노이즈를 제거하는 시간 간격에 해당합니다. 나중에 노이즈 제거 루프를 만들 때 이 텐서를 반복하여 이미지의 노이즈를 제거합니다: - - ```py - >>> scheduler.timesteps - tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720, - 700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440, - 420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160, - 140, 120, 100, 80, 60, 40, 20, 0]) - ``` - -4. 원하는 출력과 같은 모양을 가진 랜덤 노이즈를 생성합니다: - - ```py - >>> import torch - - >>> sample_size = model.config.sample_size - >>> noise = torch.randn((1, 3, sample_size, sample_size)).to("cuda") - ``` - -5. 이제 timestep을 반복하는 루프를 작성합니다. 각 timestep에서 모델은 [`UNet2DModel.forward`]를 통해 noisy residual을 반환합니다. 스케줄러의 [`~DDPMScheduler.step`] 메서드는 noisy residual, timestep, 그리고 입력을 받아 이전 timestep에서 이미지를 예측합니다. 이 출력은 노이즈 제거 루프의 모델에 대한 다음 입력이 되며, `timesteps` 배열의 끝에 도달할 때까지 반복됩니다. - - ```py - >>> input = noise - - >>> for t in scheduler.timesteps: - ... with torch.no_grad(): - ... noisy_residual = model(input, t).sample - ... previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample - ... input = previous_noisy_sample - ``` - - 이것이 전체 노이즈 제거 프로세스이며, 동일한 패턴을 사용해 모든 diffusion 시스템을 작성할 수 있습니다. - -6. 마지막 단계는 노이즈가 제거된 출력을 이미지로 변환하는 것입니다: - - ```py - >>> from PIL import Image - >>> import numpy as np - - >>> image = (input / 2 + 0.5).clamp(0, 1) - >>> image = image.cpu().permute(0, 2, 3, 1).numpy()[0] - >>> image = Image.fromarray((image * 255).round().astype("uint8")) - >>> image - ``` - -다음 섹션에서는 여러분의 기술을 시험해보고 좀 더 복잡한 Stable Diffusion 파이프라인을 분석해 보겠습니다. 방법은 거의 동일합니다. 필요한 구성요소들을 초기화하고 timestep수를 설정하여 `timestep` 배열을 생성합니다. 노이즈 제거 루프에서 `timestep` 배열이 사용되며, 이 배열의 각 요소에 대해 모델은 노이즈가 적은 이미지를 예측합니다. 노이즈 제거 루프는 `timestep`을 반복하고 각 timestep에서 noise residual을 출력하고 스케줄러는 이를 사용하여 이전 timestep에서 노이즈가 덜한 이미지를 예측합니다. 이 프로세스는 `timestep` 배열의 끝에 도달할 때까지 반복됩니다. - -한번 사용해 봅시다! - -## Stable Diffusion 파이프라인 해체하기 - -Stable Diffusion 은 text-to-image *latent diffusion* 모델입니다. latent diffusion 모델이라고 불리는 이유는 실제 픽셀 공간 대신 이미지의 저차원의 표현으로 작업하기 때문이고, 메모리 효율이 더 높습니다. 인코더는 이미지를 더 작은 표현으로 압축하고, 디코더는 압축된 표현을 다시 이미지로 변환합니다. text-to-image 모델의 경우 텍스트 임베딩을 생성하기 위해 tokenizer와 인코더가 필요합니다. 이전 예제에서 이미 UNet 모델과 스케줄러가 필요하다는 것은 알고 계셨을 것입니다. - -보시다시피, 이것은 UNet 모델만 포함된 DDPM 파이프라인보다 더 복잡합니다. Stable Diffusion 모델에는 세 개의 개별 사전학습된 모델이 있습니다. - - - -💡 VAE, UNet 및 텍스트 인코더 모델의 작동방식에 대한 자세한 내용은 [How does Stable Diffusion work?](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work) 블로그를 참조하세요. - - - -이제 Stable Diffusion 파이프라인에 필요한 구성요소들이 무엇인지 알았으니, [`~ModelMixin.from_pretrained`] 메서드를 사용해 모든 구성요소를 불러옵니다. 
사전학습된 체크포인트 [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5)에서 찾을 수 있으며, 각 구성요소들은 별도의 하위 폴더에 저장되어 있습니다: - -```py ->>> from PIL import Image ->>> import torch ->>> from transformers import CLIPTextModel, CLIPTokenizer ->>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler - ->>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae") ->>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer") ->>> text_encoder = CLIPTextModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="text_encoder") ->>> unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet") -``` - -기본 [`PNDMScheduler`] 대신, [`UniPCMultistepScheduler`]로 교체하여 다른 스케줄러를 얼마나 쉽게 연결할 수 있는지 확인합니다: - -```py ->>> from diffusers import UniPCMultistepScheduler - ->>> scheduler = UniPCMultistepScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") -``` - -추론 속도를 높이려면 스케줄러와 달리 학습 가능한 가중치가 있으므로 모델을 GPU로 옮기세요: - -```py ->>> torch_device = "cuda" ->>> vae.to(torch_device) ->>> text_encoder.to(torch_device) ->>> unet.to(torch_device) -``` - -### 텍스트 임베딩 생성하기 - -다음 단계는 임베딩을 생성하기 위해 텍스트를 토큰화하는 것입니다. 이 텍스트는 UNet 모델에서 condition으로 사용되고 입력 프롬프트와 유사한 방향으로 diffusion 프로세스를 조정하는 데 사용됩니다. - - - -💡 `guidance_scale` 매개변수는 이미지를 생성할 때 프롬프트에 얼마나 많은 가중치를 부여할지 결정합니다. - - - -다른 프롬프트를 생성하고 싶다면 원하는 프롬프트를 자유롭게 선택하세요! - -```py ->>> prompt = ["a photograph of an astronaut riding a horse"] ->>> height = 512 # Stable Diffusion의 기본 높이 ->>> width = 512 # Stable Diffusion의 기본 너비 ->>> num_inference_steps = 25 # 노이즈 제거 스텝 수 ->>> guidance_scale = 7.5 # classifier-free guidance를 위한 scale ->>> generator = torch.manual_seed(0) # 초기 잠재 노이즈를 생성하는 seed generator ->>> batch_size = len(prompt) -``` - -텍스트를 토큰화하고 프롬프트에서 임베딩을 생성합니다: - -```py ->>> text_input = tokenizer( -... prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt" -... ) - ->>> with torch.no_grad(): -... text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0] -``` - -또한 패딩 토큰의 임베딩인 *unconditional 텍스트 임베딩*을 생성해야 합니다. 이 임베딩은 조건부 `text_embeddings`과 동일한 shape(`batch_size` 그리고 `seq_length`)을 가져야 합니다: - -```py ->>> max_length = text_input.input_ids.shape[-1] ->>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt") ->>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0] -``` - -두번의 forward pass를 피하기 위해 conditional 임베딩과 unconditional 임베딩을 배치(batch)로 연결하겠습니다: - -```py ->>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) -``` - -### 랜덤 노이즈 생성 - -그다음 diffusion 프로세스의 시작점으로 초기 랜덤 노이즈를 생성합니다. 이것이 이미지의 잠재적 표현이며 점차적으로 노이즈가 제거됩니다. 이 시점에서 `latent` 이미지는 최종 이미지 크기보다 작지만 나중에 모델이 이를 512x512 이미지 크기로 변환하므로 괜찮습니다. - - - -💡 `vae` 모델에는 3개의 다운 샘플링 레이어가 있기 때문에 높이와 너비가 8로 나뉩니다. 다음을 실행하여 확인할 수 있습니다: - -```py -2 ** (len(vae.config.block_out_channels) - 1) == 8 -``` - - - -```py ->>> latents = torch.randn( -... (batch_size, unet.in_channels, height // 8, width // 8), -... generator=generator, -... ) ->>> latents = latents.to(torch_device) -``` - -### 이미지 노이즈 제거 - -먼저 [`UniPCMultistepScheduler`]와 같은 향상된 스케줄러에 필요한 노이즈 스케일 값인 초기 노이즈 분포 *sigma* 로 입력을 스케일링 하는 것부터 시작합니다: - -```py ->>> latents = latents * scheduler.init_noise_sigma -``` - -마지막 단계는 `latent`의 순수한 노이즈를 점진적으로 프롬프트에 설명된 이미지로 변환하는 노이즈 제거 루프를 생성하는 것입니다. 노이즈 제거 루프는 세 가지 작업을 수행해야 한다는 점을 기억하세요: - -1. 
노이즈 제거 중에 사용할 스케줄러의 timesteps를 설정합니다. -2. timestep을 따라 반복합니다. -3. 각 timestep에서 UNet 모델을 호출하여 noise residual을 예측하고 스케줄러에 전달하여 이전 노이즈 샘플을 계산합니다. - -```py ->>> from tqdm.auto import tqdm - ->>> scheduler.set_timesteps(num_inference_steps) - ->>> for t in tqdm(scheduler.timesteps): -... # classifier-free guidance를 수행하는 경우 두번의 forward pass를 수행하지 않도록 latent를 확장. -... latent_model_input = torch.cat([latents] * 2) - -... latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t) - -... # noise residual 예측 -... with torch.no_grad(): -... noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - -... # guidance 수행 -... noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) -... noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - -... # 이전 노이즈 샘플을 계산 x_t -> x_t-1 -... latents = scheduler.step(noise_pred, t, latents).prev_sample -``` - -### 이미지 디코딩 - -마지막 단계는 `vae`를 이용하여 잠재 표현을 이미지로 디코딩하고 `sample`과 함께 디코딩된 출력을 얻는 것입니다: - -```py -# latent를 스케일링하고 vae로 이미지 디코딩 -latents = 1 / 0.18215 * latents -with torch.no_grad(): - image = vae.decode(latents).sample -``` - -마지막으로 이미지를 `PIL.Image`로 변환하면 생성된 이미지를 확인할 수 있습니다! - -```py ->>> image = (image / 2 + 0.5).clamp(0, 1) ->>> image = image.detach().cpu().permute(0, 2, 3, 1).numpy() ->>> images = (image * 255).round().astype("uint8") ->>> pil_images = [Image.fromarray(image) for image in images] ->>> pil_images[0] -``` - -
        - -
        - -## 다음 단계 - -기본 파이프라인부터 복잡한 파이프라인까지, 자신만의 diffusion 시스템을 작성하는 데 필요한 것은 노이즈 제거 루프뿐이라는 것을 알 수 있었습니다. 이 루프는 스케줄러의 timesteps를 설정하고, 이를 반복하며, UNet 모델을 호출하여 noise residual을 예측하고 스케줄러에 전달하여 이전 노이즈 샘플을 계산하는 과정을 번갈아 가며 수행해야 합니다. - -이것이 바로 🧨 Diffusers가 설계된 목적입니다: 모델과 스케줄러를 사용해 자신만의 diffusion 시스템을 직관적이고 쉽게 작성할 수 있도록 하기 위해서입니다. - -다음 단계를 자유롭게 진행하세요: - -* 🧨 Diffusers에 [파이프라인 구축 및 기여](using-diffusers/#contribute_pipeline)하는 방법을 알아보세요. 여러분이 어떤 아이디어를 내놓을지 기대됩니다! -* 라이브러리에서 [기본 파이프라인](./api/pipelines/overview)을 살펴보고, 모델과 스케줄러를 별도로 사용하여 파이프라인을 처음부터 해체하고 빌드할 수 있는지 확인해 보세요. diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/datasets/wider_face.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/datasets/wider_face.py deleted file mode 100644 index d1d649be42bca2955fb56a784fe80bcc2fdce4e1..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/datasets/wider_face.py +++ /dev/null @@ -1,63 +0,0 @@ -# dataset settings -dataset_type = 'WIDERFaceDataset' -data_root = 'data/WIDERFace/' -img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile', to_float32=True), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='PhotoMetricDistortion', - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18), - dict( - type='Expand', - mean=img_norm_cfg['mean'], - to_rgb=img_norm_cfg['to_rgb'], - ratio_range=(1, 4)), - dict( - type='MinIoURandomCrop', - min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), - min_crop_size=0.3), - dict(type='Resize', img_scale=(300, 300), keep_ratio=False), - dict(type='Normalize', **img_norm_cfg), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(300, 300), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=False), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=60, - workers_per_gpu=2, - train=dict( - type='RepeatDataset', - times=2, - dataset=dict( - type=dataset_type, - ann_file=data_root + 'train.txt', - img_prefix=data_root + 'WIDER_train/', - min_size=17, - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - ann_file=data_root + 'val.txt', - img_prefix=data_root + 'WIDER_val/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'val.txt', - img_prefix=data_root + 'WIDER_val/', - pipeline=test_pipeline)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/mask_rcnn_hrnetv2p_w40_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/hrnet/mask_rcnn_hrnetv2p_w40_1x_coco.py deleted file mode 100644 index 5b10c166cf36601bdb895de81874970aebc83310..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/mask_rcnn_hrnetv2p_w40_1x_coco.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = './mask_rcnn_hrnetv2p_w18_1x_coco.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w40', - backbone=dict( - type='HRNet', - extra=dict( - stage2=dict(num_channels=(40, 80)), - stage3=dict(num_channels=(40, 80, 160)), - stage4=dict(num_channels=(40, 80, 160, 320)))), - neck=dict(type='HRFPN', in_channels=[40, 80, 160, 320], out_channels=256)) diff --git 
a/spaces/Andy1621/uniformer_image_detection/tools/dist_train.sh b/spaces/Andy1621/uniformer_image_detection/tools/dist_train.sh deleted file mode 100644 index 5b43fffbf28fc9b8ba7c14efcd5e4f8b19279470..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/tools/dist_train.sh +++ /dev/null @@ -1,9 +0,0 @@ -#!/usr/bin/env bash - -CONFIG=$1 -GPUS=$2 -PORT=${PORT:-29500} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -python -m torch.distributed.launch --nproc_per_node=$GPUS --master_port=$PORT \ - $(dirname "$0")/train.py $CONFIG --launcher pytorch ${@:3} diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r50-d8_512x512_80k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r50-d8_512x512_80k_ade20k.py deleted file mode 100644 index 09604c39729abfc9015eb971069b987c8d8a82cb..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r50-d8_512x512_80k_ade20k.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/dnl_r50-d8.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py' -] -model = dict( - decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/lraspp_m-v3s-d8_512x1024_320k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/lraspp_m-v3s-d8_512x1024_320k_cityscapes.py deleted file mode 100644 index d4e368b2a11ed6433d8f2594a2cc3184fe5ddfff..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/mobilenet_v3/lraspp_m-v3s-d8_512x1024_320k_cityscapes.py +++ /dev/null @@ -1,23 +0,0 @@ -_base_ = './lraspp_m-v3-d8_512x1024_320k_cityscapes.py' -norm_cfg = dict(type='SyncBN', eps=0.001, requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://contrib/mobilenet_v3_small', - backbone=dict( - type='MobileNetV3', - arch='small', - out_indices=(0, 1, 12), - norm_cfg=norm_cfg), - decode_head=dict( - type='LRASPPHead', - in_channels=(16, 16, 576), - in_index=(0, 1, 2), - channels=128, - input_transform='multiple_select', - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))) diff --git a/spaces/AnimalEquality/chatbot/scripts/nbdev_readme_patch_hface.sh b/spaces/AnimalEquality/chatbot/scripts/nbdev_readme_patch_hface.sh deleted file mode 100644 index e9303041257909c1160a3f5ad58635846021a4ce..0000000000000000000000000000000000000000 --- a/spaces/AnimalEquality/chatbot/scripts/nbdev_readme_patch_hface.sh +++ /dev/null @@ -1,18 +0,0 @@ -#!/bin/bash -# Run from root dir - -nbdev_readme -yaml_file="hface.yaml" -readme_file="README.md" - -# Read the content of the YAML file -yaml_content=$(cat "$yaml_file") - -# Read the content of the README file -readme_content=$(cat "$readme_file") - -# Combine the YAML content and README content -combined_content="$yaml_content\n$readme_content" - -# Overwrite the README file with the combined content -echo -e "$combined_content" > "$readme_file" diff --git a/spaces/AsakuraMizu/moe-tts/text/sanskrit.py b/spaces/AsakuraMizu/moe-tts/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/AsakuraMizu/moe-tts/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ 
-import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/jaraco/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/jaraco/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Atualli/node-media-server/Dockerfile b/spaces/Atualli/node-media-server/Dockerfile deleted file mode 100644 index 9d3ec8b182833c65024ebe1f86f5d789516e552f..0000000000000000000000000000000000000000 --- a/spaces/Atualli/node-media-server/Dockerfile +++ /dev/null @@ -1,13 +0,0 @@ -FROM node:alpine -ENV PORT=7860 -# ENV PORT=7861 -# ENV UUID=d342d11e-d424-4583-b36e-524ab1f0afa4 -# EXPOSE 7860 7861 -WORKDIR /nms -RUN npm i node-media-server -#RUN npm uninstall node-media-server -#CMD ["node-media-server","start"] -#RUN nms -d -p 1935:1935 -p 8000:8000 -p 8443:8443 illuspas/node-media-server -COPY . 
/nms -CMD ["node", "app.js"] -# RUN node app.js \ No newline at end of file diff --git a/spaces/Awesimo/jojogan/e4e/models/discriminator.py b/spaces/Awesimo/jojogan/e4e/models/discriminator.py deleted file mode 100644 index 16bf3722c7f2e35cdc9bd177a33ed0975e67200d..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/models/discriminator.py +++ /dev/null @@ -1,20 +0,0 @@ -from torch import nn - - -class LatentCodesDiscriminator(nn.Module): - def __init__(self, style_dim, n_mlp): - super().__init__() - - self.style_dim = style_dim - - layers = [] - for i in range(n_mlp-1): - layers.append( - nn.Linear(style_dim, style_dim) - ) - layers.append(nn.LeakyReLU(0.2)) - layers.append(nn.Linear(512, 1)) - self.mlp = nn.Sequential(*layers) - - def forward(self, w): - return self.mlp(w) diff --git a/spaces/BetterAPI/BetterChat_new/src/routes/conversation/+server.ts b/spaces/BetterAPI/BetterChat_new/src/routes/conversation/+server.ts deleted file mode 100644 index 26c5867e7d11d1048c4d234b517e8b95f8022864..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat_new/src/routes/conversation/+server.ts +++ /dev/null @@ -1,57 +0,0 @@ -import type { RequestHandler } from "./$types"; -import { collections } from "$lib/server/database"; -import { ObjectId } from "mongodb"; -import { error, redirect } from "@sveltejs/kit"; -import { base } from "$app/paths"; -import { z } from "zod"; -import type { Message } from "$lib/types/Message"; - -export const POST: RequestHandler = async (input) => { - const body = await input.request.text(); - - let title = ""; - let messages: Message[] = []; - let fromShareId: string | undefined; - - if (body) { - fromShareId = z.object({ fromShare: z.string().optional() }).parse(JSON.parse(body)).fromShare; - - if (fromShareId) { - const conversation = await collections.sharedConversations.findOne({ - _id: fromShareId, - }); - - if (!conversation) { - throw error(404, "Conversation not found"); - } - - title = conversation.title; - messages = conversation.messages; - } - } - - const res = await collections.conversations.insertOne({ - _id: new ObjectId(), - title: - title || - "Untitled " + - ((await collections.conversations.countDocuments({ sessionId: input.locals.sessionId })) + - 1), - messages, - createdAt: new Date(), - updatedAt: new Date(), - sessionId: input.locals.sessionId, - ...(fromShareId ? { meta: { fromShareId } } : {}), - }); - - return new Response( - JSON.stringify({ - conversationId: res.insertedId.toString(), - }), - { headers: { "Content-Type": "application/json" } } - ); -}; - -export const GET: RequestHandler = async () => { - throw redirect(302, base || "/"); -}; diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/msvccompiler.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/msvccompiler.py deleted file mode 100644 index 1069e9951abd003cf347e3d151a6f6ca7340eb00..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/msvccompiler.py +++ /dev/null @@ -1,695 +0,0 @@ -"""distutils.msvccompiler - -Contains MSVCCompiler, an implementation of the abstract CCompiler class -for the Microsoft Visual Studio. 
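-This module is deprecated: importing it emits a DeprecationWarning and it is
-slated for removal (see the warnings.warn call below).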
-""" - -# Written by Perry Stoll -# hacked by Robin Becker and Thomas Heller to do a better job of -# finding DevStudio (through the registry) - -import sys -import os -import warnings -from distutils.errors import ( - DistutilsExecError, - DistutilsPlatformError, - CompileError, - LibError, - LinkError, -) -from distutils.ccompiler import CCompiler, gen_lib_options -from distutils import log - -_can_read_reg = False -try: - import winreg - - _can_read_reg = True - hkey_mod = winreg - - RegOpenKeyEx = winreg.OpenKeyEx - RegEnumKey = winreg.EnumKey - RegEnumValue = winreg.EnumValue - RegError = winreg.error - -except ImportError: - try: - import win32api - import win32con - - _can_read_reg = True - hkey_mod = win32con - - RegOpenKeyEx = win32api.RegOpenKeyEx - RegEnumKey = win32api.RegEnumKey - RegEnumValue = win32api.RegEnumValue - RegError = win32api.error - except ImportError: - log.info( - "Warning: Can't read registry to find the " - "necessary compiler setting\n" - "Make sure that Python modules winreg, " - "win32api or win32con are installed." - ) - pass - -if _can_read_reg: - HKEYS = ( - hkey_mod.HKEY_USERS, - hkey_mod.HKEY_CURRENT_USER, - hkey_mod.HKEY_LOCAL_MACHINE, - hkey_mod.HKEY_CLASSES_ROOT, - ) - - -warnings.warn( - "msvccompiler is deprecated and slated to be removed " - "in the future. Please discontinue use or file an issue " - "with pypa/distutils describing your use case.", - DeprecationWarning, -) - - -def read_keys(base, key): - """Return list of registry keys.""" - try: - handle = RegOpenKeyEx(base, key) - except RegError: - return None - L = [] - i = 0 - while True: - try: - k = RegEnumKey(handle, i) - except RegError: - break - L.append(k) - i += 1 - return L - - -def read_values(base, key): - """Return dict of registry keys and values. - - All names are converted to lowercase. - """ - try: - handle = RegOpenKeyEx(base, key) - except RegError: - return None - d = {} - i = 0 - while True: - try: - name, value, type = RegEnumValue(handle, i) - except RegError: - break - name = name.lower() - d[convert_mbcs(name)] = convert_mbcs(value) - i += 1 - return d - - -def convert_mbcs(s): - dec = getattr(s, "decode", None) - if dec is not None: - try: - s = dec("mbcs") - except UnicodeError: - pass - return s - - -class MacroExpander: - def __init__(self, version): - self.macros = {} - self.load_macros(version) - - def set_macro(self, macro, path, key): - for base in HKEYS: - d = read_values(base, path) - if d: - self.macros["$(%s)" % macro] = d[key] - break - - def load_macros(self, version): - vsbase = r"Software\Microsoft\VisualStudio\%0.1f" % version - self.set_macro("VCInstallDir", vsbase + r"\Setup\VC", "productdir") - self.set_macro("VSInstallDir", vsbase + r"\Setup\VS", "productdir") - net = r"Software\Microsoft\.NETFramework" - self.set_macro("FrameworkDir", net, "installroot") - try: - if version > 7.0: - self.set_macro("FrameworkSDKDir", net, "sdkinstallrootv1.1") - else: - self.set_macro("FrameworkSDKDir", net, "sdkinstallroot") - except KeyError: - raise DistutilsPlatformError( - """Python was built with Visual Studio 2003; -extensions must be built with a compiler than can generate compatible binaries. -Visual Studio 2003 was not found on this system. 
If you have Cygwin installed, -you can try compiling with MingW32, by passing "-c mingw32" to setup.py.""" - ) - - p = r"Software\Microsoft\NET Framework Setup\Product" - for base in HKEYS: - try: - h = RegOpenKeyEx(base, p) - except RegError: - continue - key = RegEnumKey(h, 0) - d = read_values(base, r"{}\{}".format(p, key)) - self.macros["$(FrameworkVersion)"] = d["version"] - - def sub(self, s): - for k, v in self.macros.items(): - s = s.replace(k, v) - return s - - -def get_build_version(): - """Return the version of MSVC that was used to build Python. - - For Python 2.3 and up, the version number is included in - sys.version. For earlier versions, assume the compiler is MSVC 6. - """ - prefix = "MSC v." - i = sys.version.find(prefix) - if i == -1: - return 6 - i = i + len(prefix) - s, rest = sys.version[i:].split(" ", 1) - majorVersion = int(s[:-2]) - 6 - if majorVersion >= 13: - # v13 was skipped and should be v14 - majorVersion += 1 - minorVersion = int(s[2:3]) / 10.0 - # I don't think paths are affected by minor version in version 6 - if majorVersion == 6: - minorVersion = 0 - if majorVersion >= 6: - return majorVersion + minorVersion - # else we don't know what version of the compiler this is - return None - - -def get_build_architecture(): - """Return the processor architecture. - - Possible results are "Intel" or "AMD64". - """ - - prefix = " bit (" - i = sys.version.find(prefix) - if i == -1: - return "Intel" - j = sys.version.find(")", i) - return sys.version[i + len(prefix) : j] - - -def normalize_and_reduce_paths(paths): - """Return a list of normalized paths with duplicates removed. - - The current order of paths is maintained. - """ - # Paths are normalized so things like: /a and /a/ aren't both preserved. - reduced_paths = [] - for p in paths: - np = os.path.normpath(p) - # XXX(nnorwitz): O(n**2), if reduced_paths gets long perhaps use a set. - if np not in reduced_paths: - reduced_paths.append(np) - return reduced_paths - - -class MSVCCompiler(CCompiler): - """Concrete class that implements an interface to Microsoft Visual C++, - as defined by the CCompiler abstract class.""" - - compiler_type = 'msvc' - - # Just set this so CCompiler's constructor doesn't barf. We currently - # don't use the 'set_executables()' bureaucracy provided by CCompiler, - # as it really isn't necessary for this sort of single-compiler class. - # Would be nice to have a consistent interface with UnixCCompiler, - # though, so it's worth thinking about. - executables = {} - - # Private class data (need to distinguish C from C++ source for compiler) - _c_extensions = ['.c'] - _cpp_extensions = ['.cc', '.cpp', '.cxx'] - _rc_extensions = ['.rc'] - _mc_extensions = ['.mc'] - - # Needed for the filename generation methods provided by the - # base class, CCompiler. - src_extensions = _c_extensions + _cpp_extensions + _rc_extensions + _mc_extensions - res_extension = '.res' - obj_extension = '.obj' - static_lib_extension = '.lib' - shared_lib_extension = '.dll' - static_lib_format = shared_lib_format = '%s%s' - exe_extension = '.exe' - - def __init__(self, verbose=0, dry_run=0, force=0): - super().__init__(verbose, dry_run, force) - self.__version = get_build_version() - self.__arch = get_build_architecture() - if self.__arch == "Intel": - # x86 - if self.__version >= 7: - self.__root = r"Software\Microsoft\VisualStudio" - self.__macros = MacroExpander(self.__version) - else: - self.__root = r"Software\Microsoft\Devstudio" - self.__product = "Visual Studio version %s" % self.__version - else: - # Win64. 
Assume this was built with the platform SDK - self.__product = "Microsoft SDK compiler %s" % (self.__version + 6) - - self.initialized = False - - def initialize(self): - self.__paths = [] - if ( - "DISTUTILS_USE_SDK" in os.environ - and "MSSdk" in os.environ - and self.find_exe("cl.exe") - ): - # Assume that the SDK set up everything alright; don't try to be - # smarter - self.cc = "cl.exe" - self.linker = "link.exe" - self.lib = "lib.exe" - self.rc = "rc.exe" - self.mc = "mc.exe" - else: - self.__paths = self.get_msvc_paths("path") - - if len(self.__paths) == 0: - raise DistutilsPlatformError( - "Python was built with %s, " - "and extensions need to be built with the same " - "version of the compiler, but it isn't installed." % self.__product - ) - - self.cc = self.find_exe("cl.exe") - self.linker = self.find_exe("link.exe") - self.lib = self.find_exe("lib.exe") - self.rc = self.find_exe("rc.exe") # resource compiler - self.mc = self.find_exe("mc.exe") # message compiler - self.set_path_env_var('lib') - self.set_path_env_var('include') - - # extend the MSVC path with the current path - try: - for p in os.environ['path'].split(';'): - self.__paths.append(p) - except KeyError: - pass - self.__paths = normalize_and_reduce_paths(self.__paths) - os.environ['path'] = ";".join(self.__paths) - - self.preprocess_options = None - if self.__arch == "Intel": - self.compile_options = ['/nologo', '/O2', '/MD', '/W3', '/GX', '/DNDEBUG'] - self.compile_options_debug = [ - '/nologo', - '/Od', - '/MDd', - '/W3', - '/GX', - '/Z7', - '/D_DEBUG', - ] - else: - # Win64 - self.compile_options = ['/nologo', '/O2', '/MD', '/W3', '/GS-', '/DNDEBUG'] - self.compile_options_debug = [ - '/nologo', - '/Od', - '/MDd', - '/W3', - '/GS-', - '/Z7', - '/D_DEBUG', - ] - - self.ldflags_shared = ['/DLL', '/nologo', '/INCREMENTAL:NO'] - if self.__version >= 7: - self.ldflags_shared_debug = ['/DLL', '/nologo', '/INCREMENTAL:no', '/DEBUG'] - else: - self.ldflags_shared_debug = [ - '/DLL', - '/nologo', - '/INCREMENTAL:no', - '/pdb:None', - '/DEBUG', - ] - self.ldflags_static = ['/nologo'] - - self.initialized = True - - # -- Worker methods ------------------------------------------------ - - def object_filenames(self, source_filenames, strip_dir=0, output_dir=''): - # Copied from ccompiler.py, extended to return .res as 'object'-file - # for .rc input file - if output_dir is None: - output_dir = '' - obj_names = [] - for src_name in source_filenames: - (base, ext) = os.path.splitext(src_name) - base = os.path.splitdrive(base)[1] # Chop off the drive - base = base[os.path.isabs(base) :] # If abs, chop off leading / - if ext not in self.src_extensions: - # Better to raise an exception instead of silently continuing - # and later complain about sources and targets having - # different lengths - raise CompileError("Don't know how to compile %s" % src_name) - if strip_dir: - base = os.path.basename(base) - if ext in self._rc_extensions: - obj_names.append(os.path.join(output_dir, base + self.res_extension)) - elif ext in self._mc_extensions: - obj_names.append(os.path.join(output_dir, base + self.res_extension)) - else: - obj_names.append(os.path.join(output_dir, base + self.obj_extension)) - return obj_names - - def compile( # noqa: C901 - self, - sources, - output_dir=None, - macros=None, - include_dirs=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - depends=None, - ): - - if not self.initialized: - self.initialize() - compile_info = self._setup_compile( - output_dir, macros, include_dirs, sources, depends, 
extra_postargs - ) - macros, objects, extra_postargs, pp_opts, build = compile_info - - compile_opts = extra_preargs or [] - compile_opts.append('/c') - if debug: - compile_opts.extend(self.compile_options_debug) - else: - compile_opts.extend(self.compile_options) - - for obj in objects: - try: - src, ext = build[obj] - except KeyError: - continue - if debug: - # pass the full pathname to MSVC in debug mode, - # this allows the debugger to find the source file - # without asking the user to browse for it - src = os.path.abspath(src) - - if ext in self._c_extensions: - input_opt = "/Tc" + src - elif ext in self._cpp_extensions: - input_opt = "/Tp" + src - elif ext in self._rc_extensions: - # compile .RC to .RES file - input_opt = src - output_opt = "/fo" + obj - try: - self.spawn([self.rc] + pp_opts + [output_opt] + [input_opt]) - except DistutilsExecError as msg: - raise CompileError(msg) - continue - elif ext in self._mc_extensions: - # Compile .MC to .RC file to .RES file. - # * '-h dir' specifies the directory for the - # generated include file - # * '-r dir' specifies the target directory of the - # generated RC file and the binary message resource - # it includes - # - # For now (since there are no options to change this), - # we use the source-directory for the include file and - # the build directory for the RC file and message - # resources. This works at least for win32all. - h_dir = os.path.dirname(src) - rc_dir = os.path.dirname(obj) - try: - # first compile .MC to .RC and .H file - self.spawn([self.mc] + ['-h', h_dir, '-r', rc_dir] + [src]) - base, _ = os.path.splitext(os.path.basename(src)) - rc_file = os.path.join(rc_dir, base + '.rc') - # then compile .RC to .RES file - self.spawn([self.rc] + ["/fo" + obj] + [rc_file]) - - except DistutilsExecError as msg: - raise CompileError(msg) - continue - else: - # how to handle this file? - raise CompileError( - "Don't know how to compile {} to {}".format(src, obj) - ) - - output_opt = "/Fo" + obj - try: - self.spawn( - [self.cc] - + compile_opts - + pp_opts - + [input_opt, output_opt] - + extra_postargs - ) - except DistutilsExecError as msg: - raise CompileError(msg) - - return objects - - def create_static_lib( - self, objects, output_libname, output_dir=None, debug=0, target_lang=None - ): - - if not self.initialized: - self.initialize() - (objects, output_dir) = self._fix_object_args(objects, output_dir) - output_filename = self.library_filename(output_libname, output_dir=output_dir) - - if self._need_link(objects, output_filename): - lib_args = objects + ['/OUT:' + output_filename] - if debug: - pass # XXX what goes here? 
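            # The `if debug: pass` above is deliberately a no-op (the XXX marks
            # an open question in the original code); this implementation passes
            # no debug-specific flags to lib.exe, so the spawn below amounts to
            # roughly:
            #   lib.exe obj1.obj obj2.obj ... /OUT:<output_filename>
            # (illustrative command line; the real arguments come from lib_args).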
- try: - self.spawn([self.lib] + lib_args) - except DistutilsExecError as msg: - raise LibError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def link( # noqa: C901 - self, - target_desc, - objects, - output_filename, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - export_symbols=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None, - ): - - if not self.initialized: - self.initialize() - (objects, output_dir) = self._fix_object_args(objects, output_dir) - fixed_args = self._fix_lib_args(libraries, library_dirs, runtime_library_dirs) - (libraries, library_dirs, runtime_library_dirs) = fixed_args - - if runtime_library_dirs: - self.warn( - "I don't know what to do with 'runtime_library_dirs': " - + str(runtime_library_dirs) - ) - - lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, libraries) - if output_dir is not None: - output_filename = os.path.join(output_dir, output_filename) - - if self._need_link(objects, output_filename): - if target_desc == CCompiler.EXECUTABLE: - if debug: - ldflags = self.ldflags_shared_debug[1:] - else: - ldflags = self.ldflags_shared[1:] - else: - if debug: - ldflags = self.ldflags_shared_debug - else: - ldflags = self.ldflags_shared - - export_opts = [] - for sym in export_symbols or []: - export_opts.append("/EXPORT:" + sym) - - ld_args = ( - ldflags + lib_opts + export_opts + objects + ['/OUT:' + output_filename] - ) - - # The MSVC linker generates .lib and .exp files, which cannot be - # suppressed by any linker switches. The .lib files may even be - # needed! Make sure they are generated in the temporary build - # directory. Since they have different names for debug and release - # builds, they can go into the same directory. - if export_symbols is not None: - (dll_name, dll_ext) = os.path.splitext( - os.path.basename(output_filename) - ) - implib_file = os.path.join( - os.path.dirname(objects[0]), self.library_filename(dll_name) - ) - ld_args.append('/IMPLIB:' + implib_file) - - if extra_preargs: - ld_args[:0] = extra_preargs - if extra_postargs: - ld_args.extend(extra_postargs) - - self.mkpath(os.path.dirname(output_filename)) - try: - self.spawn([self.linker] + ld_args) - except DistutilsExecError as msg: - raise LinkError(msg) - - else: - log.debug("skipping %s (up-to-date)", output_filename) - - # -- Miscellaneous methods ----------------------------------------- - # These are all used by the 'gen_lib_options() function, in - # ccompiler.py. - - def library_dir_option(self, dir): - return "/LIBPATH:" + dir - - def runtime_library_dir_option(self, dir): - raise DistutilsPlatformError( - "don't know how to set runtime library search path for MSVC++" - ) - - def library_option(self, lib): - return self.library_filename(lib) - - def find_library_file(self, dirs, lib, debug=0): - # Prefer a debugging library if found (and requested), but deal - # with it if we don't have one. - if debug: - try_names = [lib + "_d", lib] - else: - try_names = [lib] - for dir in dirs: - for name in try_names: - libfile = os.path.join(dir, self.library_filename(name)) - if os.path.exists(libfile): - return libfile - else: - # Oops, didn't find it in *any* of 'dirs' - return None - - # Helper methods for using the MSVC registry settings - - def find_exe(self, exe): - """Return path to an MSVC executable program. 
- - Tries to find the program in several places: first, one of the - MSVC program search paths from the registry; next, the directories - in the PATH environment variable. If any of those work, return an - absolute path that is known to exist. If none of them work, just - return the original program name, 'exe'. - """ - for p in self.__paths: - fn = os.path.join(os.path.abspath(p), exe) - if os.path.isfile(fn): - return fn - - # didn't find it; try existing path - for p in os.environ['Path'].split(';'): - fn = os.path.join(os.path.abspath(p), exe) - if os.path.isfile(fn): - return fn - - return exe - - def get_msvc_paths(self, path, platform='x86'): - """Get a list of devstudio directories (include, lib or path). - - Return a list of strings. The list will be empty if unable to - access the registry or appropriate registry keys not found. - """ - if not _can_read_reg: - return [] - - path = path + " dirs" - if self.__version >= 7: - key = r"{}\{:0.1f}\VC\VC_OBJECTS_PLATFORM_INFO\Win32\Directories".format( - self.__root, - self.__version, - ) - else: - key = ( - r"%s\6.0\Build System\Components\Platforms" - r"\Win32 (%s)\Directories" % (self.__root, platform) - ) - - for base in HKEYS: - d = read_values(base, key) - if d: - if self.__version >= 7: - return self.__macros.sub(d[path]).split(";") - else: - return d[path].split(";") - # MSVC 6 seems to create the registry entries we need only when - # the GUI is run. - if self.__version == 6: - for base in HKEYS: - if read_values(base, r"%s\6.0" % self.__root) is not None: - self.warn( - "It seems you have Visual Studio 6 installed, " - "but the expected registry settings are not present.\n" - "You must at least run the Visual Studio GUI once " - "so that these entries are created." - ) - break - return [] - - def set_path_env_var(self, name): - """Set environment variable 'name' to an MSVC path type value. - - This is equivalent to a SET command prior to execution of spawned - commands. 
- """ - - if name == "lib": - p = self.get_msvc_paths("library") - else: - p = self.get_msvc_paths(name) - if p: - os.environ[name] = ';'.join(p) - - -if get_build_version() >= 8.0: - log.debug("Importing new compiler from distutils.msvc9compiler") - OldMSVCCompiler = MSVCCompiler - from distutils.msvc9compiler import MSVCCompiler - - # get_build_architecture not really relevant now we support cross-compile - from distutils.msvc9compiler import MacroExpander # noqa: F811 diff --git a/spaces/Brij1808/Blog_Generator/app.py b/spaces/Brij1808/Blog_Generator/app.py deleted file mode 100644 index 653dabb9e5605811e653ba7873300326ba12b36e..0000000000000000000000000000000000000000 --- a/spaces/Brij1808/Blog_Generator/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import os -import gradio as gr -from transformers import pipeline - - -def blog_gen(txt): - generator=pipeline(task='text-generation',model='gpt2') - gen_blog=generator(txt,max_length=300,num_return_sequences=2) - remove_gen_text = list(map(lambda x: x["generated_text"], gen_blog)) - clean_text = list(map(lambda x: x.replace("\n\n", " "), remove_gen_text)) - return clean_text - - -iface = gr.Interface(fn=blog_gen, - inputs=[ - gr.inputs.Textbox( - lines=2, placeholder=None, label='Sentence'), - ], - outputs=[gr.outputs.JSON(label=None)]) -iface.launch() \ No newline at end of file diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_class.py b/spaces/CVPR/LIVE/pybind11/tests/test_class.py deleted file mode 100644 index 4214fe79d7fbab2b38a1f15ca39d41e7cd33a171..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_class.py +++ /dev/null @@ -1,333 +0,0 @@ -# -*- coding: utf-8 -*- -import pytest - -import env # noqa: F401 - -from pybind11_tests import class_ as m -from pybind11_tests import UserType, ConstructorStats - - -def test_repr(): - # In Python 3.3+, repr() accesses __qualname__ - assert "pybind11_type" in repr(type(UserType)) - assert "UserType" in repr(UserType) - - -def test_instance(msg): - with pytest.raises(TypeError) as excinfo: - m.NoConstructor() - assert msg(excinfo.value) == "m.class_.NoConstructor: No constructor defined!" 
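    # With no bound __init__, the only way to obtain an instance is the
    # factory below; ConstructorStats then confirms that exactly one C++
    # object is alive while the Python reference exists and that it is
    # destroyed once the reference is deleted.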
- - instance = m.NoConstructor.new_instance() - - cstats = ConstructorStats.get(m.NoConstructor) - assert cstats.alive() == 1 - del instance - assert cstats.alive() == 0 - - -def test_docstrings(doc): - assert doc(UserType) == "A `py::class_` type for testing" - assert UserType.__name__ == "UserType" - assert UserType.__module__ == "pybind11_tests" - assert UserType.get_value.__name__ == "get_value" - assert UserType.get_value.__module__ == "pybind11_tests" - - assert doc(UserType.get_value) == """ - get_value(self: m.UserType) -> int - - Get value using a method - """ - assert doc(UserType.value) == "Get/set value using a property" - - assert doc(m.NoConstructor.new_instance) == """ - new_instance() -> m.class_.NoConstructor - - Return an instance - """ - - -def test_qualname(doc): - """Tests that a properly qualified name is set in __qualname__ (even in pre-3.3, where we - backport the attribute) and that generated docstrings properly use it and the module name""" - assert m.NestBase.__qualname__ == "NestBase" - assert m.NestBase.Nested.__qualname__ == "NestBase.Nested" - - assert doc(m.NestBase.__init__) == """ - __init__(self: m.class_.NestBase) -> None - """ - assert doc(m.NestBase.g) == """ - g(self: m.class_.NestBase, arg0: m.class_.NestBase.Nested) -> None - """ - assert doc(m.NestBase.Nested.__init__) == """ - __init__(self: m.class_.NestBase.Nested) -> None - """ - assert doc(m.NestBase.Nested.fn) == """ - fn(self: m.class_.NestBase.Nested, arg0: int, arg1: m.class_.NestBase, arg2: m.class_.NestBase.Nested) -> None - """ # noqa: E501 line too long - assert doc(m.NestBase.Nested.fa) == """ - fa(self: m.class_.NestBase.Nested, a: int, b: m.class_.NestBase, c: m.class_.NestBase.Nested) -> None - """ # noqa: E501 line too long - assert m.NestBase.__module__ == "pybind11_tests.class_" - assert m.NestBase.Nested.__module__ == "pybind11_tests.class_" - - -def test_inheritance(msg): - roger = m.Rabbit('Rabbit') - assert roger.name() + " is a " + roger.species() == "Rabbit is a parrot" - assert m.pet_name_species(roger) == "Rabbit is a parrot" - - polly = m.Pet('Polly', 'parrot') - assert polly.name() + " is a " + polly.species() == "Polly is a parrot" - assert m.pet_name_species(polly) == "Polly is a parrot" - - molly = m.Dog('Molly') - assert molly.name() + " is a " + molly.species() == "Molly is a dog" - assert m.pet_name_species(molly) == "Molly is a dog" - - fred = m.Hamster('Fred') - assert fred.name() + " is a " + fred.species() == "Fred is a rodent" - - assert m.dog_bark(molly) == "Woof!" - - with pytest.raises(TypeError) as excinfo: - m.dog_bark(polly) - assert msg(excinfo.value) == """ - dog_bark(): incompatible function arguments. The following argument types are supported: - 1. (arg0: m.class_.Dog) -> str - - Invoked with: - """ - - with pytest.raises(TypeError) as excinfo: - m.Chimera("lion", "goat") - assert "No constructor defined!" in str(excinfo.value) - - -def test_inheritance_init(msg): - - # Single base - class Python(m.Pet): - def __init__(self): - pass - with pytest.raises(TypeError) as exc_info: - Python() - expected = ["m.class_.Pet.__init__() must be called when overriding __init__", - "Pet.__init__() must be called when overriding __init__"] # PyPy? - # TODO: fix PyPy error message wrt. tp_name/__qualname__? 
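    # Both message spellings are accepted: CPython reports the fully
    # qualified name (m.class_.Pet), while PyPy may fall back to the bare
    # tp_name, so the assert only requires one of the two to match.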
- assert msg(exc_info.value) in expected - - # Multiple bases - class RabbitHamster(m.Rabbit, m.Hamster): - def __init__(self): - m.Rabbit.__init__(self, "RabbitHamster") - - with pytest.raises(TypeError) as exc_info: - RabbitHamster() - expected = ["m.class_.Hamster.__init__() must be called when overriding __init__", - "Hamster.__init__() must be called when overriding __init__"] # PyPy - assert msg(exc_info.value) in expected - - -def test_automatic_upcasting(): - assert type(m.return_class_1()).__name__ == "DerivedClass1" - assert type(m.return_class_2()).__name__ == "DerivedClass2" - assert type(m.return_none()).__name__ == "NoneType" - # Repeat these a few times in a random order to ensure no invalid caching is applied - assert type(m.return_class_n(1)).__name__ == "DerivedClass1" - assert type(m.return_class_n(2)).__name__ == "DerivedClass2" - assert type(m.return_class_n(0)).__name__ == "BaseClass" - assert type(m.return_class_n(2)).__name__ == "DerivedClass2" - assert type(m.return_class_n(2)).__name__ == "DerivedClass2" - assert type(m.return_class_n(0)).__name__ == "BaseClass" - assert type(m.return_class_n(1)).__name__ == "DerivedClass1" - - -def test_isinstance(): - objects = [tuple(), dict(), m.Pet("Polly", "parrot")] + [m.Dog("Molly")] * 4 - expected = (True, True, True, True, True, False, False) - assert m.check_instances(objects) == expected - - -def test_mismatched_holder(): - import re - - with pytest.raises(RuntimeError) as excinfo: - m.mismatched_holder_1() - assert re.match('generic_type: type ".*MismatchDerived1" does not have a non-default ' - 'holder type while its base ".*MismatchBase1" does', str(excinfo.value)) - - with pytest.raises(RuntimeError) as excinfo: - m.mismatched_holder_2() - assert re.match('generic_type: type ".*MismatchDerived2" has a non-default holder type ' - 'while its base ".*MismatchBase2" does not', str(excinfo.value)) - - -def test_override_static(): - """#511: problem with inheritance + overwritten def_static""" - b = m.MyBase.make() - d1 = m.MyDerived.make2() - d2 = m.MyDerived.make() - - assert isinstance(b, m.MyBase) - assert isinstance(d1, m.MyDerived) - assert isinstance(d2, m.MyDerived) - - -def test_implicit_conversion_life_support(): - """Ensure the lifetime of temporary objects created for implicit conversions""" - assert m.implicitly_convert_argument(UserType(5)) == 5 - assert m.implicitly_convert_variable(UserType(5)) == 5 - - assert "outside a bound function" in m.implicitly_convert_variable_fail(UserType(5)) - - -def test_operator_new_delete(capture): - """Tests that class-specific operator new/delete functions are invoked""" - - class SubAliased(m.AliasedHasOpNewDelSize): - pass - - with capture: - a = m.HasOpNewDel() - b = m.HasOpNewDelSize() - d = m.HasOpNewDelBoth() - assert capture == """ - A new 8 - B new 4 - D new 32 - """ - sz_alias = str(m.AliasedHasOpNewDelSize.size_alias) - sz_noalias = str(m.AliasedHasOpNewDelSize.size_noalias) - with capture: - c = m.AliasedHasOpNewDelSize() - c2 = SubAliased() - assert capture == ( - "C new " + sz_noalias + "\n" + - "C new " + sz_alias + "\n" - ) - - with capture: - del a - pytest.gc_collect() - del b - pytest.gc_collect() - del d - pytest.gc_collect() - assert capture == """ - A delete - B delete 4 - D delete - """ - - with capture: - del c - pytest.gc_collect() - del c2 - pytest.gc_collect() - assert capture == ( - "C delete " + sz_noalias + "\n" + - "C delete " + sz_alias + "\n" - ) - - -def test_bind_protected_functions(): - """Expose protected member functions to Python 
using a helper class""" - a = m.ProtectedA() - assert a.foo() == 42 - - b = m.ProtectedB() - assert b.foo() == 42 - - class C(m.ProtectedB): - def __init__(self): - m.ProtectedB.__init__(self) - - def foo(self): - return 0 - - c = C() - assert c.foo() == 0 - - -def test_brace_initialization(): - """ Tests that simple POD classes can be constructed using C++11 brace initialization """ - a = m.BraceInitialization(123, "test") - assert a.field1 == 123 - assert a.field2 == "test" - - # Tests that a non-simple class doesn't get brace initialization (if the - # class defines an initializer_list constructor, in particular, it would - # win over the expected constructor). - b = m.NoBraceInitialization([123, 456]) - assert b.vec == [123, 456] - - -@pytest.mark.xfail("env.PYPY") -def test_class_refcount(): - """Instances must correctly increase/decrease the reference count of their types (#1029)""" - from sys import getrefcount - - class PyDog(m.Dog): - pass - - for cls in m.Dog, PyDog: - refcount_1 = getrefcount(cls) - molly = [cls("Molly") for _ in range(10)] - refcount_2 = getrefcount(cls) - - del molly - pytest.gc_collect() - refcount_3 = getrefcount(cls) - - assert refcount_1 == refcount_3 - assert refcount_2 > refcount_1 - - -def test_reentrant_implicit_conversion_failure(msg): - # ensure that there is no runaway reentrant implicit conversion (#1035) - with pytest.raises(TypeError) as excinfo: - m.BogusImplicitConversion(0) - assert msg(excinfo.value) == ''' - __init__(): incompatible constructor arguments. The following argument types are supported: - 1. m.class_.BogusImplicitConversion(arg0: m.class_.BogusImplicitConversion) - - Invoked with: 0 - ''' - - -def test_error_after_conversions(): - with pytest.raises(TypeError) as exc_info: - m.test_error_after_conversions("hello") - assert str(exc_info.value).startswith( - "Unable to convert function return value to a Python type!") - - -def test_aligned(): - if hasattr(m, "Aligned"): - p = m.Aligned().ptr() - assert p % 1024 == 0 - - -# https://foss.heptapod.net/pypy/pypy/-/issues/2742 -@pytest.mark.xfail("env.PYPY") -def test_final(): - with pytest.raises(TypeError) as exc_info: - class PyFinalChild(m.IsFinal): - pass - assert str(exc_info.value).endswith("is not an acceptable base type") - - -# https://foss.heptapod.net/pypy/pypy/-/issues/2742 -@pytest.mark.xfail("env.PYPY") -def test_non_final_final(): - with pytest.raises(TypeError) as exc_info: - class PyNonFinalFinalChild(m.IsNonFinalFinal): - pass - assert str(exc_info.value).endswith("is not an acceptable base type") - - -# https://github.com/pybind/pybind11/issues/1878 -def test_exception_rvalue_abort(): - with pytest.raises(RuntimeError): - m.PyPrintDestructor().throw_something() diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_embed/test_interpreter.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_embed/test_interpreter.cpp deleted file mode 100644 index 222bd565fbffd6484db09876ae9cceabffcb69cd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_embed/test_interpreter.cpp +++ /dev/null @@ -1,284 +0,0 @@ -#include - -#ifdef _MSC_VER -// Silence MSVC C++17 deprecation warning from Catch regarding std::uncaught_exceptions (up to catch -// 2.0.1; this should be fixed in the next catch release after 2.0.1). 
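// (C4996 is MSVC's "declared deprecated" diagnostic; the pragma below
//  suppresses it for this translation unit so the Catch header compiles
//  cleanly under C++17.)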
-# pragma warning(disable: 4996) -#endif - -#include - -#include -#include -#include - -namespace py = pybind11; -using namespace py::literals; - -class Widget { -public: - Widget(std::string message) : message(message) { } - virtual ~Widget() = default; - - std::string the_message() const { return message; } - virtual int the_answer() const = 0; - -private: - std::string message; -}; - -class PyWidget final : public Widget { - using Widget::Widget; - - int the_answer() const override { PYBIND11_OVERLOAD_PURE(int, Widget, the_answer); } -}; - -PYBIND11_EMBEDDED_MODULE(widget_module, m) { - py::class_(m, "Widget") - .def(py::init()) - .def_property_readonly("the_message", &Widget::the_message); - - m.def("add", [](int i, int j) { return i + j; }); -} - -PYBIND11_EMBEDDED_MODULE(throw_exception, ) { - throw std::runtime_error("C++ Error"); -} - -PYBIND11_EMBEDDED_MODULE(throw_error_already_set, ) { - auto d = py::dict(); - d["missing"].cast(); -} - -TEST_CASE("Pass classes and data between modules defined in C++ and Python") { - auto module = py::module::import("test_interpreter"); - REQUIRE(py::hasattr(module, "DerivedWidget")); - - auto locals = py::dict("hello"_a="Hello, World!", "x"_a=5, **module.attr("__dict__")); - py::exec(R"( - widget = DerivedWidget("{} - {}".format(hello, x)) - message = widget.the_message - )", py::globals(), locals); - REQUIRE(locals["message"].cast() == "Hello, World! - 5"); - - auto py_widget = module.attr("DerivedWidget")("The question"); - auto message = py_widget.attr("the_message"); - REQUIRE(message.cast() == "The question"); - - const auto &cpp_widget = py_widget.cast(); - REQUIRE(cpp_widget.the_answer() == 42); -} - -TEST_CASE("Import error handling") { - REQUIRE_NOTHROW(py::module::import("widget_module")); - REQUIRE_THROWS_WITH(py::module::import("throw_exception"), - "ImportError: C++ Error"); - REQUIRE_THROWS_WITH(py::module::import("throw_error_already_set"), - Catch::Contains("ImportError: KeyError")); -} - -TEST_CASE("There can be only one interpreter") { - static_assert(std::is_move_constructible::value, ""); - static_assert(!std::is_move_assignable::value, ""); - static_assert(!std::is_copy_constructible::value, ""); - static_assert(!std::is_copy_assignable::value, ""); - - REQUIRE_THROWS_WITH(py::initialize_interpreter(), "The interpreter is already running"); - REQUIRE_THROWS_WITH(py::scoped_interpreter(), "The interpreter is already running"); - - py::finalize_interpreter(); - REQUIRE_NOTHROW(py::scoped_interpreter()); - { - auto pyi1 = py::scoped_interpreter(); - auto pyi2 = std::move(pyi1); - } - py::initialize_interpreter(); -} - -bool has_pybind11_internals_builtin() { - auto builtins = py::handle(PyEval_GetBuiltins()); - return builtins.contains(PYBIND11_INTERNALS_ID); -}; - -bool has_pybind11_internals_static() { - auto **&ipp = py::detail::get_internals_pp(); - return ipp && *ipp; -} - -TEST_CASE("Restart the interpreter") { - // Verify pre-restart state. - REQUIRE(py::module::import("widget_module").attr("add")(1, 2).cast() == 3); - REQUIRE(has_pybind11_internals_builtin()); - REQUIRE(has_pybind11_internals_static()); - REQUIRE(py::module::import("external_module").attr("A")(123).attr("value").cast() == 123); - - // local and foreign module internals should point to the same internals: - REQUIRE(reinterpret_cast(*py::detail::get_internals_pp()) == - py::module::import("external_module").attr("internals_at")().cast()); - - // Restart the interpreter. 
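    // The restart is just the finalize/initialize pair below; the checks that
    // follow verify that pybind11's cached internals are dropped on finalize
    // and lazily recreated (and shared with external_module) afterwards.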
- py::finalize_interpreter(); - REQUIRE(Py_IsInitialized() == 0); - - py::initialize_interpreter(); - REQUIRE(Py_IsInitialized() == 1); - - // Internals are deleted after a restart. - REQUIRE_FALSE(has_pybind11_internals_builtin()); - REQUIRE_FALSE(has_pybind11_internals_static()); - pybind11::detail::get_internals(); - REQUIRE(has_pybind11_internals_builtin()); - REQUIRE(has_pybind11_internals_static()); - REQUIRE(reinterpret_cast(*py::detail::get_internals_pp()) == - py::module::import("external_module").attr("internals_at")().cast()); - - // Make sure that an interpreter with no get_internals() created until finalize still gets the - // internals destroyed - py::finalize_interpreter(); - py::initialize_interpreter(); - bool ran = false; - py::module::import("__main__").attr("internals_destroy_test") = - py::capsule(&ran, [](void *ran) { py::detail::get_internals(); *static_cast(ran) = true; }); - REQUIRE_FALSE(has_pybind11_internals_builtin()); - REQUIRE_FALSE(has_pybind11_internals_static()); - REQUIRE_FALSE(ran); - py::finalize_interpreter(); - REQUIRE(ran); - py::initialize_interpreter(); - REQUIRE_FALSE(has_pybind11_internals_builtin()); - REQUIRE_FALSE(has_pybind11_internals_static()); - - // C++ modules can be reloaded. - auto cpp_module = py::module::import("widget_module"); - REQUIRE(cpp_module.attr("add")(1, 2).cast() == 3); - - // C++ type information is reloaded and can be used in python modules. - auto py_module = py::module::import("test_interpreter"); - auto py_widget = py_module.attr("DerivedWidget")("Hello after restart"); - REQUIRE(py_widget.attr("the_message").cast() == "Hello after restart"); -} - -TEST_CASE("Subinterpreter") { - // Add tags to the modules in the main interpreter and test the basics. - py::module::import("__main__").attr("main_tag") = "main interpreter"; - { - auto m = py::module::import("widget_module"); - m.attr("extension_module_tag") = "added to module in main interpreter"; - - REQUIRE(m.attr("add")(1, 2).cast() == 3); - } - REQUIRE(has_pybind11_internals_builtin()); - REQUIRE(has_pybind11_internals_static()); - - /// Create and switch to a subinterpreter. - auto main_tstate = PyThreadState_Get(); - auto sub_tstate = Py_NewInterpreter(); - - // Subinterpreters get their own copy of builtins. detail::get_internals() still - // works by returning from the static variable, i.e. all interpreters share a single - // global pybind11::internals; - REQUIRE_FALSE(has_pybind11_internals_builtin()); - REQUIRE(has_pybind11_internals_static()); - - // Modules tags should be gone. - REQUIRE_FALSE(py::hasattr(py::module::import("__main__"), "tag")); - { - auto m = py::module::import("widget_module"); - REQUIRE_FALSE(py::hasattr(m, "extension_module_tag")); - - // Function bindings should still work. - REQUIRE(m.attr("add")(1, 2).cast() == 3); - } - - // Restore main interpreter. - Py_EndInterpreter(sub_tstate); - PyThreadState_Swap(main_tstate); - - REQUIRE(py::hasattr(py::module::import("__main__"), "main_tag")); - REQUIRE(py::hasattr(py::module::import("widget_module"), "extension_module_tag")); -} - -TEST_CASE("Execution frame") { - // When the interpreter is embedded, there is no execution frame, but `py::exec` - // should still function by using reasonable globals: `__main__.__dict__`. 
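    // (py::exec's globals argument defaults to py::globals(), which is why the
    //  py::globals()["var"] lookup below can see the assignment.)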
- py::exec("var = dict(number=42)"); - REQUIRE(py::globals()["var"]["number"].cast() == 42); -} - -TEST_CASE("Threads") { - // Restart interpreter to ensure threads are not initialized - py::finalize_interpreter(); - py::initialize_interpreter(); - REQUIRE_FALSE(has_pybind11_internals_static()); - - constexpr auto num_threads = 10; - auto locals = py::dict("count"_a=0); - - { - py::gil_scoped_release gil_release{}; - REQUIRE(has_pybind11_internals_static()); - - auto threads = std::vector(); - for (auto i = 0; i < num_threads; ++i) { - threads.emplace_back([&]() { - py::gil_scoped_acquire gil{}; - locals["count"] = locals["count"].cast() + 1; - }); - } - - for (auto &thread : threads) { - thread.join(); - } - } - - REQUIRE(locals["count"].cast() == num_threads); -} - -// Scope exit utility https://stackoverflow.com/a/36644501/7255855 -struct scope_exit { - std::function f_; - explicit scope_exit(std::function f) noexcept : f_(std::move(f)) {} - ~scope_exit() { if (f_) f_(); } -}; - -TEST_CASE("Reload module from file") { - // Disable generation of cached bytecode (.pyc files) for this test, otherwise - // Python might pick up an old version from the cache instead of the new versions - // of the .py files generated below - auto sys = py::module::import("sys"); - bool dont_write_bytecode = sys.attr("dont_write_bytecode").cast(); - sys.attr("dont_write_bytecode") = true; - // Reset the value at scope exit - scope_exit reset_dont_write_bytecode([&]() { - sys.attr("dont_write_bytecode") = dont_write_bytecode; - }); - - std::string module_name = "test_module_reload"; - std::string module_file = module_name + ".py"; - - // Create the module .py file - std::ofstream test_module(module_file); - test_module << "def test():\n"; - test_module << " return 1\n"; - test_module.close(); - // Delete the file at scope exit - scope_exit delete_module_file([&]() { - std::remove(module_file.c_str()); - }); - - // Import the module from file - auto module = py::module::import(module_name.c_str()); - int result = module.attr("test")().cast(); - REQUIRE(result == 1); - - // Update the module .py file with a small change - test_module.open(module_file); - test_module << "def test():\n"; - test_module << " return 2\n"; - test_module.close(); - - // Reload the module - module.reload(); - result = module.attr("test")().cast(); - REQUIRE(result == 2); -} diff --git a/spaces/CVPR/LIVE/thrust/cub/cmake/cub-config.cmake b/spaces/CVPR/LIVE/thrust/cub/cmake/cub-config.cmake deleted file mode 100644 index 0900becd8fbcff9ee791c9b990ed2bf82e26f220..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/cub/cmake/cub-config.cmake +++ /dev/null @@ -1,62 +0,0 @@ -# -# find_package(CUB) config file. -# -# Defines a CUB::CUB target that may be linked from user projects to include -# CUB. - -if (TARGET CUB::CUB) - return() -endif() - -function(_cub_declare_interface_alias alias_name ugly_name) - # 1) Only IMPORTED and ALIAS targets can be placed in a namespace. - # 2) When an IMPORTED library is linked to another target, its include - # directories are treated as SYSTEM includes. - # 3) nvcc will automatically check the CUDA Toolkit include path *before* the - # system includes. This means that the Toolkit CUB will *always* be used - # during compilation, and the include paths of an IMPORTED CUB::CUB - # target will never have any effect. - # 4) This behavior can be fixed by setting the property NO_SYSTEM_FROM_IMPORTED - # on EVERY target that links to CUB::CUB. 
This would be a burden and a - # footgun for our users. Forgetting this would silently pull in the wrong CUB! - # 5) A workaround is to make a non-IMPORTED library outside of the namespace, - # configure it, and then ALIAS it into the namespace (or ALIAS and then - # configure, that seems to work too). - add_library(${ugly_name} INTERFACE) - add_library(${alias_name} ALIAS ${ugly_name}) -endfunction() - -# -# Setup targets -# - -_cub_declare_interface_alias(CUB::CUB _CUB_CUB) -# Strip out the 'cub/cmake/' from 'cub/cmake/cub-config.cmake': -get_filename_component(_CUB_INCLUDE_DIR "../.." ABSOLUTE BASE_DIR "${CMAKE_CURRENT_LIST_DIR}") -target_include_directories(_CUB_CUB INTERFACE "${_CUB_INCLUDE_DIR}") - -if (CUB_IGNORE_DEPRECATED_CPP_DIALECT OR - THRUST_IGNORE_DEPRECATED_CPP_DIALECT) - target_compile_definitions(_CUB_CUB INTERFACE "CUB_IGNORE_DEPRECATED_CPP_DIALECT") -endif() - -if (CUB_IGNORE_DEPRECATED_CPP_11 OR - THRUST_IGNORE_DEPRECATED_CPP_11) - target_compile_definitions(_CUB_CUB INTERFACE "CUB_IGNORE_DEPRECATED_CPP_11") -endif() - -if (CUB_IGNORE_DEPRECATED_COMPILER OR - THRUST_IGNORE_DEPRECATED_COMPILER) - target_compile_definitions(_CUB_CUB INTERFACE "CUB_IGNORE_DEPRECATED_COMPILER") -endif() - -# -# Standardize version info -# - -set(CUB_VERSION ${${CMAKE_FIND_PACKAGE_NAME}_VERSION} CACHE INTERNAL "") -set(CUB_VERSION_MAJOR ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_MAJOR} CACHE INTERNAL "") -set(CUB_VERSION_MINOR ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_MINOR} CACHE INTERNAL "") -set(CUB_VERSION_PATCH ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_PATCH} CACHE INTERNAL "") -set(CUB_VERSION_TWEAK ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_TWEAK} CACHE INTERNAL "") -set(CUB_VERSION_COUNT ${${CMAKE_FIND_PACKAGE_NAME}_VERSION_COUNT} CACHE INTERNAL "") diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/caching_allocator.h b/spaces/CVPR/LIVE/thrust/thrust/detail/caching_allocator.h deleted file mode 100644 index bb98f815f70aae67f1a89c98b548b420331c1062..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/caching_allocator.h +++ /dev/null @@ -1,45 +0,0 @@ -/* - * Copyright 2020 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include -#include -#include - -namespace thrust -{ -namespace detail -{ -inline -thrust::mr::allocator< - char, - thrust::mr::disjoint_unsynchronized_pool_resource< - thrust::device_memory_resource, - thrust::mr::new_delete_resource - > -> single_device_tls_caching_allocator() -{ - return { - &thrust::mr::tls_disjoint_pool( - thrust::mr::get_global_resource(), - thrust::mr::get_global_resource() - ) - }; -} -} -} diff --git a/spaces/CVPR/WALT/mmdet/core/mask/structures.py b/spaces/CVPR/WALT/mmdet/core/mask/structures.py deleted file mode 100644 index f7e7ab8620b9f21710fc8a61bdaaec20d96e5c20..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/mask/structures.py +++ /dev/null @@ -1,1042 +0,0 @@ -from abc import ABCMeta, abstractmethod - -import cv2 -import mmcv -import numpy as np -import pycocotools.mask as maskUtils -import torch -from mmcv.ops.roi_align import roi_align - - -class BaseInstanceMasks(metaclass=ABCMeta): - """Base class for instance masks.""" - - @abstractmethod - def rescale(self, scale, interpolation='nearest'): - """Rescale masks as large as possible while keeping the aspect ratio. - For details can refer to `mmcv.imrescale`. - - Args: - scale (tuple[int]): The maximum size (h, w) of rescaled mask. - interpolation (str): Same as :func:`mmcv.imrescale`. - - Returns: - BaseInstanceMasks: The rescaled masks. - """ - - @abstractmethod - def resize(self, out_shape, interpolation='nearest'): - """Resize masks to the given out_shape. - - Args: - out_shape: Target (h, w) of resized mask. - interpolation (str): See :func:`mmcv.imresize`. - - Returns: - BaseInstanceMasks: The resized masks. - """ - - @abstractmethod - def flip(self, flip_direction='horizontal'): - """Flip masks alone the given direction. - - Args: - flip_direction (str): Either 'horizontal' or 'vertical'. - - Returns: - BaseInstanceMasks: The flipped masks. - """ - - @abstractmethod - def pad(self, out_shape, pad_val): - """Pad masks to the given size of (h, w). - - Args: - out_shape (tuple[int]): Target (h, w) of padded mask. - pad_val (int): The padded value. - - Returns: - BaseInstanceMasks: The padded masks. - """ - - @abstractmethod - def crop(self, bbox): - """Crop each mask by the given bbox. - - Args: - bbox (ndarray): Bbox in format [x1, y1, x2, y2], shape (4, ). - - Return: - BaseInstanceMasks: The cropped masks. - """ - - @abstractmethod - def crop_and_resize(self, - bboxes, - out_shape, - inds, - device, - interpolation='bilinear'): - """Crop and resize masks by the given bboxes. - - This function is mainly used in mask targets computation. - It firstly align mask to bboxes by assigned_inds, then crop mask by the - assigned bbox and resize to the size of (mask_h, mask_w) - - Args: - bboxes (Tensor): Bboxes in format [x1, y1, x2, y2], shape (N, 4) - out_shape (tuple[int]): Target (h, w) of resized mask - inds (ndarray): Indexes to assign masks to each bbox, - shape (N,) and values should be between [0, num_masks - 1]. - device (str): Device of bboxes - interpolation (str): See `mmcv.imresize` - - Return: - BaseInstanceMasks: the cropped and resized masks. - """ - - @abstractmethod - def expand(self, expanded_h, expanded_w, top, left): - """see :class:`Expand`.""" - - @property - @abstractmethod - def areas(self): - """ndarray: areas of each instance.""" - - @abstractmethod - def to_ndarray(self): - """Convert masks to the format of ndarray. - - Return: - ndarray: Converted masks in the format of ndarray. 
- """ - - @abstractmethod - def to_tensor(self, dtype, device): - """Convert masks to the format of Tensor. - - Args: - dtype (str): Dtype of converted mask. - device (torch.device): Device of converted masks. - - Returns: - Tensor: Converted masks in the format of Tensor. - """ - - @abstractmethod - def translate(self, - out_shape, - offset, - direction='horizontal', - fill_val=0, - interpolation='bilinear'): - """Translate the masks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - offset (int | float): The offset for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - fill_val (int | float): Border value. Default 0. - interpolation (str): Same as :func:`mmcv.imtranslate`. - - Returns: - Translated masks. - """ - - def shear(self, - out_shape, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear the masks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - magnitude (int | float): The magnitude used for shear. - direction (str): The shear direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. Default 0. - interpolation (str): Same as in :func:`mmcv.imshear`. - - Returns: - ndarray: Sheared masks. - """ - - @abstractmethod - def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0): - """Rotate the masks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - angle (int | float): Rotation angle in degrees. Positive values - mean counter-clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the - rotation in source image. If not specified, the center of - the image will be used. - scale (int | float): Isotropic scale factor. - fill_val (int | float): Border value. Default 0 for masks. - - Returns: - Rotated masks. - """ - - -class BitmapMasks(BaseInstanceMasks): - """This class represents masks in the form of bitmaps. - - Args: - masks (ndarray): ndarray of masks in shape (N, H, W), where N is - the number of objects. - height (int): height of masks - width (int): width of masks - - Example: - >>> from mmdet.core.mask.structures import * # NOQA - >>> num_masks, H, W = 3, 32, 32 - >>> rng = np.random.RandomState(0) - >>> masks = (rng.rand(num_masks, H, W) > 0.1).astype(np.int) - >>> self = BitmapMasks(masks, height=H, width=W) - - >>> # demo crop_and_resize - >>> num_boxes = 5 - >>> bboxes = np.array([[0, 0, 30, 10.0]] * num_boxes) - >>> out_shape = (14, 14) - >>> inds = torch.randint(0, len(self), size=(num_boxes,)) - >>> device = 'cpu' - >>> interpolation = 'bilinear' - >>> new = self.crop_and_resize( - ... bboxes, out_shape, inds, device, interpolation) - >>> assert len(new) == num_boxes - >>> assert new.height, new.width == out_shape - """ - - def __init__(self, masks, height, width): - self.height = height - self.width = width - if len(masks) == 0: - self.masks = np.empty((0, self.height, self.width), dtype=np.uint8) - else: - assert isinstance(masks, (list, np.ndarray)) - if isinstance(masks, list): - assert isinstance(masks[0], np.ndarray) - assert masks[0].ndim == 2 # (H, W) - else: - assert masks.ndim == 3 or masks.ndim == 4# (N, H, W) - - self.masks = np.stack(masks).reshape(-1, height, width) - assert self.masks.shape[1] == self.height - assert self.masks.shape[2] == self.width - - def __getitem__(self, index): - """Index the BitmapMask. - - Args: - index (int | ndarray): Indices in the format of integer or ndarray. 
- - Returns: - :obj:`BitmapMasks`: Indexed bitmap masks. - """ - try: - masks = self.masks[index].reshape(-1, self.height, self.width) - except: - masks = self.masks[index].reshape(-1, self.height, self.width) - - return BitmapMasks(masks, self.height, self.width) - - def __iter__(self): - return iter(self.masks) - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += f'num_masks={len(self.masks)}, ' - s += f'height={self.height}, ' - s += f'width={self.width})' - return s - - def __len__(self): - """Number of masks.""" - return len(self.masks) - - def rescale(self, scale, interpolation='nearest'): - """See :func:`BaseInstanceMasks.rescale`.""" - if len(self.masks) == 0: - new_w, new_h = mmcv.rescale_size((self.width, self.height), scale) - rescaled_masks = np.empty((0, new_h, new_w), dtype=np.uint8) - else: - rescaled_masks = np.stack([ - mmcv.imrescale(mask, scale, interpolation=interpolation) - for mask in self.masks - ]) - height, width = rescaled_masks.shape[1:] - return BitmapMasks(rescaled_masks, height, width) - - def resize(self, out_shape, interpolation='nearest'): - """See :func:`BaseInstanceMasks.resize`.""" - if len(self.masks) == 0: - resized_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - resized_masks = np.stack([ - mmcv.imresize( - mask, out_shape[::-1], interpolation=interpolation) - for mask in self.masks - ]) - return BitmapMasks(resized_masks, *out_shape) - - def flip(self, flip_direction='horizontal'): - """See :func:`BaseInstanceMasks.flip`.""" - assert flip_direction in ('horizontal', 'vertical', 'diagonal') - - if len(self.masks) == 0: - flipped_masks = self.masks - else: - flipped_masks = np.stack([ - mmcv.imflip(mask, direction=flip_direction) - for mask in self.masks - ]) - return BitmapMasks(flipped_masks, self.height, self.width) - - def pad(self, out_shape, pad_val=0): - """See :func:`BaseInstanceMasks.pad`.""" - if len(self.masks) == 0: - padded_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - padded_masks = np.stack([ - mmcv.impad(mask, shape=out_shape, pad_val=pad_val) - for mask in self.masks - ]) - return BitmapMasks(padded_masks, *out_shape) - - def crop(self, bbox): - """See :func:`BaseInstanceMasks.crop`.""" - assert isinstance(bbox, np.ndarray) - assert bbox.ndim == 1 - - # clip the boundary - bbox = bbox.copy() - bbox[0::2] = np.clip(bbox[0::2], 0, self.width) - bbox[1::2] = np.clip(bbox[1::2], 0, self.height) - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1, 1) - h = np.maximum(y2 - y1, 1) - - if len(self.masks) == 0: - cropped_masks = np.empty((0, h, w), dtype=np.uint8) - else: - cropped_masks = self.masks[:, y1:y1 + h, x1:x1 + w] - return BitmapMasks(cropped_masks, h, w) - - def crop_and_resize(self, - bboxes, - out_shape, - inds, - device='cpu', - interpolation='bilinear'): - """See :func:`BaseInstanceMasks.crop_and_resize`.""" - if len(self.masks) == 0: - empty_masks = np.empty((0, *out_shape), dtype=np.uint8) - return BitmapMasks(empty_masks, *out_shape) - - # convert bboxes to tensor - if isinstance(bboxes, np.ndarray): - bboxes = torch.from_numpy(bboxes).to(device=device) - if isinstance(inds, np.ndarray): - inds = torch.from_numpy(inds).to(device=device) - - num_bbox = bboxes.shape[0] - fake_inds = torch.arange( - num_bbox, device=device).to(dtype=bboxes.dtype)[:, None] - rois = torch.cat([fake_inds, bboxes], dim=1) # Nx5 - rois = rois.to(device=device) - if num_bbox > 0: - #masks_vis = (self.masks == 1) - masks_vis = (self.masks > 0) - gt_masks_th = torch.from_numpy(masks_vis).to(device).index_select( - 0, 
inds).to(dtype=rois.dtype) - targets = roi_align(gt_masks_th[:, None, :, :], rois, out_shape, - 1.0, 0, 'avg', True).squeeze(1) - targets = targets.cpu().numpy().astype(int) - resized_masks_vis = (targets > 0.5) - - #masks_full = (self.masks > 0) - masks_full = (self.masks == 2) - #masks_occ = (self.masks == 2) - gt_masks_th = torch.from_numpy(masks_full).to(device).index_select( - 0, inds).to(dtype=rois.dtype) - targets = roi_align(gt_masks_th[:, None, :, :], rois, out_shape, - 1.0, 0, 'avg', True).squeeze(1) - targets = targets.cpu().numpy().astype(int) - resized_masks_full = (targets > 0.5) - resized_masks = np.stack([resized_masks_vis,resized_masks_full],axis=1) - else: - resized_masks = [] - return BitmapMasks(resized_masks, *out_shape) - - def expand(self, expanded_h, expanded_w, top, left): - """See :func:`BaseInstanceMasks.expand`.""" - if len(self.masks) == 0: - expanded_mask = np.empty((0, expanded_h, expanded_w), - dtype=np.uint8) - else: - expanded_mask = np.zeros((len(self), expanded_h, expanded_w), - dtype=np.uint8) - expanded_mask[:, top:top + self.height, - left:left + self.width] = self.masks - return BitmapMasks(expanded_mask, expanded_h, expanded_w) - - def translate(self, - out_shape, - offset, - direction='horizontal', - fill_val=0, - interpolation='bilinear'): - """Translate the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - offset (int | float): The offset for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - fill_val (int | float): Border value. Default 0 for masks. - interpolation (str): Same as :func:`mmcv.imtranslate`. - - Returns: - BitmapMasks: Translated BitmapMasks. - - Example: - >>> from mmdet.core.mask.structures import BitmapMasks - >>> self = BitmapMasks.random(dtype=np.uint8) - >>> out_shape = (32, 32) - >>> offset = 4 - >>> direction = 'horizontal' - >>> fill_val = 0 - >>> interpolation = 'bilinear' - >>> # Note, There seem to be issues when: - >>> # * out_shape is different than self's shape - >>> # * the mask dtype is not supported by cv2.AffineWarp - >>> new = self.translate(out_shape, offset, direction, fill_val, - >>> interpolation) - >>> assert len(new) == len(self) - >>> assert new.height, new.width == out_shape - """ - if len(self.masks) == 0: - translated_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - translated_masks = mmcv.imtranslate( - self.masks.transpose((1, 2, 0)), - offset, - direction, - border_value=fill_val, - interpolation=interpolation) - if translated_masks.ndim == 2: - translated_masks = translated_masks[:, :, None] - translated_masks = translated_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(translated_masks, *out_shape) - - def shear(self, - out_shape, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - magnitude (int | float): The magnitude used for shear. - direction (str): The shear direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as in :func:`mmcv.imshear`. - - Returns: - BitmapMasks: The sheared masks. 
- """ - if len(self.masks) == 0: - sheared_masks = np.empty((0, *out_shape), dtype=np.uint8) - else: - sheared_masks = mmcv.imshear( - self.masks.transpose((1, 2, 0)), - magnitude, - direction, - border_value=border_value, - interpolation=interpolation) - if sheared_masks.ndim == 2: - sheared_masks = sheared_masks[:, :, None] - sheared_masks = sheared_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(sheared_masks, *out_shape) - - def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0): - """Rotate the BitmapMasks. - - Args: - out_shape (tuple[int]): Shape for output mask, format (h, w). - angle (int | float): Rotation angle in degrees. Positive values - mean counter-clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the - rotation in source image. If not specified, the center of - the image will be used. - scale (int | float): Isotropic scale factor. - fill_val (int | float): Border value. Default 0 for masks. - - Returns: - BitmapMasks: Rotated BitmapMasks. - """ - if len(self.masks) == 0: - rotated_masks = np.empty((0, *out_shape), dtype=self.masks.dtype) - else: - rotated_masks = mmcv.imrotate( - self.masks.transpose((1, 2, 0)), - angle, - center=center, - scale=scale, - border_value=fill_val) - if rotated_masks.ndim == 2: - # case when only one mask, (h, w) - rotated_masks = rotated_masks[:, :, None] # (h, w, 1) - rotated_masks = rotated_masks.transpose( - (2, 0, 1)).astype(self.masks.dtype) - return BitmapMasks(rotated_masks, *out_shape) - - @property - def areas(self): - """See :py:attr:`BaseInstanceMasks.areas`.""" - return self.masks.sum((1, 2)) - - def to_ndarray(self): - """See :func:`BaseInstanceMasks.to_ndarray`.""" - return self.masks - - def to_tensor(self, dtype, device): - """See :func:`BaseInstanceMasks.to_tensor`.""" - return torch.tensor(self.masks, dtype=dtype, device=device) - - @classmethod - def random(cls, - num_masks=3, - height=32, - width=32, - dtype=np.uint8, - rng=None): - """Generate random bitmap masks for demo / testing purposes. - - Example: - >>> from mmdet.core.mask.structures import BitmapMasks - >>> self = BitmapMasks.random() - >>> print('self = {}'.format(self)) - self = BitmapMasks(num_masks=3, height=32, width=32) - """ - from mmdet.utils.util_random import ensure_rng - rng = ensure_rng(rng) - masks = (rng.rand(num_masks, height, width) > 0.1).astype(dtype) - self = cls(masks, height=height, width=width) - return self - - -class PolygonMasks(BaseInstanceMasks): - """This class represents masks in the form of polygons. - - Polygons is a list of three levels. 
The first level of the list - corresponds to objects, the second level to the polys that compose the - object, the third level to the poly coordinates - - Args: - masks (list[list[ndarray]]): The first level of the list - corresponds to objects, the second level to the polys that - compose the object, the third level to the poly coordinates - height (int): height of masks - width (int): width of masks - - Example: - >>> from mmdet.core.mask.structures import * # NOQA - >>> masks = [ - >>> [ np.array([0, 0, 10, 0, 10, 10., 0, 10, 0, 0]) ] - >>> ] - >>> height, width = 16, 16 - >>> self = PolygonMasks(masks, height, width) - - >>> # demo translate - >>> new = self.translate((16, 16), 4., direction='horizontal') - >>> assert np.all(new.masks[0][0][1::2] == masks[0][0][1::2]) - >>> assert np.all(new.masks[0][0][0::2] == masks[0][0][0::2] + 4) - - >>> # demo crop_and_resize - >>> num_boxes = 3 - >>> bboxes = np.array([[0, 0, 30, 10.0]] * num_boxes) - >>> out_shape = (16, 16) - >>> inds = torch.randint(0, len(self), size=(num_boxes,)) - >>> device = 'cpu' - >>> interpolation = 'bilinear' - >>> new = self.crop_and_resize( - ... bboxes, out_shape, inds, device, interpolation) - >>> assert len(new) == num_boxes - >>> assert new.height, new.width == out_shape - """ - - def __init__(self, masks, height, width): - assert isinstance(masks, list) - if len(masks) > 0: - assert isinstance(masks[0], list) - assert isinstance(masks[0][0], np.ndarray) - - self.height = height - self.width = width - self.masks = masks - - def __getitem__(self, index): - """Index the polygon masks. - - Args: - index (ndarray | List): The indices. - - Returns: - :obj:`PolygonMasks`: The indexed polygon masks. - """ - if isinstance(index, np.ndarray): - index = index.tolist() - if isinstance(index, list): - masks = [self.masks[i] for i in index] - else: - try: - masks = self.masks[index] - except Exception: - raise ValueError( - f'Unsupported input of type {type(index)} for indexing!') - if len(masks) and isinstance(masks[0], np.ndarray): - masks = [masks] # ensure a list of three levels - return PolygonMasks(masks, self.height, self.width) - - def __iter__(self): - return iter(self.masks) - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += f'num_masks={len(self.masks)}, ' - s += f'height={self.height}, ' - s += f'width={self.width})' - return s - - def __len__(self): - """Number of masks.""" - return len(self.masks) - - def rescale(self, scale, interpolation=None): - """see :func:`BaseInstanceMasks.rescale`""" - new_w, new_h = mmcv.rescale_size((self.width, self.height), scale) - if len(self.masks) == 0: - rescaled_masks = PolygonMasks([], new_h, new_w) - else: - rescaled_masks = self.resize((new_h, new_w)) - return rescaled_masks - - def resize(self, out_shape, interpolation=None): - """see :func:`BaseInstanceMasks.resize`""" - if len(self.masks) == 0: - resized_masks = PolygonMasks([], *out_shape) - else: - h_scale = out_shape[0] / self.height - w_scale = out_shape[1] / self.width - resized_masks = [] - for poly_per_obj in self.masks: - resized_poly = [] - for p in poly_per_obj: - p = p.copy() - p[0::2] *= w_scale - p[1::2] *= h_scale - resized_poly.append(p) - resized_masks.append(resized_poly) - resized_masks = PolygonMasks(resized_masks, *out_shape) - return resized_masks - - def flip(self, flip_direction='horizontal'): - """see :func:`BaseInstanceMasks.flip`""" - assert flip_direction in ('horizontal', 'vertical', 'diagonal') - if len(self.masks) == 0: - flipped_masks = PolygonMasks([], self.height, 
self.width) - else: - flipped_masks = [] - for poly_per_obj in self.masks: - flipped_poly_per_obj = [] - for p in poly_per_obj: - p = p.copy() - if flip_direction == 'horizontal': - p[0::2] = self.width - p[0::2] - elif flip_direction == 'vertical': - p[1::2] = self.height - p[1::2] - else: - p[0::2] = self.width - p[0::2] - p[1::2] = self.height - p[1::2] - flipped_poly_per_obj.append(p) - flipped_masks.append(flipped_poly_per_obj) - flipped_masks = PolygonMasks(flipped_masks, self.height, - self.width) - return flipped_masks - - def crop(self, bbox): - """see :func:`BaseInstanceMasks.crop`""" - assert isinstance(bbox, np.ndarray) - assert bbox.ndim == 1 - - # clip the boundary - bbox = bbox.copy() - bbox[0::2] = np.clip(bbox[0::2], 0, self.width) - bbox[1::2] = np.clip(bbox[1::2], 0, self.height) - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1, 1) - h = np.maximum(y2 - y1, 1) - - if len(self.masks) == 0: - cropped_masks = PolygonMasks([], h, w) - else: - cropped_masks = [] - for poly_per_obj in self.masks: - cropped_poly_per_obj = [] - for p in poly_per_obj: - # pycocotools will clip the boundary - p = p.copy() - p[0::2] -= bbox[0] - p[1::2] -= bbox[1] - cropped_poly_per_obj.append(p) - cropped_masks.append(cropped_poly_per_obj) - cropped_masks = PolygonMasks(cropped_masks, h, w) - return cropped_masks - - def pad(self, out_shape, pad_val=0): - """padding has no effect on polygons`""" - return PolygonMasks(self.masks, *out_shape) - - def expand(self, *args, **kwargs): - """TODO: Add expand for polygon""" - raise NotImplementedError - - def crop_and_resize(self, - bboxes, - out_shape, - inds, - device='cpu', - interpolation='bilinear'): - """see :func:`BaseInstanceMasks.crop_and_resize`""" - out_h, out_w = out_shape - if len(self.masks) == 0: - return PolygonMasks([], out_h, out_w) - - resized_masks = [] - for i in range(len(bboxes)): - mask = self.masks[inds[i]] - bbox = bboxes[i, :] - x1, y1, x2, y2 = bbox - w = np.maximum(x2 - x1, 1) - h = np.maximum(y2 - y1, 1) - h_scale = out_h / max(h, 0.1) # avoid too large scale - w_scale = out_w / max(w, 0.1) - - resized_mask = [] - for p in mask: - p = p.copy() - # crop - # pycocotools will clip the boundary - p[0::2] -= bbox[0] - p[1::2] -= bbox[1] - - # resize - p[0::2] *= w_scale - p[1::2] *= h_scale - resized_mask.append(p) - resized_masks.append(resized_mask) - return PolygonMasks(resized_masks, *out_shape) - - def translate(self, - out_shape, - offset, - direction='horizontal', - fill_val=None, - interpolation=None): - """Translate the PolygonMasks. - - Example: - >>> self = PolygonMasks.random(dtype=np.int) - >>> out_shape = (self.height, self.width) - >>> new = self.translate(out_shape, 4., direction='horizontal') - >>> assert np.all(new.masks[0][0][1::2] == self.masks[0][0][1::2]) - >>> assert np.all(new.masks[0][0][0::2] == self.masks[0][0][0::2] + 4) # noqa: E501 - """ - assert fill_val is None or fill_val == 0, 'Here fill_val is not '\ - f'used, and defaultly should be None or 0. got {fill_val}.' 
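The polygon methods here (flip, crop, crop_and_resize, translate) all rely on the same convention: each component polygon is a flat `[x0, y0, x1, y1, ...]` array, so the x coordinates are addressed as `p[0::2]` and the y coordinates as `p[1::2]`. A standalone numpy sketch of that slice arithmetic, independent of mmdet, with illustrative sizes:

```python
# Standalone sketch of the coordinate convention used by the PolygonMasks methods.
import numpy as np

width, height = 16, 16
p = np.array([0, 0, 10, 0, 10, 10, 0, 10], dtype=np.float32)  # flat [x, y, x, y, ...]

# Horizontal flip mirrors only the x coordinates around the image width.
flipped = p.copy()
flipped[0::2] = width - flipped[0::2]

# Horizontal translate shifts x and clips to the output canvas, as in translate().
translated = p.copy()
translated[0::2] = np.clip(translated[0::2] + 4.0, 0, width)

print(flipped.reshape(-1, 2))
print(translated.reshape(-1, 2))
```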
- if len(self.masks) == 0: - translated_masks = PolygonMasks([], *out_shape) - else: - translated_masks = [] - for poly_per_obj in self.masks: - translated_poly_per_obj = [] - for p in poly_per_obj: - p = p.copy() - if direction == 'horizontal': - p[0::2] = np.clip(p[0::2] + offset, 0, out_shape[1]) - elif direction == 'vertical': - p[1::2] = np.clip(p[1::2] + offset, 0, out_shape[0]) - translated_poly_per_obj.append(p) - translated_masks.append(translated_poly_per_obj) - translated_masks = PolygonMasks(translated_masks, *out_shape) - return translated_masks - - def shear(self, - out_shape, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """See :func:`BaseInstanceMasks.shear`.""" - if len(self.masks) == 0: - sheared_masks = PolygonMasks([], *out_shape) - else: - sheared_masks = [] - if direction == 'horizontal': - shear_matrix = np.stack([[1, magnitude], - [0, 1]]).astype(np.float32) - elif direction == 'vertical': - shear_matrix = np.stack([[1, 0], [magnitude, - 1]]).astype(np.float32) - for poly_per_obj in self.masks: - sheared_poly = [] - for p in poly_per_obj: - p = np.stack([p[0::2], p[1::2]], axis=0) # [2, n] - new_coords = np.matmul(shear_matrix, p) # [2, n] - new_coords[0, :] = np.clip(new_coords[0, :], 0, - out_shape[1]) - new_coords[1, :] = np.clip(new_coords[1, :], 0, - out_shape[0]) - sheared_poly.append( - new_coords.transpose((1, 0)).reshape(-1)) - sheared_masks.append(sheared_poly) - sheared_masks = PolygonMasks(sheared_masks, *out_shape) - return sheared_masks - - def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0): - """See :func:`BaseInstanceMasks.rotate`.""" - if len(self.masks) == 0: - rotated_masks = PolygonMasks([], *out_shape) - else: - rotated_masks = [] - rotate_matrix = cv2.getRotationMatrix2D(center, -angle, scale) - for poly_per_obj in self.masks: - rotated_poly = [] - for p in poly_per_obj: - p = p.copy() - coords = np.stack([p[0::2], p[1::2]], axis=1) # [n, 2] - # pad 1 to convert from format [x, y] to homogeneous - # coordinates format [x, y, 1] - coords = np.concatenate( - (coords, np.ones((coords.shape[0], 1), coords.dtype)), - axis=1) # [n, 3] - rotated_coords = np.matmul( - rotate_matrix[None, :, :], - coords[:, :, None])[..., 0] # [n, 2, 1] -> [n, 2] - rotated_coords[:, 0] = np.clip(rotated_coords[:, 0], 0, - out_shape[1]) - rotated_coords[:, 1] = np.clip(rotated_coords[:, 1], 0, - out_shape[0]) - rotated_poly.append(rotated_coords.reshape(-1)) - rotated_masks.append(rotated_poly) - rotated_masks = PolygonMasks(rotated_masks, *out_shape) - return rotated_masks - - def to_bitmap(self): - """convert polygon masks to bitmap masks.""" - bitmap_masks = self.to_ndarray() - return BitmapMasks(bitmap_masks, self.height, self.width) - - @property - def areas(self): - """Compute areas of masks. - - This func is modified from `detectron2 - `_. - The function only works with Polygons using the shoelace formula. - - Return: - ndarray: areas of each instance - """ # noqa: W501 - area = [] - for polygons_per_obj in self.masks: - area_per_obj = 0 - for p in polygons_per_obj: - area_per_obj += self._polygon_area(p[0::2], p[1::2]) - area.append(area_per_obj) - return np.asarray(area) - - def _polygon_area(self, x, y): - """Compute the area of a component of a polygon. 
- - Using the shoelace formula: - https://stackoverflow.com/questions/24467972/calculate-area-of-polygon-given-x-y-coordinates - - Args: - x (ndarray): x coordinates of the component - y (ndarray): y coordinates of the component - - Return: - float: the are of the component - """ # noqa: 501 - return 0.5 * np.abs( - np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))) - - def to_ndarray(self): - """Convert masks to the format of ndarray.""" - if len(self.masks) == 0: - return np.empty((0, self.height, self.width), dtype=np.uint8) - bitmap_masks = [] - for poly_per_obj in self.masks: - bitmap_masks.append( - polygon_to_bitmap(poly_per_obj, self.height, self.width)) - return np.stack(bitmap_masks) - - def to_tensor(self, dtype, device): - """See :func:`BaseInstanceMasks.to_tensor`.""" - if len(self.masks) == 0: - return torch.empty((0, self.height, self.width), - dtype=dtype, - device=device) - ndarray_masks = self.to_ndarray() - return torch.tensor(ndarray_masks, dtype=dtype, device=device) - - @classmethod - def random(cls, - num_masks=3, - height=32, - width=32, - n_verts=5, - dtype=np.float32, - rng=None): - """Generate random polygon masks for demo / testing purposes. - - Adapted from [1]_ - - References: - .. [1] https://gitlab.kitware.com/computer-vision/kwimage/-/blob/928cae35ca8/kwimage/structs/polygon.py#L379 # noqa: E501 - - Example: - >>> from mmdet.core.mask.structures import PolygonMasks - >>> self = PolygonMasks.random() - >>> print('self = {}'.format(self)) - """ - from mmdet.utils.util_random import ensure_rng - rng = ensure_rng(rng) - - def _gen_polygon(n, irregularity, spikeyness): - """Creates the polygon by sampling points on a circle around the - centre. Random noise is added by varying the angular spacing - between sequential points, and by varying the radial distance of - each point from the centre. - - Based on original code by Mike Ounsworth - - Args: - n (int): number of vertices - irregularity (float): [0,1] indicating how much variance there - is in the angular spacing of vertices. [0,1] will map to - [0, 2pi/numberOfVerts] - spikeyness (float): [0,1] indicating how much variance there is - in each vertex from the circle of radius aveRadius. [0,1] - will map to [0, aveRadius] - - Returns: - a list of vertices, in CCW order. 
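`_polygon_area` above evaluates the shoelace formula, A = 0.5 * |sum_i (x_i * y_{i+1} - x_{i+1} * y_i)|, in vectorised form with `np.roll`. A quick standalone check of that exact expression against shapes whose areas are known in closed form:

```python
# Sanity check of the vectorised shoelace formula used by _polygon_area.
import numpy as np

def shoelace_area(x, y):
    # 0.5 * |sum_i (x_i * y_{i+1} - x_{i+1} * y_i)|, written with np.roll
    return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# unit square -> area 1, axis-aligned 10x10 square -> area 100
print(shoelace_area(np.array([0, 1, 1, 0]), np.array([0, 0, 1, 1])))      # 1.0
print(shoelace_area(np.array([0, 10, 10, 0]), np.array([0, 0, 10, 10])))  # 100.0
```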
- """ - from scipy.stats import truncnorm - # Generate around the unit circle - cx, cy = (0.0, 0.0) - radius = 1 - - tau = np.pi * 2 - - irregularity = np.clip(irregularity, 0, 1) * 2 * np.pi / n - spikeyness = np.clip(spikeyness, 1e-9, 1) - - # generate n angle steps - lower = (tau / n) - irregularity - upper = (tau / n) + irregularity - angle_steps = rng.uniform(lower, upper, n) - - # normalize the steps so that point 0 and point n+1 are the same - k = angle_steps.sum() / (2 * np.pi) - angles = (angle_steps / k).cumsum() + rng.uniform(0, tau) - - # Convert high and low values to be wrt the standard normal range - # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.truncnorm.html - low = 0 - high = 2 * radius - mean = radius - std = spikeyness - a = (low - mean) / std - b = (high - mean) / std - tnorm = truncnorm(a=a, b=b, loc=mean, scale=std) - - # now generate the points - radii = tnorm.rvs(n, random_state=rng) - x_pts = cx + radii * np.cos(angles) - y_pts = cy + radii * np.sin(angles) - - points = np.hstack([x_pts[:, None], y_pts[:, None]]) - - # Scale to 0-1 space - points = points - points.min(axis=0) - points = points / points.max(axis=0) - - # Randomly place within 0-1 space - points = points * (rng.rand() * .8 + .2) - min_pt = points.min(axis=0) - max_pt = points.max(axis=0) - - high = (1 - max_pt) - low = (0 - min_pt) - offset = (rng.rand(2) * (high - low)) + low - points = points + offset - return points - - def _order_vertices(verts): - """ - References: - https://stackoverflow.com/questions/1709283/how-can-i-sort-a-coordinate-list-for-a-rectangle-counterclockwise - """ - mlat = verts.T[0].sum() / len(verts) - mlng = verts.T[1].sum() / len(verts) - - tau = np.pi * 2 - angle = (np.arctan2(mlat - verts.T[0], verts.T[1] - mlng) + - tau) % tau - sortx = angle.argsort() - verts = verts.take(sortx, axis=0) - return verts - - # Generate a random exterior for each requested mask - masks = [] - for _ in range(num_masks): - exterior = _order_vertices(_gen_polygon(n_verts, 0.9, 0.9)) - exterior = (exterior * [(width, height)]).astype(dtype) - masks.append([exterior.ravel()]) - - self = cls(masks, height, width) - return self - - -def polygon_to_bitmap(polygons, height, width): - """Convert masks from the form of polygons to bitmaps. 
- - Args: - polygons (list[ndarray]): masks in polygon representation - height (int): mask height - width (int): mask width - - Return: - ndarray: the converted masks in bitmap representation - """ - rles = maskUtils.frPyObjects(polygons, height, width) - rle = maskUtils.merge(rles) - bitmap_mask = maskUtils.decode(rle).astype(np.bool) - return bitmap_mask diff --git a/spaces/CVPR/WALT/mmdet/models/detectors/trident_faster_rcnn.py b/spaces/CVPR/WALT/mmdet/models/detectors/trident_faster_rcnn.py deleted file mode 100644 index f0fd80d41407162df71ba5349fc659d4713cdb6e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/detectors/trident_faster_rcnn.py +++ /dev/null @@ -1,66 +0,0 @@ -from ..builder import DETECTORS -from .faster_rcnn import FasterRCNN - - -@DETECTORS.register_module() -class TridentFasterRCNN(FasterRCNN): - """Implementation of `TridentNet `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None): - - super(TridentFasterRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) - assert self.backbone.num_branch == self.roi_head.num_branch - assert self.backbone.test_branch_idx == self.roi_head.test_branch_idx - self.num_branch = self.backbone.num_branch - self.test_branch_idx = self.backbone.test_branch_idx - - def simple_test(self, img, img_metas, proposals=None, rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - x = self.extract_feat(img) - if proposals is None: - num_branch = (self.num_branch if self.test_branch_idx == -1 else 1) - trident_img_metas = img_metas * num_branch - proposal_list = self.rpn_head.simple_test_rpn(x, trident_img_metas) - else: - proposal_list = proposals - - return self.roi_head.simple_test( - x, proposal_list, trident_img_metas, rescale=rescale) - - def aug_test(self, imgs, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. 
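`polygon_to_bitmap` above rasterises polygons through pycocotools (frPyObjects, then merge, then decode). Below is a hedged, standalone sketch of the same pipeline with an illustrative 10x10 square; note that `np.bool`, used in the original cast, has been removed from recent NumPy releases, so the sketch casts with the builtin `bool` instead.

```python
# Standalone sketch of the pycocotools polygon -> bitmap pipeline; shapes are illustrative.
from pycocotools import mask as maskUtils

height, width = 16, 16
# one object made of a single component: a 10x10 axis-aligned square
polygons = [[0, 0, 10, 0, 10, 10, 0, 10]]

rles = maskUtils.frPyObjects(polygons, height, width)   # one RLE per component
rle = maskUtils.merge(rles)                             # union of the components
bitmap = maskUtils.decode(rle).astype(bool)             # (height, width) boolean mask

print(bitmap.shape, int(bitmap.sum()))                  # canvas shape and rasterised pixel count
```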
- """ - x = self.extract_feats(imgs) - num_branch = (self.num_branch if self.test_branch_idx == -1 else 1) - trident_img_metas = [img_metas * num_branch for img_metas in img_metas] - proposal_list = self.rpn_head.aug_test_rpn(x, trident_img_metas) - return self.roi_head.aug_test( - x, proposal_list, img_metas, rescale=rescale) - - def forward_train(self, img, img_metas, gt_bboxes, gt_labels, **kwargs): - """make copies of img and gts to fit multi-branch.""" - trident_gt_bboxes = tuple(gt_bboxes * self.num_branch) - trident_gt_labels = tuple(gt_labels * self.num_branch) - trident_img_metas = tuple(img_metas * self.num_branch) - - return super(TridentFasterRCNN, - self).forward_train(img, trident_img_metas, - trident_gt_bboxes, trident_gt_labels) diff --git a/spaces/Corran/qnagenerator/app.py b/spaces/Corran/qnagenerator/app.py deleted file mode 100644 index 471276a873b1fee06250dda02662265b50dc00d0..0000000000000000000000000000000000000000 --- a/spaces/Corran/qnagenerator/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import gradio as gr -import torch -import random - -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, AutoModelWithLMHead -from sentence_splitter import SentenceSplitter, split_text_into_sentences -splitter = SentenceSplitter(language='en') - -if torch.cuda.is_available(): - torch_device="cuda:0" -else: - torch_device="cpu" - -ptokenizer = AutoTokenizer.from_pretrained("tuner007/pegasus_paraphrase") -pmodel = AutoModelForSeq2SeqLM.from_pretrained("tuner007/pegasus_paraphrase").to(torch_device) - -def get_answer(input_text,num_return_sequences,num_beams): - batch = ptokenizer([input_text],truncation=True,padding='longest',max_length=60, return_tensors="pt").to(torch_device) - translated = pmodel.generate(**batch,max_length=60,num_beams=num_beams, num_return_sequences=num_return_sequences, temperature=1.5) - tgt_text = ptokenizer.batch_decode(translated, skip_special_tokens=True) - return tgt_text - -qtokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-question-generation-ap") -qmodel = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-question-generation-ap").to(torch_device) - -def get_question(answer, context, max_length=64): - input_text = "answer: %s context: %s " % (answer, context) - features = qtokenizer([input_text], return_tensors='pt').to(torch_device) - - output = qmodel.generate(input_ids=features['input_ids'], - attention_mask=features['attention_mask'], - max_length=max_length) - - return qtokenizer.decode(output[0]) - -def getqna(input): - input=split_text_into_sentences(text=input, language='en') - if len(input)==0: - answer= get_answer(input,10,10)[random.randint(0, 9)] - else: - sentences=[get_answer(sentence,10,10)[random.randint(0, 9)] for sentence in input] - answer= " ".join(sentences) - answer= get_answer(answer,10,10)[random.randint(0, 9)] - question= get_question(answer, input).replace("","").replace("","") - return "%s \n answer:%s" % (question, answer) - -app = gr.Interface(fn=getqna, inputs="text", outputs="text") -app.launch() diff --git a/spaces/DHEIVER/Kidney_Image_Classifier/app.py b/spaces/DHEIVER/Kidney_Image_Classifier/app.py deleted file mode 100644 index b655c53c7f7855b7d8b4d3d00a99d0a4cce98e17..0000000000000000000000000000000000000000 --- a/spaces/DHEIVER/Kidney_Image_Classifier/app.py +++ /dev/null @@ -1,29 +0,0 @@ -import gradio as gr -import tensorflow as tf -import tensorflow_io as tfio -import numpy as np - -loaded_model = tf.keras.models.load_model('kidney2.h5') - -label_names = { - "1": "Cyst", - 
"2": "Normal", - "3": "Stone", - "4": "Tumor" -} - -def classify_kidney_image(img): - resize = tf.image.resize(img, (224, 224)) - gray = tfio.experimental.color.bgr_to_rgb(resize) - normalized_img = gray / 255.0 - yhat = loaded_model.predict(np.expand_dims(normalized_img, 0)) - class_index = np.argmax(yhat, axis=1)[0] - predicted_label = label_names[str(class_index + 1)] - probabilities = {label_names[str(i + 1)]: str(prob) for i, prob in enumerate(yhat[0])} - return predicted_label, probabilities - -image = gr.inputs.Image(shape=(224, 224)) -label = gr.outputs.Label() - -app = gr.Interface(fn=classify_kidney_image, inputs=image, outputs=label, interpretation='default', title='Kidney Image Classifier') -app.launch(debug=True) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-ec39e521.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-ec39e521.css deleted file mode 100644 index 71962e1bf13e4d7173920eb76341f62a3250c82e..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-ec39e521.css +++ /dev/null @@ -1 +0,0 @@ -@font-face{font-family:KaTeX_AMS;font-style:normal;font-weight:400;src:url(./KaTeX_AMS-Regular-0cdd387c.woff2) format("woff2"),url(./KaTeX_AMS-Regular-30da91e8.woff) format("woff"),url(./KaTeX_AMS-Regular-68534840.ttf) format("truetype")}@font-face{font-family:KaTeX_Caligraphic;font-style:normal;font-weight:700;src:url(./KaTeX_Caligraphic-Bold-de7701e4.woff2) format("woff2"),url(./KaTeX_Caligraphic-Bold-1ae6bd74.woff) format("woff"),url(./KaTeX_Caligraphic-Bold-07d8e303.ttf) format("truetype")}@font-face{font-family:KaTeX_Caligraphic;font-style:normal;font-weight:400;src:url(./KaTeX_Caligraphic-Regular-5d53e70a.woff2) format("woff2"),url(./KaTeX_Caligraphic-Regular-3398dd02.woff) format("woff"),url(./KaTeX_Caligraphic-Regular-ed0b7437.ttf) format("truetype")}@font-face{font-family:KaTeX_Fraktur;font-style:normal;font-weight:700;src:url(./KaTeX_Fraktur-Bold-74444efd.woff2) format("woff2"),url(./KaTeX_Fraktur-Bold-9be7ceb8.woff) format("woff"),url(./KaTeX_Fraktur-Bold-9163df9c.ttf) format("truetype")}@font-face{font-family:KaTeX_Fraktur;font-style:normal;font-weight:400;src:url(./KaTeX_Fraktur-Regular-51814d27.woff2) format("woff2"),url(./KaTeX_Fraktur-Regular-5e28753b.woff) format("woff"),url(./KaTeX_Fraktur-Regular-1e6f9579.ttf) format("truetype")}@font-face{font-family:KaTeX_Main;font-style:normal;font-weight:700;src:url(./KaTeX_Main-Bold-0f60d1b8.woff2) format("woff2"),url(./KaTeX_Main-Bold-c76c5d69.woff) format("woff"),url(./KaTeX_Main-Bold-138ac28d.ttf) format("truetype")}@font-face{font-family:KaTeX_Main;font-style:italic;font-weight:700;src:url(./KaTeX_Main-BoldItalic-99cd42a3.woff2) format("woff2"),url(./KaTeX_Main-BoldItalic-a6f7ec0d.woff) format("woff"),url(./KaTeX_Main-BoldItalic-70ee1f64.ttf) format("truetype")}@font-face{font-family:KaTeX_Main;font-style:italic;font-weight:400;src:url(./KaTeX_Main-Italic-97479ca6.woff2) format("woff2"),url(./KaTeX_Main-Italic-f1d6ef86.woff) format("woff"),url(./KaTeX_Main-Italic-0d85ae7c.ttf) format("truetype")}@font-face{font-family:KaTeX_Main;font-style:normal;font-weight:400;src:url(./KaTeX_Main-Regular-c2342cd8.woff2) format("woff2"),url(./KaTeX_Main-Regular-c6368d87.woff) format("woff"),url(./KaTeX_Main-Regular-d0332f52.ttf) 
format("truetype")}@font-face{font-family:KaTeX_Math;font-style:italic;font-weight:700;src:url(./KaTeX_Math-BoldItalic-dc47344d.woff2) format("woff2"),url(./KaTeX_Math-BoldItalic-850c0af5.woff) format("woff"),url(./KaTeX_Math-BoldItalic-f9377ab0.ttf) format("truetype")}@font-face{font-family:KaTeX_Math;font-style:italic;font-weight:400;src:url(./KaTeX_Math-Italic-7af58c5e.woff2) format("woff2"),url(./KaTeX_Math-Italic-8a8d2445.woff) format("woff"),url(./KaTeX_Math-Italic-08ce98e5.ttf) format("truetype")}@font-face{font-family:KaTeX_SansSerif;font-style:normal;font-weight:700;src:url(./KaTeX_SansSerif-Bold-e99ae511.woff2) format("woff2"),url(./KaTeX_SansSerif-Bold-ece03cfd.woff) format("woff"),url(./KaTeX_SansSerif-Bold-1ece03f7.ttf) format("truetype")}@font-face{font-family:KaTeX_SansSerif;font-style:italic;font-weight:400;src:url(./KaTeX_SansSerif-Italic-00b26ac8.woff2) format("woff2"),url(./KaTeX_SansSerif-Italic-91ee6750.woff) format("woff"),url(./KaTeX_SansSerif-Italic-3931dd81.ttf) format("truetype")}@font-face{font-family:KaTeX_SansSerif;font-style:normal;font-weight:400;src:url(./KaTeX_SansSerif-Regular-68e8c73e.woff2) format("woff2"),url(./KaTeX_SansSerif-Regular-11e4dc8a.woff) format("woff"),url(./KaTeX_SansSerif-Regular-f36ea897.ttf) format("truetype")}@font-face{font-family:KaTeX_Script;font-style:normal;font-weight:400;src:url(./KaTeX_Script-Regular-036d4e95.woff2) format("woff2"),url(./KaTeX_Script-Regular-d96cdf2b.woff) format("woff"),url(./KaTeX_Script-Regular-1c67f068.ttf) format("truetype")}@font-face{font-family:KaTeX_Size1;font-style:normal;font-weight:400;src:url(./KaTeX_Size1-Regular-6b47c401.woff2) format("woff2"),url(./KaTeX_Size1-Regular-c943cc98.woff) format("woff"),url(./KaTeX_Size1-Regular-95b6d2f1.ttf) format("truetype")}@font-face{font-family:KaTeX_Size2;font-style:normal;font-weight:400;src:url(./KaTeX_Size2-Regular-d04c5421.woff2) format("woff2"),url(./KaTeX_Size2-Regular-2014c523.woff) format("woff"),url(./KaTeX_Size2-Regular-a6b2099f.ttf) 
format("truetype")}@font-face{font-family:KaTeX_Size3;font-style:normal;font-weight:400;src:url(data:font/woff2;base64,d09GMgABAAAAAA4oAA4AAAAAHbQAAA3TAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAABmAAgRQIDgmcDBEICo1oijYBNgIkA14LMgAEIAWJAAeBHAyBHBvbGiMRdnO0IkRRkiYDgr9KsJ1NUAf2kILNxgUmgqIgq1P89vcbIcmsQbRps3vCcXdYOKSWEPEKgZgQkprQQsxIXUgq0DqpGKmIvrgkeVGtEQD9DzAO29fM9jYhxZEsL2FeURH2JN4MIcTdO049NCVdxQ/w9NrSYFEBKTDKpLKfNkCGDc1RwjZLQcm3vqJ2UW9Xfa3tgAHz6ivp6vgC2yD4/6352ndnN0X0TL7seypkjZlMsjmZnf0Mm5Q+JykRWQBKCVCVPbARPXWyQtb5VgLB6Biq7/Uixcj2WGqdI8tGSgkuRG+t910GKP2D7AQH0DB9FMDW/obJZ8giFI3Wg8Cvevz0M+5m0rTh7XDBlvo9Y4vm13EXmfttwI4mBo1EG15fxJhUiCLbiiyCf/ZA6MFAhg3pGIZGdGIVjtPn6UcMk9A/UUr9PhoNsCENw1APAq0gpH73e+M+0ueyHbabc3vkbcdtzcf/fiy+NxQEjf9ud/ELBHAXJ0nk4z+MXH2Ev/kWyV4k7SkvpPc9Qr38F6RPWnM9cN6DJ0AdD1BhtgABtmoRoFCvPsBAumNm6soZG2Gk5GyVTo2sJncSyp0jQTYoR6WDvTwaaEcHsxHfvuWhHA3a6bN7twRKtcGok6NsCi7jYRrM2jExsUFMxMQYuJbMhuWNOumEJy9hi29Dmg5zMp/A5+hhPG19j1vBrq8JTLr8ki5VLPmG/PynJHVul440bxg5xuymHUFPBshC+nA9I1FmwbRBTNHAcik3Oae0cxKoI3MOriM42UrPe51nsaGxJ+WfXubAsP84aabUlQSJ1IiE0iPETLUU4CATgfXSCSpuRFRmCGbO+wSpAnzaeaCYW1VNEysRtuXCEL1kUFUbbtMv3Tilt/1c11jt3Q5bbMa84cpWipp8Elw3MZhOHsOlwwVUQM3lAR35JiFQbaYCRnMF2lxAWoOg2gyoIV4PouX8HytNIfLhqpJtXB4vjiViUI8IJ7bkC4ikkQvKksnOTKICwnqWSZ9YS5f0WCxmpgjbIq7EJcM4aI2nmhLNY2JIUgOjXZFWBHb+x5oh6cwb0Tv1ackHdKi0I9OO2wE9aogIOn540CCCziyhN+IaejtgAONKznHlHyutPrHGwCx9S6B8kfS4Mfi4Eyv7OU730bT1SCBjt834cXsf43zVjPUqqJjgrjeGnBxSG4aYAKFuVbeCfkDIjAqMb6yLNIbCuvXhMH2/+k2vkNpkORhR59N1CkzoOENvneIosjYmuTxlhUzaGEJQ/iWqx4dmwpmKjrwTiTGTCVozNAYqk/zXOndWxuWSmJkQpJw3pK5KX6QrLt5LATMqpmPAQhkhK6PUjzHUn7E0gHE0kPE0iKkolgkUx9SZmVAdDgpffdyJKg3k7VmzYGCwVXGz/tXmkOIp+vcWs+EMuhhvN0h9uhfzWJziBQmCREGSIFmQIkgVpAnSBRmC//6hkLZwaVhwxlrJSOdqlFtOYxlau9F2QN5Y98xmIAsiM1HVp2VFX+DHHGg6Ecjh3vmqtidX3qHI2qycTk/iwxSt5UzTmEP92ZBnEWTk4Mx8Mpl78ZDokxg/KWb+Q0QkvdKVmq3TMW+RXEgrsziSAfNXFMhDc60N5N9jQzjfO0kBKpUZl0ZmwJ41j/B9Hz6wmRaJB84niNmQrzp9eSlQCDDzazGDdVi3P36VZQ+Jy4f9UBNp+3zTjqI4abaFAm+GShVaXlsGdF3FYzZcDI6cori4kMxUECl9IjJZpzkvitAoxKue+90pDMvcKRxLl53TmOKCmV/xRolNKSqqUxc6LStOETmFOiLZZptlZepcKiAzteG8PEdpnQpbOMNcMsR4RR2Bs0cKFEvSmIjAFcnarqwUL4lDhHmnVkwu1IwshbiCcgvOheZuYyOteufZZwlcTlLgnZ3o/WcYdzZHW/WGaqaVfmTZ1aWCceJjkbZqsfbkOtcFlUZM/jy+hXHDbaUobWqqXaeWobbLO99yG5N3U4wxco0rQGGcOLASFMXeJoham8M+/x6O2WywK2l4HGbq1CoUyC/IZikQhdq3SiuNrvAEj0AVu9x2x3lp/xWzahaxidezFVtdcb5uEnzyl0ZmYiuKI0exvCd4Xc9CV1KB0db00z92wDPde0kukbvZIWN6jUWFTmPIC/Y4UPCm8UfDTFZpZNon1qLFTkBhxzB+FjQRA2Q/YRJT8pQigslMaUpFyAG8TMlXigiqmAZX4xgijKjRlGpLE0GdplRfCaJo0JQaSxNBk6ZmMzcya0FmrcisDdn0Q3HI2sWSppYigmlM1XT/kLQZSNpMJG0WkjYbSZuDpM1F0uYhFc1HxU4m1QJjDK6iL0S5uSj5rgXc3RejEigtcRBtqYPQsiTskmO5vosV+q4VGIKbOkDg0jtRrq+Em1YloaTFar3EGr1EUC8R0kus1Uus00usL97ABr2BjXoDm/QGNhuWtMVBKOwg/i78lT7hBsAvDmwHc/ao3vmUbBmhjeYySZNWvGkfZAgISDSaDo1SVpzGDsAEkF8B+gEapViUoZgUWXcRIGFZNm6gWbAKk0bp0k1MHG9fLYtV4iS2SmLEQFARzRcnf9PUS0LVn05/J9MiRRBU3v2IrvW974v4N00L7ZMk0wXP1409CHo/an8zTRHD3eSJ6m8D4YMkZNl3M79sqeuAsr/m3f+8/yl7A50aiAEJgeBeMWzu7ui9UfUBCe2TIqZIoOd/3/udRBOQidQZUERzb2/VwZN1H/Sju82ew2H2Wfr6qvfVf3hqwDvAIpkQVFy4B9Pe9e4/XvPeceu7h3dvO56iJPf0+A6cqA2ip18ER+iFgggiuOkvj24bby0N9j2UHIkgqIt+sVgfodC4YghLSMjSZbH0VR/6dMDrYJeKHilKTemt6v6kvzvn3/RrdWtr0GoN/xL+Sex/cPYLUpepx9cz/D46UPU5KXgAQa+NDps1v6J3xP1i2HtaDB0M9aX2deA7SYff//+gUCovMmIK/qfsFcOk+4Y5ZN97XlG6zebqtMbKgeRFi51vnxTQYBUik2rS/Cn6PC8ADR8FGxsRPB82dzfND90gIcshOcYUkfjherBz53odpm6TP8txlwOZ71xmfHHOvq053qFF/MRlS3jP0ELudrf2OeN8DHvp6ZceLe8qKYvWz/7yp0u4dKPfli3CYq0O13Ih71mylJ80tOi10On8wi+F4+LWgDPeJ30msSQt9/vkmHq9/Lvo2b461mP801v3W4xTcs6CbvF9UDdrSt+A8OUbpSh55qAUFXWznBBfdeJ8a4d7ugT5tvxUza3h9m4H7ptTqiG4z0g5dc0X29OcGlhpGFMpQo9y
tTS+NViZpNdvU4kWx+LKxNY10kQ1yqGXrhe4/1nvP7E+nd5A92TtaRplbHSqoIdOqtRWti+fkB5/n1+/VvCmz12pG1kpQWsfi1ftlBobm0bpngs16CHkbIwdLnParxtTV3QYRlfJ0KFskH7pdN/YDn+yRuSd7sNH3aO0DYPggk6uWuXrfOc+fa3VTxFVvKaNxHsiHmsXyCLIE5yuOeN3/Jdf8HBL/5M6shjyhxHx9BjB1O0+4NLOnjLLSxwO7ukN4jMbOIcD879KLSi6Pk61Oqm2377n8079PXEEQ7cy7OKEC9nbpet118fxweTafpt69x/Bt8UqGzNQt7aelpc44dn5cqhwf71+qKp/Zf/+a0zcizOUWpl/iBcSXip0pplkatCchoH5c5aUM8I7/dWxAej8WicPL1URFZ9BDJelUwEwTkGqUhgSlydVes95YdXvhh9Gfz/aeFWvgVb4tuLbcv4+wLdutVZv/cUonwBD/6eDlE0aSiKK/uoH3+J1wDE/jMVqY2ysGufN84oIXB0sPzy8ollX/LegY74DgJXJR57sn+VGza0x3DnuIgABFM15LmajjjsNlYj+JEZGbuRYcAMOWxFkPN2w6Wd46xo4gVWQR/X4lyI/R6K/YK0110GzudPRW7Y+UOBGTfNNzHeYT0fiH0taunBpq9HEW8OKSaBGj21L0MqenEmNRWBAWDWAk4CpNoEZJ2tTaPFgbQYj8HxtFilErs3BTRwT8uO1NXQaWfIotchmPkAF5mMBAliEmZiOGVgCG9LgRzpscMAOOwowlT3JhusdazXGSC/hxR3UlmWVwWHpOIKheqONvjyhSiTHIkVUco5bnji8m//zL7PKaT1Vl5I6UE609f+gkr6MZKVyKc7zJRmCahLsdlyA5fdQkRSan9LgnnLEyGSkaKJCJog0wAgvepWBt80+1yKln1bMVtCljfNWDueKLsWwaEbBSfSPTEmVRsUcYYMnEjcjeyCZzBXK9E9BYBXLKjOSpUDR+nEV3TFSUdQaz+ot98QxgXwx0GQ+EEUAKB2qZPkQQ0GqFD8UPFMqyaCHM24BZmSGic9EYMagKizOw9Hz50DMrDLrqqLkTAhplMictiCAx5S3BIUQdeJeLnBy2CNtMfz6cV4u8XKoFZQesbf9YZiIERiHjaNodDW6LgcirX/mPnJIkBGDUpTBhSa0EIr38D5hCIszhCM8URGBqImoWjpvpt1ebu/v3Gl3qJfMnNM+9V+kiRFyROTPHQWOcs1dNW94/ukKMPZBvDi55i5CttdeJz84DLngLqjcdwEZ87bFFR8CIG35OAkDVN6VRDZ7aq67NteYqZ2lpT8oYB2CytoBd6VuAx4WgiAsnuj3WohG+LugzXiQRDeM3XYXlULv4dp5VFYC) format("woff2"),url(./KaTeX_Size3-Regular-6ab6b62e.woff) format("woff"),url(./KaTeX_Size3-Regular-500e04d5.ttf) format("truetype")}@font-face{font-family:KaTeX_Size4;font-style:normal;font-weight:400;src:url(./KaTeX_Size4-Regular-a4af7d41.woff2) format("woff2"),url(./KaTeX_Size4-Regular-99f9c675.woff) format("woff"),url(./KaTeX_Size4-Regular-c647367d.ttf) format("truetype")}@font-face{font-family:KaTeX_Typewriter;font-style:normal;font-weight:400;src:url(./KaTeX_Typewriter-Regular-71d517d6.woff2) format("woff2"),url(./KaTeX_Typewriter-Regular-e14fed02.woff) format("woff"),url(./KaTeX_Typewriter-Regular-f01f3e87.ttf) format("truetype")}.gradio-container-3-37-0 .katex{text-rendering:auto;font: 1.21em KaTeX_Main,Times New Roman,serif;line-height:1.2;text-indent:0}.gradio-container-3-37-0 .katex *{-ms-high-contrast-adjust:none!important;border-color:currentColor}.gradio-container-3-37-0 .katex .katex-version:after{content:"0.16.7"}.gradio-container-3-37-0 .katex .katex-mathml{clip:rect(1px,1px,1px,1px);border:0;height:1px;overflow:hidden;padding:0;position:absolute;width:1px}.gradio-container-3-37-0 .katex .katex-html>.newline{display:block}.gradio-container-3-37-0 .katex .base{position:relative;white-space:nowrap;width:-webkit-min-content;width:-moz-min-content;width:min-content}.gradio-container-3-37-0 .katex .base,.gradio-container-3-37-0 .katex .strut{display:inline-block}.gradio-container-3-37-0 .katex .textbf{font-weight:700}.gradio-container-3-37-0 .katex .textit{font-style:italic}.gradio-container-3-37-0 .katex .textrm{font-family:KaTeX_Main}.gradio-container-3-37-0 .katex .textsf{font-family:KaTeX_SansSerif}.gradio-container-3-37-0 .katex .texttt{font-family:KaTeX_Typewriter}.gradio-container-3-37-0 .katex .mathnormal{font-family:KaTeX_Math;font-style:italic}.gradio-container-3-37-0 .katex .mathit{font-family:KaTeX_Main;font-style:italic}.gradio-container-3-37-0 .katex .mathrm{font-style:normal}.gradio-container-3-37-0 .katex .mathbf{font-family:KaTeX_Main;font-weight:700}.gradio-container-3-37-0 .katex .boldsymbol{font-family:KaTeX_Math;font-style:italic;font-weight:700}.gradio-container-3-37-0 .katex 
.amsrm,.gradio-container-3-37-0 .katex .mathbb,.gradio-container-3-37-0 .katex .textbb{font-family:KaTeX_AMS}.gradio-container-3-37-0 .katex .mathcal{font-family:KaTeX_Caligraphic}.gradio-container-3-37-0 .katex .mathfrak,.gradio-container-3-37-0 .katex .textfrak{font-family:KaTeX_Fraktur}.gradio-container-3-37-0 .katex .mathtt{font-family:KaTeX_Typewriter}.gradio-container-3-37-0 .katex .mathscr,.gradio-container-3-37-0 .katex .textscr{font-family:KaTeX_Script}.gradio-container-3-37-0 .katex .mathsf,.gradio-container-3-37-0 .katex .textsf{font-family:KaTeX_SansSerif}.gradio-container-3-37-0 .katex .mathboldsf,.gradio-container-3-37-0 .katex .textboldsf{font-family:KaTeX_SansSerif;font-weight:700}.gradio-container-3-37-0 .katex .mathitsf,.gradio-container-3-37-0 .katex .textitsf{font-family:KaTeX_SansSerif;font-style:italic}.gradio-container-3-37-0 .katex .mainrm{font-family:KaTeX_Main;font-style:normal}.gradio-container-3-37-0 .katex .vlist-t{border-collapse:collapse;display:inline-table;table-layout:fixed}.gradio-container-3-37-0 .katex .vlist-r{display:table-row}.gradio-container-3-37-0 .katex .vlist{display:table-cell;position:relative;vertical-align:bottom}.gradio-container-3-37-0 .katex .vlist>span{display:block;height:0;position:relative}.gradio-container-3-37-0 .katex .vlist>span>span{display:inline-block}.gradio-container-3-37-0 .katex .vlist>span>.pstrut{overflow:hidden;width:0}.gradio-container-3-37-0 .katex .vlist-t2{margin-right:-2px}.gradio-container-3-37-0 .katex .vlist-s{display:table-cell;font-size:1px;min-width:2px;vertical-align:bottom;width:2px}.gradio-container-3-37-0 .katex .vbox{align-items:baseline;display:inline-flex;flex-direction:column}.gradio-container-3-37-0 .katex .hbox{width:100%}.gradio-container-3-37-0 .katex .hbox,.gradio-container-3-37-0 .katex .thinbox{display:inline-flex;flex-direction:row}.gradio-container-3-37-0 .katex .thinbox{max-width:0;width:0}.gradio-container-3-37-0 .katex .msupsub{text-align:left}.gradio-container-3-37-0 .katex .mfrac>span>span{text-align:center}.gradio-container-3-37-0 .katex .mfrac .frac-line{border-bottom-style:solid;display:inline-block;width:100%}.gradio-container-3-37-0 .katex .hdashline,.gradio-container-3-37-0 .katex .hline,.gradio-container-3-37-0 .katex .mfrac .frac-line,.gradio-container-3-37-0 .katex .overline .overline-line,.gradio-container-3-37-0 .katex .rule,.gradio-container-3-37-0 .katex .underline .underline-line{min-height:1px}.gradio-container-3-37-0 .katex .mspace{display:inline-block}.gradio-container-3-37-0 .katex .clap,.gradio-container-3-37-0 .katex .llap,.gradio-container-3-37-0 .katex .rlap{position:relative;width:0}.gradio-container-3-37-0 .katex .clap>.inner,.gradio-container-3-37-0 .katex .llap>.inner,.gradio-container-3-37-0 .katex .rlap>.inner{position:absolute}.gradio-container-3-37-0 .katex .clap>.fix,.gradio-container-3-37-0 .katex .llap>.fix,.gradio-container-3-37-0 .katex .rlap>.fix{display:inline-block}.gradio-container-3-37-0 .katex .llap>.inner{right:0}.gradio-container-3-37-0 .katex .clap>.inner,.gradio-container-3-37-0 .katex .rlap>.inner{left:0}.gradio-container-3-37-0 .katex .clap>.inner>span{margin-left:-50%;margin-right:50%}.gradio-container-3-37-0 .katex .rule{border:0 solid;display:inline-block;position:relative}.gradio-container-3-37-0 .katex .hline,.gradio-container-3-37-0 .katex .overline .overline-line,.gradio-container-3-37-0 .katex .underline .underline-line{border-bottom-style:solid;display:inline-block;width:100%}.gradio-container-3-37-0 .katex 
.hdashline{border-bottom-style:dashed;display:inline-block;width:100%}.gradio-container-3-37-0 .katex .sqrt>.root{margin-left:.27777778em;margin-right:-.55555556em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size1.size1,.gradio-container-3-37-0 .katex .sizing.reset-size1.size1{font-size:1em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size1.size2,.gradio-container-3-37-0 .katex .sizing.reset-size1.size2{font-size:1.2em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size1.size3,.gradio-container-3-37-0 .katex .sizing.reset-size1.size3{font-size:1.4em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size1.size4,.gradio-container-3-37-0 .katex .sizing.reset-size1.size4{font-size:1.6em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size1.size5,.gradio-container-3-37-0 .katex .sizing.reset-size1.size5{font-size:1.8em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size1.size6,.gradio-container-3-37-0 .katex .sizing.reset-size1.size6{font-size:2em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size1.size7,.gradio-container-3-37-0 .katex .sizing.reset-size1.size7{font-size:2.4em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size1.size8,.gradio-container-3-37-0 .katex .sizing.reset-size1.size8{font-size:2.88em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size1.size9,.gradio-container-3-37-0 .katex .sizing.reset-size1.size9{font-size:3.456em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size1.size10,.gradio-container-3-37-0 .katex .sizing.reset-size1.size10{font-size:4.148em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size1.size11,.gradio-container-3-37-0 .katex .sizing.reset-size1.size11{font-size:4.976em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size2.size1,.gradio-container-3-37-0 .katex .sizing.reset-size2.size1{font-size:.83333333em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size2.size2,.gradio-container-3-37-0 .katex .sizing.reset-size2.size2{font-size:1em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size2.size3,.gradio-container-3-37-0 .katex .sizing.reset-size2.size3{font-size:1.16666667em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size2.size4,.gradio-container-3-37-0 .katex .sizing.reset-size2.size4{font-size:1.33333333em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size2.size5,.gradio-container-3-37-0 .katex .sizing.reset-size2.size5{font-size:1.5em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size2.size6,.gradio-container-3-37-0 .katex .sizing.reset-size2.size6{font-size:1.66666667em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size2.size7,.gradio-container-3-37-0 .katex .sizing.reset-size2.size7{font-size:2em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size2.size8,.gradio-container-3-37-0 .katex .sizing.reset-size2.size8{font-size:2.4em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size2.size9,.gradio-container-3-37-0 .katex .sizing.reset-size2.size9{font-size:2.88em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size2.size10,.gradio-container-3-37-0 .katex .sizing.reset-size2.size10{font-size:3.45666667em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size2.size11,.gradio-container-3-37-0 .katex .sizing.reset-size2.size11{font-size:4.14666667em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size3.size1,.gradio-container-3-37-0 .katex .sizing.reset-size3.size1{font-size:.71428571em}.gradio-container-3-37-0 .katex 
.fontsize-ensurer.reset-size3.size2,.gradio-container-3-37-0 .katex .sizing.reset-size3.size2{font-size:.85714286em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size3.size3,.gradio-container-3-37-0 .katex .sizing.reset-size3.size3{font-size:1em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size3.size4,.gradio-container-3-37-0 .katex .sizing.reset-size3.size4{font-size:1.14285714em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size3.size5,.gradio-container-3-37-0 .katex .sizing.reset-size3.size5{font-size:1.28571429em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size3.size6,.gradio-container-3-37-0 .katex .sizing.reset-size3.size6{font-size:1.42857143em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size3.size7,.gradio-container-3-37-0 .katex .sizing.reset-size3.size7{font-size:1.71428571em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size3.size8,.gradio-container-3-37-0 .katex .sizing.reset-size3.size8{font-size:2.05714286em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size3.size9,.gradio-container-3-37-0 .katex .sizing.reset-size3.size9{font-size:2.46857143em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size3.size10,.gradio-container-3-37-0 .katex .sizing.reset-size3.size10{font-size:2.96285714em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size3.size11,.gradio-container-3-37-0 .katex .sizing.reset-size3.size11{font-size:3.55428571em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size4.size1,.gradio-container-3-37-0 .katex .sizing.reset-size4.size1{font-size:.625em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size4.size2,.gradio-container-3-37-0 .katex .sizing.reset-size4.size2{font-size:.75em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size4.size3,.gradio-container-3-37-0 .katex .sizing.reset-size4.size3{font-size:.875em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size4.size4,.gradio-container-3-37-0 .katex .sizing.reset-size4.size4{font-size:1em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size4.size5,.gradio-container-3-37-0 .katex .sizing.reset-size4.size5{font-size:1.125em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size4.size6,.gradio-container-3-37-0 .katex .sizing.reset-size4.size6{font-size:1.25em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size4.size7,.gradio-container-3-37-0 .katex .sizing.reset-size4.size7{font-size:1.5em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size4.size8,.gradio-container-3-37-0 .katex .sizing.reset-size4.size8{font-size:1.8em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size4.size9,.gradio-container-3-37-0 .katex .sizing.reset-size4.size9{font-size:2.16em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size4.size10,.gradio-container-3-37-0 .katex .sizing.reset-size4.size10{font-size:2.5925em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size4.size11,.gradio-container-3-37-0 .katex .sizing.reset-size4.size11{font-size:3.11em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size5.size1,.gradio-container-3-37-0 .katex .sizing.reset-size5.size1{font-size:.55555556em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size5.size2,.gradio-container-3-37-0 .katex .sizing.reset-size5.size2{font-size:.66666667em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size5.size3,.gradio-container-3-37-0 .katex .sizing.reset-size5.size3{font-size:.77777778em}.gradio-container-3-37-0 .katex 
.fontsize-ensurer.reset-size5.size4,.gradio-container-3-37-0 .katex .sizing.reset-size5.size4{font-size:.88888889em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size5.size5,.gradio-container-3-37-0 .katex .sizing.reset-size5.size5{font-size:1em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size5.size6,.gradio-container-3-37-0 .katex .sizing.reset-size5.size6{font-size:1.11111111em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size5.size7,.gradio-container-3-37-0 .katex .sizing.reset-size5.size7{font-size:1.33333333em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size5.size8,.gradio-container-3-37-0 .katex .sizing.reset-size5.size8{font-size:1.6em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size5.size9,.gradio-container-3-37-0 .katex .sizing.reset-size5.size9{font-size:1.92em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size5.size10,.gradio-container-3-37-0 .katex .sizing.reset-size5.size10{font-size:2.30444444em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size5.size11,.gradio-container-3-37-0 .katex .sizing.reset-size5.size11{font-size:2.76444444em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size6.size1,.gradio-container-3-37-0 .katex .sizing.reset-size6.size1{font-size:.5em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size6.size2,.gradio-container-3-37-0 .katex .sizing.reset-size6.size2{font-size:.6em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size6.size3,.gradio-container-3-37-0 .katex .sizing.reset-size6.size3{font-size:.7em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size6.size4,.gradio-container-3-37-0 .katex .sizing.reset-size6.size4{font-size:.8em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size6.size5,.gradio-container-3-37-0 .katex .sizing.reset-size6.size5{font-size:.9em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size6.size6,.gradio-container-3-37-0 .katex .sizing.reset-size6.size6{font-size:1em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size6.size7,.gradio-container-3-37-0 .katex .sizing.reset-size6.size7{font-size:1.2em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size6.size8,.gradio-container-3-37-0 .katex .sizing.reset-size6.size8{font-size:1.44em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size6.size9,.gradio-container-3-37-0 .katex .sizing.reset-size6.size9{font-size:1.728em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size6.size10,.gradio-container-3-37-0 .katex .sizing.reset-size6.size10{font-size:2.074em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size6.size11,.gradio-container-3-37-0 .katex .sizing.reset-size6.size11{font-size:2.488em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size7.size1,.gradio-container-3-37-0 .katex .sizing.reset-size7.size1{font-size:.41666667em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size7.size2,.gradio-container-3-37-0 .katex .sizing.reset-size7.size2{font-size:.5em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size7.size3,.gradio-container-3-37-0 .katex .sizing.reset-size7.size3{font-size:.58333333em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size7.size4,.gradio-container-3-37-0 .katex .sizing.reset-size7.size4{font-size:.66666667em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size7.size5,.gradio-container-3-37-0 .katex .sizing.reset-size7.size5{font-size:.75em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size7.size6,.gradio-container-3-37-0 .katex 
.sizing.reset-size7.size6{font-size:.83333333em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size7.size7,.gradio-container-3-37-0 .katex .sizing.reset-size7.size7{font-size:1em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size7.size8,.gradio-container-3-37-0 .katex .sizing.reset-size7.size8{font-size:1.2em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size7.size9,.gradio-container-3-37-0 .katex .sizing.reset-size7.size9{font-size:1.44em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size7.size10,.gradio-container-3-37-0 .katex .sizing.reset-size7.size10{font-size:1.72833333em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size7.size11,.gradio-container-3-37-0 .katex .sizing.reset-size7.size11{font-size:2.07333333em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size8.size1,.gradio-container-3-37-0 .katex .sizing.reset-size8.size1{font-size:.34722222em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size8.size2,.gradio-container-3-37-0 .katex .sizing.reset-size8.size2{font-size:.41666667em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size8.size3,.gradio-container-3-37-0 .katex .sizing.reset-size8.size3{font-size:.48611111em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size8.size4,.gradio-container-3-37-0 .katex .sizing.reset-size8.size4{font-size:.55555556em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size8.size5,.gradio-container-3-37-0 .katex .sizing.reset-size8.size5{font-size:.625em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size8.size6,.gradio-container-3-37-0 .katex .sizing.reset-size8.size6{font-size:.69444444em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size8.size7,.gradio-container-3-37-0 .katex .sizing.reset-size8.size7{font-size:.83333333em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size8.size8,.gradio-container-3-37-0 .katex .sizing.reset-size8.size8{font-size:1em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size8.size9,.gradio-container-3-37-0 .katex .sizing.reset-size8.size9{font-size:1.2em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size8.size10,.gradio-container-3-37-0 .katex .sizing.reset-size8.size10{font-size:1.44027778em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size8.size11,.gradio-container-3-37-0 .katex .sizing.reset-size8.size11{font-size:1.72777778em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size9.size1,.gradio-container-3-37-0 .katex .sizing.reset-size9.size1{font-size:.28935185em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size9.size2,.gradio-container-3-37-0 .katex .sizing.reset-size9.size2{font-size:.34722222em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size9.size3,.gradio-container-3-37-0 .katex .sizing.reset-size9.size3{font-size:.40509259em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size9.size4,.gradio-container-3-37-0 .katex .sizing.reset-size9.size4{font-size:.46296296em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size9.size5,.gradio-container-3-37-0 .katex .sizing.reset-size9.size5{font-size:.52083333em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size9.size6,.gradio-container-3-37-0 .katex .sizing.reset-size9.size6{font-size:.5787037em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size9.size7,.gradio-container-3-37-0 .katex .sizing.reset-size9.size7{font-size:.69444444em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size9.size8,.gradio-container-3-37-0 .katex 
.sizing.reset-size9.size8{font-size:.83333333em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size9.size9,.gradio-container-3-37-0 .katex .sizing.reset-size9.size9{font-size:1em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size9.size10,.gradio-container-3-37-0 .katex .sizing.reset-size9.size10{font-size:1.20023148em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size9.size11,.gradio-container-3-37-0 .katex .sizing.reset-size9.size11{font-size:1.43981481em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size10.size1,.gradio-container-3-37-0 .katex .sizing.reset-size10.size1{font-size:.24108004em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size10.size2,.gradio-container-3-37-0 .katex .sizing.reset-size10.size2{font-size:.28929605em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size10.size3,.gradio-container-3-37-0 .katex .sizing.reset-size10.size3{font-size:.33751205em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size10.size4,.gradio-container-3-37-0 .katex .sizing.reset-size10.size4{font-size:.38572806em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size10.size5,.gradio-container-3-37-0 .katex .sizing.reset-size10.size5{font-size:.43394407em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size10.size6,.gradio-container-3-37-0 .katex .sizing.reset-size10.size6{font-size:.48216008em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size10.size7,.gradio-container-3-37-0 .katex .sizing.reset-size10.size7{font-size:.57859209em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size10.size8,.gradio-container-3-37-0 .katex .sizing.reset-size10.size8{font-size:.69431051em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size10.size9,.gradio-container-3-37-0 .katex .sizing.reset-size10.size9{font-size:.83317261em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size10.size10,.gradio-container-3-37-0 .katex .sizing.reset-size10.size10{font-size:1em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size10.size11,.gradio-container-3-37-0 .katex .sizing.reset-size10.size11{font-size:1.19961427em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size11.size1,.gradio-container-3-37-0 .katex .sizing.reset-size11.size1{font-size:.20096463em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size11.size2,.gradio-container-3-37-0 .katex .sizing.reset-size11.size2{font-size:.24115756em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size11.size3,.gradio-container-3-37-0 .katex .sizing.reset-size11.size3{font-size:.28135048em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size11.size4,.gradio-container-3-37-0 .katex .sizing.reset-size11.size4{font-size:.32154341em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size11.size5,.gradio-container-3-37-0 .katex .sizing.reset-size11.size5{font-size:.36173633em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size11.size6,.gradio-container-3-37-0 .katex .sizing.reset-size11.size6{font-size:.40192926em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size11.size7,.gradio-container-3-37-0 .katex .sizing.reset-size11.size7{font-size:.48231511em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size11.size8,.gradio-container-3-37-0 .katex .sizing.reset-size11.size8{font-size:.57877814em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size11.size9,.gradio-container-3-37-0 .katex .sizing.reset-size11.size9{font-size:.69453376em}.gradio-container-3-37-0 .katex 
.fontsize-ensurer.reset-size11.size10,.gradio-container-3-37-0 .katex .sizing.reset-size11.size10{font-size:.83360129em}.gradio-container-3-37-0 .katex .fontsize-ensurer.reset-size11.size11,.gradio-container-3-37-0 .katex .sizing.reset-size11.size11{font-size:1em}.gradio-container-3-37-0 .katex .delimsizing.size1{font-family:KaTeX_Size1}.gradio-container-3-37-0 .katex .delimsizing.size2{font-family:KaTeX_Size2}.gradio-container-3-37-0 .katex .delimsizing.size3{font-family:KaTeX_Size3}.gradio-container-3-37-0 .katex .delimsizing.size4{font-family:KaTeX_Size4}.gradio-container-3-37-0 .katex .delimsizing.mult .delim-size1>span{font-family:KaTeX_Size1}.gradio-container-3-37-0 .katex .delimsizing.mult .delim-size4>span{font-family:KaTeX_Size4}.gradio-container-3-37-0 .katex .nulldelimiter{display:inline-block;width:.12em}.gradio-container-3-37-0 .katex .delimcenter,.gradio-container-3-37-0 .katex .op-symbol{position:relative}.gradio-container-3-37-0 .katex .op-symbol.small-op{font-family:KaTeX_Size1}.gradio-container-3-37-0 .katex .op-symbol.large-op{font-family:KaTeX_Size2}.gradio-container-3-37-0 .katex .accent>.vlist-t,.gradio-container-3-37-0 .katex .op-limits>.vlist-t{text-align:center}.gradio-container-3-37-0 .katex .accent .accent-body{position:relative}.gradio-container-3-37-0 .katex .accent .accent-body:not(.accent-full){width:0}.gradio-container-3-37-0 .katex .overlay{display:block}.gradio-container-3-37-0 .katex .mtable .vertical-separator{display:inline-block;min-width:1px}.gradio-container-3-37-0 .katex .mtable .arraycolsep{display:inline-block}.gradio-container-3-37-0 .katex .mtable .col-align-c>.vlist-t{text-align:center}.gradio-container-3-37-0 .katex .mtable .col-align-l>.vlist-t{text-align:left}.gradio-container-3-37-0 .katex .mtable .col-align-r>.vlist-t{text-align:right}.gradio-container-3-37-0 .katex .svg-align{text-align:left}.gradio-container-3-37-0 .katex svg{fill:currentColor;stroke:currentColor;fill-rule:nonzero;fill-opacity:1;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;display:block;height:inherit;position:absolute;width:100%}.gradio-container-3-37-0 .katex svg path{stroke:none}.gradio-container-3-37-0 .katex img{border-style:none;max-height:none;max-width:none;min-height:0;min-width:0}.gradio-container-3-37-0 .katex .stretchy{display:block;overflow:hidden;position:relative;width:100%}.gradio-container-3-37-0 .katex .stretchy:after,.gradio-container-3-37-0 .katex .stretchy:before{content:""}.gradio-container-3-37-0 .katex .hide-tail{overflow:hidden;position:relative;width:100%}.gradio-container-3-37-0 .katex .halfarrow-left{left:0;overflow:hidden;position:absolute;width:50.2%}.gradio-container-3-37-0 .katex .halfarrow-right{overflow:hidden;position:absolute;right:0;width:50.2%}.gradio-container-3-37-0 .katex .brace-left{left:0;overflow:hidden;position:absolute;width:25.1%}.gradio-container-3-37-0 .katex .brace-center{left:25%;overflow:hidden;position:absolute;width:50%}.gradio-container-3-37-0 .katex .brace-right{overflow:hidden;position:absolute;right:0;width:25.1%}.gradio-container-3-37-0 .katex .x-arrow-pad{padding:0 .5em}.gradio-container-3-37-0 .katex .cd-arrow-pad{padding:0 .55556em 0 .27778em}.gradio-container-3-37-0 .katex .mover,.gradio-container-3-37-0 .katex .munder,.gradio-container-3-37-0 .katex .x-arrow{text-align:center}.gradio-container-3-37-0 .katex .boxpad{padding:0 .3em}.gradio-container-3-37-0 .katex .fbox,.gradio-container-3-37-0 .katex 
.fcolorbox{border:.04em solid;box-sizing:border-box}.gradio-container-3-37-0 .katex .cancel-pad{padding:0 .2em}.gradio-container-3-37-0 .katex .cancel-lap{margin-left:-.2em;margin-right:-.2em}.gradio-container-3-37-0 .katex .sout{border-bottom-style:solid;border-bottom-width:.08em}.gradio-container-3-37-0 .katex .angl{border-right:.049em solid;border-top:.049em solid;box-sizing:border-box;margin-right:.03889em}.gradio-container-3-37-0 .katex .anglpad{padding:0 .03889em}.gradio-container-3-37-0 .katex .eqn-num:before{content:"(" counter(katexEqnNo) ")";counter-increment:katexEqnNo}.gradio-container-3-37-0 .katex .mml-eqn-num:before{content:"(" counter(mmlEqnNo) ")";counter-increment:mmlEqnNo}.gradio-container-3-37-0 .katex .mtr-glue{width:50%}.gradio-container-3-37-0 .katex .cd-vert-arrow{display:inline-block;position:relative}.gradio-container-3-37-0 .katex .cd-label-left{display:inline-block;position:absolute;right:calc(50% + .3em);text-align:left}.gradio-container-3-37-0 .katex .cd-label-right{display:inline-block;left:calc(50% + .3em);position:absolute;text-align:right}.gradio-container-3-37-0 .katex-display{display:block;margin:1em 0;text-align:center}.gradio-container-3-37-0 .katex-display>.katex{display:block;text-align:center;white-space:nowrap}.gradio-container-3-37-0 .katex-display>.katex>.katex-html{display:block;position:relative}.gradio-container-3-37-0 .katex-display>.katex>.katex-html>.tag{position:absolute;right:0}.gradio-container-3-37-0 .katex-display.leqno>.katex>.katex-html>.tag{left:0;right:auto}.gradio-container-3-37-0 .katex-display.fleqn>.katex{padding-left:2em;text-align:left}.gradio-container-3-37-0 body{counter-reset:katexEqnNo mmlEqnNo}span.svelte-15hifvz code[class*=language-],span.svelte-15hifvz pre[class*=language-]{font-size:var(--text-md)}.wrap.svelte-1fzvtqo.svelte-1fzvtqo{padding:var(--block-padding);width:100%;overflow-y:auto}.message-wrap.svelte-1fzvtqo.svelte-1fzvtqo{display:flex;flex-direction:column;gap:var(--spacing-xxl)}.message-wrap.svelte-1fzvtqo>div.svelte-1fzvtqo img{border-radius:13px;max-width:30vw}.message-wrap.svelte-1fzvtqo>div.svelte-1fzvtqo p:not(:first-child){margin-top:var(--spacing-xxl)}.message-wrap.svelte-1fzvtqo audio{width:100%}.message.svelte-1fzvtqo.svelte-1fzvtqo{position:relative;align-self:flex-start;border-width:1px;border-radius:var(--radius-xxl);background:var(--background-fill-secondary);padding:var(--spacing-xxl);width:calc(100% - var(--spacing-xxl));color:var(--body-text-color);font-size:var(--text-lg);line-height:var(--line-lg);overflow-wrap:break-word}.user.svelte-1fzvtqo.svelte-1fzvtqo{align-self:flex-end;border-bottom-right-radius:0}.bot.svelte-1fzvtqo.svelte-1fzvtqo{border-bottom-left-radius:0;padding-left:calc(2 * var(--spacing-xxl))}@media (max-width: 480px){.message.svelte-1fzvtqo.svelte-1fzvtqo{width:auto}.bot.svelte-1fzvtqo.svelte-1fzvtqo{padding-left:var(--spacing-xxl)}}.bot.svelte-1fzvtqo.svelte-1fzvtqo,.pending.svelte-1fzvtqo.svelte-1fzvtqo{border-color:var(--border-color-primary);background:var(--background-fill-secondary)}.user.svelte-1fzvtqo.svelte-1fzvtqo{border-color:var(--border-color-accent);background-color:var(--color-accent-soft)}.feedback.svelte-1fzvtqo.svelte-1fzvtqo{display:flex;position:absolute;top:var(--spacing-xl);right:calc(var(--spacing-xxl) + var(--spacing-xl));gap:var(--spacing-lg);font-size:var(--text-sm)}.feedback.svelte-1fzvtqo button.svelte-1fzvtqo{color:var(--body-text-color-subdued)}.feedback.svelte-1fzvtqo 
button.svelte-1fzvtqo:hover{color:var(--body-text-color)}.selectable.svelte-1fzvtqo.svelte-1fzvtqo{cursor:pointer}.pending.svelte-1fzvtqo.svelte-1fzvtqo{display:flex;justify-content:center;align-items:center;align-self:center;gap:2px}.dot-flashing.svelte-1fzvtqo.svelte-1fzvtqo{animation:svelte-1fzvtqo-dot-flashing 1s infinite linear alternate;border-radius:5px;background-color:var(--body-text-color);width:5px;height:5px;color:var(--body-text-color)}.dot-flashing.svelte-1fzvtqo.svelte-1fzvtqo:nth-child(2){animation-delay:.33s}.dot-flashing.svelte-1fzvtqo.svelte-1fzvtqo:nth-child(3){animation-delay:.66s}@media (max-width: 480px){.user.svelte-1fzvtqo.svelte-1fzvtqo{align-self:flex-end}.bot.svelte-1fzvtqo.svelte-1fzvtqo{align-self:flex-start;padding-left:var(--size-3)}}@keyframes svelte-1fzvtqo-dot-flashing{0%{opacity:.8}50%{opacity:.5}to{opacity:.8}}.message-wrap.svelte-1fzvtqo .message.svelte-1fzvtqo img{margin:var(--size-2);max-height:200px}.message-wrap.svelte-1fzvtqo .message.svelte-1fzvtqo a{color:var(--color-text-link);text-decoration:underline}.hide.svelte-1fzvtqo.svelte-1fzvtqo{display:none}.message-wrap.svelte-1fzvtqo pre[class*=language-],.message-wrap.svelte-1fzvtqo pre{margin-top:var(--spacing-sm);margin-bottom:var(--spacing-sm);box-shadow:none;border:none;border-radius:var(--radius-md);background-color:var(--chatbot-code-background-color);padding:var(--spacing-xl) 10px;direction:ltr}.message-wrap.svelte-1fzvtqo table,.message-wrap.svelte-1fzvtqo tr,.message-wrap.svelte-1fzvtqo td,.message-wrap.svelte-1fzvtqo th{margin-top:var(--spacing-sm);margin-bottom:var(--spacing-sm);padding:var(--spacing-xl)}.message-wrap.svelte-1fzvtqo .bot.svelte-1fzvtqo table,.message-wrap.svelte-1fzvtqo .bot.svelte-1fzvtqo tr,.message-wrap.svelte-1fzvtqo .bot.svelte-1fzvtqo td,.message-wrap.svelte-1fzvtqo .bot.svelte-1fzvtqo th{border:1px solid var(--border-color-primary)}.message-wrap.svelte-1fzvtqo .user.svelte-1fzvtqo table,.message-wrap.svelte-1fzvtqo .user.svelte-1fzvtqo tr,.message-wrap.svelte-1fzvtqo .user.svelte-1fzvtqo td,.message-wrap.svelte-1fzvtqo .user.svelte-1fzvtqo th{border:1px solid var(--border-color-accent)}.message-wrap.svelte-1fzvtqo ol,.message-wrap.svelte-1fzvtqo ul{padding-inline-start:2em}.message-wrap.svelte-1fzvtqo span.katex{font-size:var(--text-lg);direction:ltr}.message-wrap.svelte-1fzvtqo code>button{position:absolute;top:var(--spacing-md);right:var(--spacing-md);z-index:1;cursor:pointer;border-bottom-left-radius:var(--radius-sm);padding:5px;padding:var(--spacing-md);width:22px;height:22px}.message-wrap.svelte-1fzvtqo code>button>span{position:absolute;top:var(--spacing-md);right:var(--spacing-md);width:12px;height:12px}.message-wrap.svelte-1fzvtqo .check{position:absolute;top:0;right:0;opacity:0;z-index:var(--layer-top);transition:opacity .2s;background:var(--background-fill-primary);padding:var(--size-1);width:100%;height:100%;color:var(--body-text-color)}.message-wrap.svelte-1fzvtqo pre{position:relative}.icon-button.svelte-1fzvtqo.svelte-1fzvtqo{position:absolute;top:6px;right:6px}.wrapper.svelte-nab2ao{display:flex;position:relative;flex-direction:column;align-items:start;width:100%;height:100%} diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/test_util.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/test_util.py deleted file mode 100644 index 79bc095185c79313b238fb034ef746c7f67b9d93..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/test_util.py +++ 
/dev/null @@ -1,112 +0,0 @@ -import re -import sys -import traceback -from typing import NoReturn - -import pytest - -from .._util import ( - bytesify, - LocalProtocolError, - ProtocolError, - RemoteProtocolError, - Sentinel, - validate, -) - - -def test_ProtocolError() -> None: - with pytest.raises(TypeError): - ProtocolError("abstract base class") - - -def test_LocalProtocolError() -> None: - try: - raise LocalProtocolError("foo") - except LocalProtocolError as e: - assert str(e) == "foo" - assert e.error_status_hint == 400 - - try: - raise LocalProtocolError("foo", error_status_hint=418) - except LocalProtocolError as e: - assert str(e) == "foo" - assert e.error_status_hint == 418 - - def thunk() -> NoReturn: - raise LocalProtocolError("a", error_status_hint=420) - - try: - try: - thunk() - except LocalProtocolError as exc1: - orig_traceback = "".join(traceback.format_tb(sys.exc_info()[2])) - exc1._reraise_as_remote_protocol_error() - except RemoteProtocolError as exc2: - assert type(exc2) is RemoteProtocolError - assert exc2.args == ("a",) - assert exc2.error_status_hint == 420 - new_traceback = "".join(traceback.format_tb(sys.exc_info()[2])) - assert new_traceback.endswith(orig_traceback) - - -def test_validate() -> None: - my_re = re.compile(rb"(?P[0-9]+)\.(?P[0-9]+)") - with pytest.raises(LocalProtocolError): - validate(my_re, b"0.") - - groups = validate(my_re, b"0.1") - assert groups == {"group1": b"0", "group2": b"1"} - - # successful partial matches are an error - must match whole string - with pytest.raises(LocalProtocolError): - validate(my_re, b"0.1xx") - with pytest.raises(LocalProtocolError): - validate(my_re, b"0.1\n") - - -def test_validate_formatting() -> None: - my_re = re.compile(rb"foo") - - with pytest.raises(LocalProtocolError) as excinfo: - validate(my_re, b"", "oops") - assert "oops" in str(excinfo.value) - - with pytest.raises(LocalProtocolError) as excinfo: - validate(my_re, b"", "oops {}") - assert "oops {}" in str(excinfo.value) - - with pytest.raises(LocalProtocolError) as excinfo: - validate(my_re, b"", "oops {} xx", 10) - assert "oops 10 xx" in str(excinfo.value) - - -def test_make_sentinel() -> None: - class S(Sentinel, metaclass=Sentinel): - pass - - assert repr(S) == "S" - assert S == S - assert type(S).__name__ == "S" - assert S in {S} - assert type(S) is S - - class S2(Sentinel, metaclass=Sentinel): - pass - - assert repr(S2) == "S2" - assert S != S2 - assert S not in {S2} - assert type(S) is not type(S2) - - -def test_bytesify() -> None: - assert bytesify(b"123") == b"123" - assert bytesify(bytearray(b"123")) == b"123" - assert bytesify("123") == b"123" - - with pytest.raises(UnicodeEncodeError): - bytesify("\u1234") - - with pytest.raises(TypeError): - bytesify(10) diff --git a/spaces/DanielPinsk/StableDiffusion/README.md b/spaces/DanielPinsk/StableDiffusion/README.md deleted file mode 100644 index 8036971166566285c289f0e7267f0df7280d1362..0000000000000000000000000000000000000000 --- a/spaces/DanielPinsk/StableDiffusion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: StableDiffusion -emoji: 💩 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.3 -app_file: app.py -pinned: false -license: wtfpl ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DaniilMIPT/greenatomtest/app.py b/spaces/DaniilMIPT/greenatomtest/app.py deleted file mode 100644 index e97534827f7d3c9411342f2e6920cca7b64adfea..0000000000000000000000000000000000000000 --- 
a/spaces/DaniilMIPT/greenatomtest/app.py +++ /dev/null @@ -1,111 +0,0 @@ -import streamlit as st -import re -from pymorphy2 import MorphAnalyzer -from functools import lru_cache -from nltk.corpus import stopwords -from tqdm import tqdm -import pandas as pd -import numpy as np -from catboost import Pool -import nltk -from catboost import CatBoostClassifier -from catboost import CatBoostRegressor -model = CatBoostClassifier() -model.load_model('classifire_model_MVP.cbm') -model_reg = CatBoostRegressor() -model_reg.load_model('regressor_model_MVP.cbm') -nltk.download('stopwords') -st.markdown( - """ - - """, - unsafe_allow_html=True -) -st.markdown("

        Green Atom

        ", unsafe_allow_html=True) -st.markdown("

        Check the status of your comment

        ", unsafe_allow_html=True) - -start_text = "The Boys is one of the best Superhero shows I've ever seen. While Season 1 was the best season of the series, Season 2 and 3 were also both very good and absolutely worth watching. Season 3 was fantastic, Jensen Ackles was the perfect actor to add to this already incredible show! This show continues to amaze as it's not afraid to try new things and is a show that is definitely for adults. It has no problems being offensive and making you feel squeamish. You don't even have to be a fan of superhero shows to enjoy this. It's violent, funny, thrilling, etc.. Everything you want in a good superhero show. Season 4 just added Jeffrey Dean Morgan to the cast. Another great addition to an already incredible cast. I don't know what else I can say except I absolutely love this series and can't wait for more to come!" -#st.markdown("
        Enter your review text and click Ctrl+Enter
        ", unsafe_allow_html=True) -text = st.text_area("Enter your review text and click Ctrl+Enter",start_text) -#text = st.text_area(':white[Enter your review text and click Ctrl+Enter]',start_text) -print(text) - -data = pd.DataFrame({'text': [text]}) -m = MorphAnalyzer() -regex = re.compile("[А-Яа-яA-z]+") -mystopwords = stopwords.words('english') - - -def words_only(text, regex=regex): - try: - return regex.findall(text.lower()) - except: - return [] - - -@lru_cache(maxsize=128) -def lemmatize_word(token, pymorphy=m): - return pymorphy.parse(token)[0].normal_form - - -def lemmatize_text(text): - return [lemmatize_word(w) for w in text] - - -def remove_stopwords(lemmas, stopwords=mystopwords): - return [w for w in lemmas if not w in stopwords and len(w) > 3] - - -def clean_text(text): - tokens = words_only(text) - lemmas = lemmatize_text(tokens) - - return ' '.join(remove_stopwords(lemmas)) - - -from multiprocessing import Pool as PoolSklearn - -with PoolSklearn(4) as p: - lemmas = list(tqdm(p.imap(clean_text, data['text']), total=len(data))) - -data['text_lemmas'] = lemmas - -data['sym_len'] = data.text_lemmas.apply(len) -data['word_len'] = data.text_lemmas.apply(lambda x: len(x.split())) -data['sym_len'] = np.log(data['sym_len']) -data['word_len'] = np.log(data['word_len']) - -test_pool = Pool( - data, - text_features=['text', 'text_lemmas'], -) - -y_pred = model.predict(test_pool) -y_pred_reg = model_reg.predict(test_pool) -arr = np.round(y_pred_reg, 0).astype(int) -arr_1 = [] -for i in range(len(arr)): - if arr[i]<=0: - arr_1.append(1) - elif arr[i]>10: - arr_1.append(10) - else: - arr_1.append(arr[i]) - -ans = '' -if y_pred[0] == 1: - ans = 'positive' -else: - ans = 'negative' -st.write('Estimated revocation status :', ans) -st.write('Estimated comment rating :', arr_1[0]) diff --git a/spaces/Dao3/MBTI_Test/app.py b/spaces/Dao3/MBTI_Test/app.py deleted file mode 100644 index de3b99aa55287a8d4d34ab108ea15c4224e42a37..0000000000000000000000000000000000000000 --- a/spaces/Dao3/MBTI_Test/app.py +++ /dev/null @@ -1,47 +0,0 @@ -import gradio as gr -import os -import openai - -# 请记得要把 api 的 key 放到 settings 下面的 Repository Secrets 里。 -# 目前有个特别奇怪的问题: duplicate 的 key 如果和原来的 key 重名,build 就会失败。不知是否是今天正在 migrating 的原因。 -# 作为 workaround,请对 key 使用一个不同的名字,并且记得修改下面这行代码中的 key 的名字。 -openai.api_key = os.getenv("key4") - - - - - -# 如果你只打算通过 prompt 来定制机器人的行为,只需要修改这段 prompt 就够了。 -prompt = '你现在是一个MBTI测试助理,现在我要开始MBTI测试。你要通过5个问题来测试我的mbti人格类型,每个问题要有abcd四个选项。5个问题内容必须是不同的方面。而且每个问题的选项都和其他问题的选项不同。等我直接回复这六个问题我的答案之后,你帮我给出人格类型分析结果,并对这个结果的人格特征做详细描述,并且告诉我这个人格类型适合和什么类型的人做朋友,适合和什么样类型的人做情侣。注意,一次性发给我这5个问题以及问题相应的四个选项,在我回复答案后你再回复结果和描述。注意,每次回答的时候可能是我的不同朋友来做测试,你必须对每次回答你刚刚设计的问题,单独分析每次答案的人格类型,不要参考过往对话中的答案。注意,全程使用中文。在我说完第一句话后你就开始出题,然后我做答,你再回复我的人格类型。如果你理解了,我们现在就开始,' - -# 修改本函数,来实现你自己的 chatbot -# p: 对机器人说话的内容 -# qid: 当前消息的唯一标识。例如 `'bxqid-cManAtRMszw...'`。由平台生成并传递给机器人,以便机器人区分单个问题(写日志、追踪调试、异步回调等)。同步调用可忽略。 -# uid: 用户的唯一标识。例如`'bxuid-Aj8Spso8Xsp...'`。由平台生成并传递给机器人,以便机器人区分用户。可被用于实现多轮对话的功能。 -# 返回值:[type, content] -# 详见 https://huggingface.co/spaces/baixing/hackathon_test/blob/main/bot-api.md -def chat(p, qid, uid): - return ["text", callapi(p)] - -def callapi(p): - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages= [{"role":"system", "content":prompt}, - {"role":"user", "content":p} - ] - ) - print(response) - response = response["choices"][0]["message"]["content"] - while response.startswith("\n"): - response = response[1:] - return response - - - -iface = gr.Interface(fn=chat, - inputs=["text", "text", 
"text"], - outputs=["text", "text"], - description="""我是人格测试助手,在瀛海威广场的多轮对话中可以做人格测试哦~ [瀛海威广场](https://huggingface.co/spaces/baixing/hackathon_test),需要填写的api是 https://dao3-mbti-test.hf.space/run/predict - """) - -iface.launch() \ No newline at end of file diff --git a/spaces/Datasculptor/StyleGAN-NADA/e4e/models/stylegan2/op/upfirdn2d.py b/spaces/Datasculptor/StyleGAN-NADA/e4e/models/stylegan2/op/upfirdn2d.py deleted file mode 100644 index 7bc5a1e331c2bbb1893ac748cfd0f144ff0651b4..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/StyleGAN-NADA/e4e/models/stylegan2/op/upfirdn2d.py +++ /dev/null @@ -1,184 +0,0 @@ -import os - -import torch -from torch.autograd import Function -from torch.utils.cpp_extension import load - -module_path = os.path.dirname(__file__) -upfirdn2d_op = load( - 'upfirdn2d', - sources=[ - os.path.join(module_path, 'upfirdn2d.cpp'), - os.path.join(module_path, 'upfirdn2d_kernel.cu'), - ], -) - - -class UpFirDn2dBackward(Function): - @staticmethod - def forward( - ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size - ): - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_op.upfirdn2d( - grad_output, - grad_kernel, - down_x, - down_y, - up_x, - up_y, - g_pad_x0, - g_pad_x1, - g_pad_y0, - g_pad_y1, - ) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - kernel, = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_op.upfirdn2d( - gradgrad_input, - kernel, - ctx.up_x, - ctx.up_y, - ctx.down_x, - ctx.down_y, - ctx.pad_x0, - ctx.pad_x1, - ctx.pad_y0, - ctx.pad_y1, - ) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view( - ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1] - ) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_op.upfirdn2d( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 - ) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, 
out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - out = UpFirDn2d.apply( - input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1]) - ) - - return out - - -def upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - - return out[:, ::down_y, ::down_x, :] diff --git a/spaces/Datasculptor/stabilityai-stable-diffusion-2-1/README.md b/spaces/Datasculptor/stabilityai-stable-diffusion-2-1/README.md deleted file mode 100644 index 0ba820f4ea0c89aaa69eaaf144219503cf120e26..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/stabilityai-stable-diffusion-2-1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Stabilityai Stable Diffusion 2 1 -emoji: 🏃 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Dauzy/whisper-webui/src/source.py b/spaces/Dauzy/whisper-webui/src/source.py deleted file mode 100644 index e304e278bfae8ef289c999fc76311ce01b547991..0000000000000000000000000000000000000000 --- a/spaces/Dauzy/whisper-webui/src/source.py +++ /dev/null @@ -1,80 +0,0 @@ -# Gradio seems to truncate files without keeping the extension, so we need to truncate the file prefix ourself -import os -import pathlib -from typing import List -import zipfile - -import ffmpeg -from more_itertools import unzip - -from src.download import ExceededMaximumDuration, download_url - -MAX_FILE_PREFIX_LENGTH = 17 - -class AudioSource: - def __init__(self, source_path, source_name = None, audio_duration = None): - self.source_path = source_path - self.source_name = source_name - self._audio_duration = audio_duration - - # Load source name if not provided - if (self.source_name is None): - file_path = pathlib.Path(self.source_path) - self.source_name = file_path.name - - def get_audio_duration(self): - if self._audio_duration is None: - self._audio_duration = float(ffmpeg.probe(self.source_path)["format"]["duration"]) - - return self._audio_duration - - def get_full_name(self): - return self.source_name - - def get_short_name(self, max_length: int = MAX_FILE_PREFIX_LENGTH): - file_path = pathlib.Path(self.source_name) - short_name = file_path.stem[:max_length] + file_path.suffix - - return short_name - - def 
__str__(self) -> str: - return self.source_path - -class AudioSourceCollection: - def __init__(self, sources: List[AudioSource]): - self.sources = sources - - def __iter__(self): - return iter(self.sources) - -def get_audio_source_collection(urlData: str, multipleFiles: List, microphoneData: str, input_audio_max_duration: float = -1) -> List[AudioSource]: - output: List[AudioSource] = [] - - if urlData: - # Download from YouTube. This could also be a playlist or a channel. - output.extend([ AudioSource(x) for x in download_url(urlData, input_audio_max_duration, playlistItems=None) ]) - else: - # Add input files - if (multipleFiles is not None): - output.extend([ AudioSource(x.name) for x in multipleFiles ]) - if (microphoneData is not None): - output.append(AudioSource(microphoneData)) - - total_duration = 0 - - # Calculate total audio length. We do this even if input_audio_max_duration - # is disabled to ensure that all the audio files are valid. - for source in output: - audioDuration = ffmpeg.probe(source.source_path)["format"]["duration"] - total_duration += float(audioDuration) - - # Save audio duration - source._audio_duration = float(audioDuration) - - # Ensure the total duration of the audio is not too long - if input_audio_max_duration > 0: - if float(total_duration) > input_audio_max_duration: - raise ExceededMaximumDuration(videoDuration=total_duration, maxDuration=input_audio_max_duration, message="Video(s) is too long") - - # Return a list of audio sources - return output \ No newline at end of file diff --git a/spaces/Deepak7376/demo-sapce/README.md b/spaces/Deepak7376/demo-sapce/README.md deleted file mode 100644 index bba17e5bd55a20da9d34ba08dd6c39dc7d2bad2c..0000000000000000000000000000000000000000 --- a/spaces/Deepak7376/demo-sapce/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Demo Sapce -emoji: 🐢 -colorFrom: red -colorTo: pink -sdk: streamlit -sdk_version: 1.27.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DiamondYin/AnewGame/README.md b/spaces/DiamondYin/AnewGame/README.md deleted file mode 100644 index f6b26c5f77aac98fb7866dcd373ade01cd807a6b..0000000000000000000000000000000000000000 --- a/spaces/DiamondYin/AnewGame/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: WaliGameFPS -emoji: ⚡ -colorFrom: indigo -colorTo: green -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Docfile/open_llm_leaderboard/app.py b/spaces/Docfile/open_llm_leaderboard/app.py deleted file mode 100644 index cfdbfe18af16d774ffd4045c9b45ace4847c6e44..0000000000000000000000000000000000000000 --- a/spaces/Docfile/open_llm_leaderboard/app.py +++ /dev/null @@ -1,591 +0,0 @@ -import json -import os -from datetime import datetime, timezone - -import gradio as gr -import pandas as pd -from apscheduler.schedulers.background import BackgroundScheduler -from huggingface_hub import HfApi - -from src.assets.css_html_js import custom_css, get_window_url_params -from src.assets.text_content import ( - CITATION_BUTTON_LABEL, - CITATION_BUTTON_TEXT, - EVALUATION_QUEUE_TEXT, - INTRODUCTION_TEXT, - LLM_BENCHMARKS_TEXT, - TITLE, -) -from src.display_models.get_model_metadata import DO_NOT_SUBMIT_MODELS, ModelType -from src.display_models.utils import ( - AutoEvalColumn, - EvalQueueColumn, - fields, - styled_error, - styled_message, - styled_warning, -) -from src.load_from_hub 
import get_evaluation_queue_df, get_leaderboard_df, is_model_on_hub, load_all_info_from_hub -from src.rate_limiting import user_submission_permission - -pd.set_option("display.precision", 1) - -# clone / pull the lmeh eval data -H4_TOKEN = os.environ.get("H4_TOKEN", None) - -QUEUE_REPO = "open-llm-leaderboard/requests" -RESULTS_REPO = "open-llm-leaderboard/results" - -PRIVATE_QUEUE_REPO = "open-llm-leaderboard/private-requests" -PRIVATE_RESULTS_REPO = "open-llm-leaderboard/private-results" - -IS_PUBLIC = bool(os.environ.get("IS_PUBLIC", True)) - -EVAL_REQUESTS_PATH = "eval-queue" -EVAL_RESULTS_PATH = "eval-results" - -EVAL_REQUESTS_PATH_PRIVATE = "eval-queue-private" -EVAL_RESULTS_PATH_PRIVATE = "eval-results-private" - -api = HfApi(token=H4_TOKEN) - - -def restart_space(): - api.restart_space(repo_id="HuggingFaceH4/open_llm_leaderboard", token=H4_TOKEN) - -# Rate limit variables -RATE_LIMIT_PERIOD = 7 -RATE_LIMIT_QUOTA = 5 - -# Column selection -COLS = [c.name for c in fields(AutoEvalColumn) if not c.hidden] -TYPES = [c.type for c in fields(AutoEvalColumn) if not c.hidden] -COLS_LITE = [c.name for c in fields(AutoEvalColumn) if c.displayed_by_default and not c.hidden] -TYPES_LITE = [c.type for c in fields(AutoEvalColumn) if c.displayed_by_default and not c.hidden] - -if not IS_PUBLIC: - COLS.insert(2, AutoEvalColumn.precision.name) - TYPES.insert(2, AutoEvalColumn.precision.type) - -EVAL_COLS = [c.name for c in fields(EvalQueueColumn)] -EVAL_TYPES = [c.type for c in fields(EvalQueueColumn)] - -BENCHMARK_COLS = [ - c.name - for c in [ - AutoEvalColumn.arc, - AutoEvalColumn.hellaswag, - AutoEvalColumn.mmlu, - AutoEvalColumn.truthfulqa, - ] -] - -## LOAD INFO FROM HUB -eval_queue, requested_models, eval_results, users_to_submission_dates = load_all_info_from_hub( - QUEUE_REPO, RESULTS_REPO, EVAL_REQUESTS_PATH, EVAL_RESULTS_PATH -) - -if not IS_PUBLIC: - (eval_queue_private, requested_models_private, eval_results_private, _) = load_all_info_from_hub( - PRIVATE_QUEUE_REPO, - PRIVATE_RESULTS_REPO, - EVAL_REQUESTS_PATH_PRIVATE, - EVAL_RESULTS_PATH_PRIVATE, - ) -else: - eval_queue_private, eval_results_private = None, None - -original_df = get_leaderboard_df(eval_results, eval_results_private, COLS, BENCHMARK_COLS) -models = original_df["model_name_for_query"].tolist() # needed for model backlinks in their to the leaderboard - -to_be_dumped = f"models = {repr(models)}\n" - -# with open("models_backlinks.py", "w") as f: -# f.write(to_be_dumped) - -# print(to_be_dumped) - -leaderboard_df = original_df.copy() -( - finished_eval_queue_df, - running_eval_queue_df, - pending_eval_queue_df, -) = get_evaluation_queue_df(eval_queue, eval_queue_private, EVAL_REQUESTS_PATH, EVAL_COLS) - -print(leaderboard_df["Precision"].unique()) - - -## INTERACTION FUNCTIONS -def add_new_eval( - model: str, - base_model: str, - revision: str, - precision: str, - private: bool, - weight_type: str, - model_type: str, -): - precision = precision.split(" ")[0] - current_time = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ") - - num_models_submitted_in_period = user_submission_permission(model, users_to_submission_dates, RATE_LIMIT_PERIOD) - if num_models_submitted_in_period > RATE_LIMIT_QUOTA: - error_msg = f"Organisation or user `{model.split('/')[0]}`" - error_msg += f"already has {num_models_submitted_in_period} model requests submitted to the leaderboard " - error_msg += f"in the last {RATE_LIMIT_PERIOD} days.\n" - error_msg += "Please wait a couple of days before resubmitting, so that everybody can enjoy using 
the leaderboard 🤗" - return styled_error(error_msg) - - if model_type is None or model_type == "": - return styled_error("Please select a model type.") - - # check the model actually exists before adding the eval - if revision == "": - revision = "main" - - if weight_type in ["Delta", "Adapter"]: - base_model_on_hub, error = is_model_on_hub(base_model, revision) - if not base_model_on_hub: - return styled_error(f'Base model "{base_model}" {error}') - - if not weight_type == "Adapter": - model_on_hub, error = is_model_on_hub(model, revision) - if not model_on_hub: - return styled_error(f'Model "{model}" {error}') - - print("adding new eval") - - eval_entry = { - "model": model, - "base_model": base_model, - "revision": revision, - "private": private, - "precision": precision, - "weight_type": weight_type, - "status": "PENDING", - "submitted_time": current_time, - "model_type": model_type, - } - - user_name = "" - model_path = model - if "/" in model: - user_name = model.split("/")[0] - model_path = model.split("/")[1] - - OUT_DIR = f"{EVAL_REQUESTS_PATH}/{user_name}" - os.makedirs(OUT_DIR, exist_ok=True) - out_path = f"{OUT_DIR}/{model_path}_eval_request_{private}_{precision}_{weight_type}.json" - - # Check if the model has been forbidden: - if out_path.split("eval-queue/")[1] in DO_NOT_SUBMIT_MODELS: - return styled_warning("Model authors have requested that their model be not submitted on the leaderboard.") - - # Check for duplicate submission - if f"{model}_{revision}_{precision}" in requested_models: - return styled_warning("This model has been already submitted.") - - with open(out_path, "w") as f: - f.write(json.dumps(eval_entry)) - - api.upload_file( - path_or_fileobj=out_path, - path_in_repo=out_path.split("eval-queue/")[1], - repo_id=QUEUE_REPO, - repo_type="dataset", - commit_message=f"Add {model} to eval queue", - ) - - # remove the local file - os.remove(out_path) - - return styled_message( - "Your request has been submitted to the evaluation queue!\nPlease wait for up to an hour for the model to show in the PENDING list." 
- ) - - -# Basics -def change_tab(query_param: str): - query_param = query_param.replace("'", '"') - query_param = json.loads(query_param) - - if isinstance(query_param, dict) and "tab" in query_param and query_param["tab"] == "evaluation": - return gr.Tabs.update(selected=1) - else: - return gr.Tabs.update(selected=0) - - -# Searching and filtering -def update_table(hidden_df: pd.DataFrame, current_columns_df: pd.DataFrame, columns: list, type_query: list, precision_query: str, size_query: list, show_deleted: bool, query: str): - filtered_df = filter_models(hidden_df, type_query, size_query, precision_query, show_deleted) - if query != "": - filtered_df = search_table(filtered_df, query) - df = select_columns(filtered_df, columns) - - return df - -def search_table(df: pd.DataFrame, query: str) -> pd.DataFrame: - return df[(df[AutoEvalColumn.dummy.name].str.contains(query, case=False))] - -def select_columns(df: pd.DataFrame, columns: list) -> pd.DataFrame: - always_here_cols = [ - AutoEvalColumn.model_type_symbol.name, - AutoEvalColumn.model.name, - ] - # We use COLS to maintain sorting - filtered_df = df[ - always_here_cols + [c for c in COLS if c in df.columns and c in columns] + [AutoEvalColumn.dummy.name] - ] - return filtered_df - -NUMERIC_INTERVALS = { - "Unknown": pd.Interval(-1, 0, closed="right"), - "< 1.5B": pd.Interval(0, 1.5, closed="right"), - "~3B": pd.Interval(1.5, 5, closed="right"), - "~7B": pd.Interval(6, 11, closed="right"), - "~13B": pd.Interval(12, 15, closed="right"), - "~35B": pd.Interval(16, 55, closed="right"), - "60B+": pd.Interval(55, 10000, closed="right"), -} - -def filter_models( - df: pd.DataFrame, type_query: list, size_query: list, precision_query: list, show_deleted: bool -) -> pd.DataFrame: - # Show all models - if show_deleted: - filtered_df = df - else: # Show only still on the hub models - filtered_df = df[df[AutoEvalColumn.still_on_hub.name] == True] - - type_emoji = [t[0] for t in type_query] - filtered_df = filtered_df[df[AutoEvalColumn.model_type_symbol.name].isin(type_emoji)] - filtered_df = filtered_df[df[AutoEvalColumn.precision.name].isin(precision_query)] - - numeric_interval = pd.IntervalIndex(sorted([NUMERIC_INTERVALS[s] for s in size_query])) - params_column = pd.to_numeric(df[AutoEvalColumn.params.name], errors="coerce") - mask = params_column.apply(lambda x: any(numeric_interval.contains(x))) - filtered_df = filtered_df.loc[mask] - - return filtered_df - - -demo = gr.Blocks(css=custom_css) -with demo: - gr.HTML(TITLE) - gr.Markdown(INTRODUCTION_TEXT, elem_classes="markdown-text") - - with gr.Tabs(elem_classes="tab-buttons") as tabs: - with gr.TabItem("🏅 LLM Benchmark", elem_id="llm-benchmark-tab-table", id=0): - with gr.Row(): - with gr.Column(): - with gr.Row(): - search_bar = gr.Textbox( - placeholder=" 🔍 Search for your model and press ENTER...", - show_label=False, - elem_id="search-bar", - ) - with gr.Row(): - shown_columns = gr.CheckboxGroup( - choices=[ - c - for c in COLS - if c - not in [ - AutoEvalColumn.dummy.name, - AutoEvalColumn.model.name, - AutoEvalColumn.model_type_symbol.name, - AutoEvalColumn.still_on_hub.name, - ] - ], - value=[ - c - for c in COLS_LITE - if c - not in [ - AutoEvalColumn.dummy.name, - AutoEvalColumn.model.name, - AutoEvalColumn.model_type_symbol.name, - AutoEvalColumn.still_on_hub.name, - ] - ], - label="Select columns to show", - elem_id="column-select", - interactive=True, - ) - with gr.Row(): - deleted_models_visibility = gr.Checkbox( - value=True, label="Show gated/private/deleted models", 
interactive=True - ) - with gr.Column(min_width=320): - with gr.Box(elem_id="box-filter"): - filter_columns_type = gr.CheckboxGroup( - label="Model types", - choices=[ - ModelType.PT.to_str(), - ModelType.FT.to_str(), - ModelType.IFT.to_str(), - ModelType.RL.to_str(), - ], - value=[ - ModelType.PT.to_str(), - ModelType.FT.to_str(), - ModelType.IFT.to_str(), - ModelType.RL.to_str(), - ], - interactive=True, - elem_id="filter-columns-type", - ) - filter_columns_precision = gr.CheckboxGroup( - label="Precision", - choices=["torch.float16", "torch.bfloat16", "torch.float32", "8bit", "4bit", "GPTQ"], - value=["torch.float16", "torch.bfloat16", "torch.float32", "8bit", "4bit", "GPTQ"], - interactive=True, - elem_id="filter-columns-precision", - ) - filter_columns_size = gr.CheckboxGroup( - label="Model sizes", - choices=list(NUMERIC_INTERVALS.keys()), - value=list(NUMERIC_INTERVALS.keys()), - interactive=True, - elem_id="filter-columns-size", - ) - - leaderboard_table = gr.components.Dataframe( - value=leaderboard_df[ - [AutoEvalColumn.model_type_symbol.name, AutoEvalColumn.model.name] - + shown_columns.value - + [AutoEvalColumn.dummy.name] - ], - headers=[ - AutoEvalColumn.model_type_symbol.name, - AutoEvalColumn.model.name, - ] - + shown_columns.value - + [AutoEvalColumn.dummy.name], - datatype=TYPES, - max_rows=None, - elem_id="leaderboard-table", - interactive=False, - visible=True, - ) - - # Dummy leaderboard for handling the case when the user uses backspace key - hidden_leaderboard_table_for_search = gr.components.Dataframe( - value=original_df, - headers=COLS, - datatype=TYPES, - max_rows=None, - visible=False, - ) - search_bar.submit( - update_table, - [ - hidden_leaderboard_table_for_search, - leaderboard_table, - shown_columns, - filter_columns_type, - filter_columns_precision, - filter_columns_size, - deleted_models_visibility, - search_bar, - ], - leaderboard_table, - ) - shown_columns.change( - update_table, - [ - hidden_leaderboard_table_for_search, - leaderboard_table, - shown_columns, - filter_columns_type, - filter_columns_precision, - filter_columns_size, - deleted_models_visibility, - search_bar, - ], - leaderboard_table, - queue=True, - ) - filter_columns_type.change( - update_table, - [ - hidden_leaderboard_table_for_search, - leaderboard_table, - shown_columns, - filter_columns_type, - filter_columns_precision, - filter_columns_size, - deleted_models_visibility, - search_bar, - ], - leaderboard_table, - queue=True, - ) - filter_columns_precision.change( - update_table, - [ - hidden_leaderboard_table_for_search, - leaderboard_table, - shown_columns, - filter_columns_type, - filter_columns_precision, - filter_columns_size, - deleted_models_visibility, - search_bar, - ], - leaderboard_table, - queue=True, - ) - filter_columns_size.change( - update_table, - [ - hidden_leaderboard_table_for_search, - leaderboard_table, - shown_columns, - filter_columns_type, - filter_columns_precision, - filter_columns_size, - deleted_models_visibility, - search_bar, - ], - leaderboard_table, - queue=True, - ) - deleted_models_visibility.change( - update_table, - [ - hidden_leaderboard_table_for_search, - leaderboard_table, - shown_columns, - filter_columns_type, - filter_columns_precision, - filter_columns_size, - deleted_models_visibility, - search_bar, - ], - leaderboard_table, - queue=True, - ) - with gr.TabItem("📝 About", elem_id="llm-benchmark-tab-table", id=2): - gr.Markdown(LLM_BENCHMARKS_TEXT, elem_classes="markdown-text") - - with gr.TabItem("🚀 Submit here! 
", elem_id="llm-benchmark-tab-table", id=3): - with gr.Column(): - with gr.Row(): - gr.Markdown(EVALUATION_QUEUE_TEXT, elem_classes="markdown-text") - - with gr.Column(): - with gr.Accordion( - f"✅ Finished Evaluations ({len(finished_eval_queue_df)})", - open=False, - ): - with gr.Row(): - finished_eval_table = gr.components.Dataframe( - value=finished_eval_queue_df, - headers=EVAL_COLS, - datatype=EVAL_TYPES, - max_rows=5, - ) - with gr.Accordion( - f"🔄 Running Evaluation Queue ({len(running_eval_queue_df)})", - open=False, - ): - with gr.Row(): - running_eval_table = gr.components.Dataframe( - value=running_eval_queue_df, - headers=EVAL_COLS, - datatype=EVAL_TYPES, - max_rows=5, - ) - - with gr.Accordion( - f"⏳ Pending Evaluation Queue ({len(pending_eval_queue_df)})", - open=False, - ): - with gr.Row(): - pending_eval_table = gr.components.Dataframe( - value=pending_eval_queue_df, - headers=EVAL_COLS, - datatype=EVAL_TYPES, - max_rows=5, - ) - with gr.Row(): - gr.Markdown("# ✉️✨ Submit your model here!", elem_classes="markdown-text") - - with gr.Row(): - with gr.Column(): - model_name_textbox = gr.Textbox(label="Model name") - revision_name_textbox = gr.Textbox(label="revision", placeholder="main") - private = gr.Checkbox(False, label="Private", visible=not IS_PUBLIC) - model_type = gr.Dropdown( - choices=[ - ModelType.PT.to_str(" : "), - ModelType.FT.to_str(" : "), - ModelType.IFT.to_str(" : "), - ModelType.RL.to_str(" : "), - ], - label="Model type", - multiselect=False, - value=None, - interactive=True, - ) - - with gr.Column(): - precision = gr.Dropdown( - choices=[ - "float16", - "bfloat16", - "8bit (LLM.int8)", - "4bit (QLoRA / FP4)", - "GPTQ" - ], - label="Precision", - multiselect=False, - value="float16", - interactive=True, - ) - weight_type = gr.Dropdown( - choices=["Original", "Delta", "Adapter"], - label="Weights type", - multiselect=False, - value="Original", - interactive=True, - ) - base_model_name_textbox = gr.Textbox(label="Base model (for delta or adapter weights)") - - submit_button = gr.Button("Submit Eval") - submission_result = gr.Markdown() - submit_button.click( - add_new_eval, - [ - model_name_textbox, - base_model_name_textbox, - revision_name_textbox, - precision, - private, - weight_type, - model_type, - ], - submission_result, - ) - - with gr.Row(): - with gr.Accordion("📙 Citation", open=False): - citation_button = gr.Textbox( - value=CITATION_BUTTON_TEXT, - label=CITATION_BUTTON_LABEL, - elem_id="citation-button", - ).style(show_copy_button=True) - - dummy = gr.Textbox(visible=False) - demo.load( - change_tab, - dummy, - tabs, - _js=get_window_url_params, - ) - -scheduler = BackgroundScheduler() -scheduler.add_job(restart_space, "interval", seconds=1800) -scheduler.start() -demo.queue(concurrency_count=40).launch() diff --git a/spaces/DragGan/DragGan/stylegan_human/training_scripts/sg2/training/dataset.py b/spaces/DragGan/DragGan/stylegan_human/training_scripts/sg2/training/dataset.py deleted file mode 100644 index 8c6b05e09d9b1d8c386e36e0f47a44cb1290e1d9..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/training_scripts/sg2/training/dataset.py +++ /dev/null @@ -1,257 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. 
Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import numpy as np -import zipfile -import PIL.Image -import json -import torch -import dnnlib -import cv2 -from collections import Counter - - -try: - import pyspng -except ImportError: - pyspng = None - -#---------------------------------------------------------------------------- - -class Dataset(torch.utils.data.Dataset): - def __init__(self, - name, # Name of the dataset. - raw_shape, # Shape of the raw image data (NCHW). - max_size = None, # Artificially limit the size of the dataset. None = no limit. Applied before xflip. - use_labels = False, # Enable conditioning labels? False = label dimension is zero. - xflip = False, # Artificially double the size of the dataset via x-flips. Applied after max_size. - random_seed = 0, # Random seed to use when applying max_size. - square = False, - ): - # print(' Inside Dataset ') - self._name = name - self._raw_shape = list(raw_shape) - self._use_labels = use_labels - self._raw_labels = None - self._label_shape = None - self._square = square - - # Apply max_size. - self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64) - if (max_size is not None) and (self._raw_idx.size > max_size): - np.random.RandomState(random_seed).shuffle(self._raw_idx) - self._raw_idx = np.sort(self._raw_idx[:max_size]) - - # Apply xflip. - self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8) - if xflip: - self._raw_idx = np.tile(self._raw_idx, 2) - self._xflip = np.concatenate([self._xflip, np.ones_like(self._xflip)]) - - def _get_raw_labels(self): - if self._raw_labels is None: - self._raw_labels = self._load_raw_labels() if self._use_labels else None - if self._raw_labels is None: - self._raw_labels = np.zeros([self._raw_shape[0], 0], dtype=np.float32) - assert isinstance(self._raw_labels, np.ndarray) - assert self._raw_labels.shape[0] == self._raw_shape[0] - assert self._raw_labels.dtype in [np.float32, np.int64] - if self._raw_labels.dtype == np.int64: - assert self._raw_labels.ndim == 1 - assert np.all(self._raw_labels >= 0) - return self._raw_labels - - def close(self): # to be overridden by subclass - pass - - def _load_raw_image(self, raw_idx): # to be overridden by subclass - raise NotImplementedError - - def _load_raw_labels(self): # to be overridden by subclass - raise NotImplementedError - - def __getstate__(self): - return dict(self.__dict__, _raw_labels=None) - - def __del__(self): - try: - self.close() - except: - pass - - def __len__(self): - return self._raw_idx.size - - def __getitem__(self, idx): - image = self._load_raw_image(self._raw_idx[idx]) - assert isinstance(image, np.ndarray) - assert list(image.shape) == self.image_shape - assert image.dtype == np.uint8 - if self._xflip[idx]: - assert image.ndim == 3 # CHW - image = image[:, :, ::-1] - return image.copy(), self.get_label(idx) - - def get_label(self, idx): - label = self._get_raw_labels()[self._raw_idx[idx]] - if label.dtype == np.int64: - onehot = np.zeros(self.label_shape, dtype=np.float32) - onehot[label] = 1 - label = onehot - return label.copy() - - def get_details(self, idx): - d = dnnlib.EasyDict() - d.raw_idx = int(self._raw_idx[idx]) - d.xflip = (int(self._xflip[idx]) != 0) - d.raw_label = self._get_raw_labels()[d.raw_idx].copy() - return d - - @property - def name(self): - return self._name - - @property - def image_shape(self): - return list(self._raw_shape[1:]) - - @property 
- def num_channels(self): - assert len(self.image_shape) == 3 # CHW - return self.image_shape[0] - - @property - def resolution(self): - assert len(self.image_shape) == 3 # CHW - if self._square: - assert self.image_shape[1] == self.image_shape[2] - else: - assert self.image_shape[1] == self.image_shape[2] * 2 - return self.image_shape[1] - - @property - def label_shape(self): - if self._label_shape is None: - raw_labels = self._get_raw_labels() - if raw_labels.dtype == np.int64: - self._label_shape = [int(np.max(raw_labels)) + 1] - else: - self._label_shape = raw_labels.shape[1:] - return list(self._label_shape) - - @property - def label_dim(self): - assert len(self.label_shape) == 1 - return self.label_shape[0] - - @property - def has_labels(self): - return any(x != 0 for x in self.label_shape) - - @property - def has_onehot_labels(self): - return self._get_raw_labels().dtype == np.int64 - -#---------------------------------------------------------------------------- - -class ImageFolderDataset(Dataset): - def __init__(self, - path, # Path to directory or zip. - resolution = None, # Ensure specific resolution, None = highest available. - square = False, - **super_kwargs, # Additional arguments for the Dataset base class. - ): - self._path = path - self._zipfile = None - self._square = square - - if os.path.isdir(self._path): - self._type = 'dir' - self._all_fnames = {os.path.relpath(os.path.join(root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files} - elif self._file_ext(self._path) == '.zip': - self._type = 'zip' - self._all_fnames = set(self._get_zipfile().namelist()) - else: - raise IOError('Path must point to a directory or zip') - - PIL.Image.init() - self._image_fnames = sorted(fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION) - if len(self._image_fnames) == 0: - raise IOError('No image files found in the specified path') - - name = os.path.splitext(os.path.basename(self._path))[0] - raw_shape = [len(self._image_fnames)] + list(self._load_raw_image(0).shape) - # if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution): - # raise IOError('Image files do not match the specified resolution') - if resolution is not None: - if self._square: - raw_shape[2] = raw_shape[3] = resolution - else: - raw_shape[2] = resolution - raw_shape[3] = resolution // 2 - # print(raw_shape) - super().__init__(name=name, raw_shape=raw_shape,square=square, **super_kwargs) - - @staticmethod - def _file_ext(fname): - return os.path.splitext(fname)[1].lower() - - def _get_zipfile(self): - assert self._type == 'zip' - if self._zipfile is None: - self._zipfile = zipfile.ZipFile(self._path) - return self._zipfile - - def _open_file(self, fname): - if self._type == 'dir': - return open(os.path.join(self._path, fname), 'rb') - if self._type == 'zip': - return self._get_zipfile().open(fname, 'r') - return None - - def close(self): - try: - if self._zipfile is not None: - self._zipfile.close() - finally: - self._zipfile = None - - def __getstate__(self): - return dict(super().__getstate__(), _zipfile=None) - - def _load_raw_image(self, raw_idx): #load single image - fname = self._image_fnames[raw_idx] - with self._open_file(fname) as f: - if pyspng is not None and self._file_ext(fname) == '.png': - image = pyspng.load(f.read()) - else: - image = np.array(PIL.Image.open(f)) - if image.ndim == 2: - image = image[:, :, np.newaxis] # HW => HWC - image = image.transpose(2, 0, 1) # HWC => CHW - return image - - def 
_load_raw_labels(self): - fname = 'dataset.json' - if fname not in self._all_fnames: - return None - with self._open_file(fname) as f: - labels = json.load(f)['labels'] - if labels is None: - return None - labels = dict(labels) - labels = [labels[fname.replace('\\', '/')] for fname in self._image_fnames] - labels = np.array(labels) - labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim]) - return labels - - -#---------------------------------------------------------------------------- diff --git a/spaces/Eduardovco/Potato/README.md b/spaces/Eduardovco/Potato/README.md deleted file mode 100644 index e9003aa36aa51d444e549bfc5d9be11c84b55244..0000000000000000000000000000000000000000 --- a/spaces/Eduardovco/Potato/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Potato -emoji: 🐢 -colorFrom: green -colorTo: blue -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EmanAbelwhab/foodvision_mini/model.py b/spaces/EmanAbelwhab/foodvision_mini/model.py deleted file mode 100644 index e3c4e4df92e9aa0a7e7104b44d64d239c8ffe864..0000000000000000000000000000000000000000 --- a/spaces/EmanAbelwhab/foodvision_mini/model.py +++ /dev/null @@ -1,35 +0,0 @@ -import torch -import torchvision - -from torch import nn - - -def create_effnetb2_model(num_classes:int=3, - seed:int=42): - """Creates an EfficientNetB2 feature extractor model and transforms. - Args: - num_classes (int, optional): number of classes in the classifier head. - Defaults to 3. - seed (int, optional): random seed value. Defaults to 42. - Returns: - model (torch.nn.Module): EffNetB2 feature extractor model. - transforms (torchvision.transforms): EffNetB2 image transforms. - """ - # Create EffNetB2 pretrained weights, transforms and model - weights = torchvision.models.EfficientNet_B2_Weights.DEFAULT - transforms = weights.transforms() - model = torchvision.models.efficientnet_b2(weights=weights) - - # Freeze all layers in base model - for param in model.parameters(): - param.requires_grad = False - - # Change classifier head with random seed for reproducibility - torch.manual_seed(seed) - model.classifier = nn.Sequential( - nn.Dropout(p=0.3, inplace=True), - nn.Linear(in_features=1408, out_features=num_classes), - ) - - return model, transforms - diff --git a/spaces/EuroPython2022/batangkali/app.py b/spaces/EuroPython2022/batangkali/app.py deleted file mode 100644 index d860a45d46dcefd21abcdb62e4f2b5237a938485..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/batangkali/app.py +++ /dev/null @@ -1,9 +0,0 @@ -import gradio as gr - - -def greet(name): - return "Hello " + name + "!!" - - -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() diff --git a/spaces/FFZG-cleopatra/latvian-twitter-sentiment-classifier/utils.py b/spaces/FFZG-cleopatra/latvian-twitter-sentiment-classifier/utils.py deleted file mode 100644 index 45aff2df435cb2920ca1142e944d2c8162758c79..0000000000000000000000000000000000000000 --- a/spaces/FFZG-cleopatra/latvian-twitter-sentiment-classifier/utils.py +++ /dev/null @@ -1,25 +0,0 @@ -import torch -import config - - -def categorical_accuracy(preds, y): - """ - Returns accuracy per batch, i.e. 
if you get 8/10 right, this returns 0.8, NOT 8 - """ - max_preds = preds.argmax( - dim=1, keepdim=True) # get the index of the max probability - correct = max_preds.squeeze(1).eq(y) - return correct.sum() / torch.FloatTensor([y.shape[0]]) - -def label_encoder(x): - label_vec = {"0": 0, "1": 1, "-1": 2} - return label_vec[x.replace("__label__", "")] - -def label_decoder(x): - label_vec = { 0:"U", 1:"P", 2:"N"} - return label_vec[x] - -def label_full_decoder(x): - label_vec = { 0:"Neutral", 1:"Positive", 2:"Negative"} - return label_vec[x] - diff --git a/spaces/Femurbreaker/Femur/Dockerfile b/spaces/Femurbreaker/Femur/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/Femurbreaker/Femur/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Phind.py b/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Phind.py deleted file mode 100644 index 9fa8ec821f701d7841432e498a11ac9dd017978c..0000000000000000000000000000000000000000 --- a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Phind.py +++ /dev/null @@ -1,36 +0,0 @@ -import os -import json -import time -import subprocess - -from ...typing import sha256, Dict, get_type_hints - -url = 'https://phind.com' -model = ['gpt-4'] -supports_stream = True - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - - path = os.path.dirname(os.path.realpath(__file__)) - config = json.dumps({ - 'model': model, - 'messages': messages}, separators=(',', ':')) - - cmd = ['python', f'{path}/helpers/phind.py', config] - - p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) - - for line in iter(p.stdout.readline, b''): - if b'Just a moment...' in line: - os.system('clear' if os.name == 'posix' else 'cls') - yield 'Clouflare error, please try again...' - os._exit(0) - - else: - if b'ping - 2023-' in line: - continue - - yield line.decode('cp1251') #[:-1] - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/examples/py_example.py b/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/examples/py_example.py deleted file mode 100644 index fa1b526f87b065a6acda35e06d563be134ffb27b..0000000000000000000000000000000000000000 --- a/spaces/Froleptan/stablediffusion-infinity/PyPatchMatch/examples/py_example.py +++ /dev/null @@ -1,21 +0,0 @@ -#! /usr/bin/env python3 -# -*- coding: utf-8 -*- -# File : test.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 01/09/2020 -# -# Distributed under terms of the MIT license. 
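-# Usage sketch of the example below: it loads images/forest_pruned.bmp, fills the missing regions with patch_match.inpaint(..., patch_size=3), and writes images/forest_recovered.bmp.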
- -from PIL import Image - -import sys -sys.path.insert(0, '../') -import patch_match - - -if __name__ == '__main__': - source = Image.open('./images/forest_pruned.bmp') - result = patch_match.inpaint(source, patch_size=3) - Image.fromarray(result).save('./images/forest_recovered.bmp') - diff --git a/spaces/Future-Tense/Slo-Mo-YOLO-Video/README.md b/spaces/Future-Tense/Slo-Mo-YOLO-Video/README.md deleted file mode 100644 index 7587e8b4f800005a1e6bce46c19da1bea48f7403..0000000000000000000000000000000000000000 --- a/spaces/Future-Tense/Slo-Mo-YOLO-Video/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Slo-Mo YOLO8s Video -emoji: 🥴 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/gensim/sim_runner.py b/spaces/Gen-Sim/Gen-Sim/gensim/sim_runner.py deleted file mode 100644 index 661b92f8321ea62b1ce79e030a3b8c289a254722..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/gensim/sim_runner.py +++ /dev/null @@ -1,321 +0,0 @@ -import numpy as np -import os -import IPython -from cliport import tasks -from cliport.dataset import RavensDataset -from cliport.environments.environment import Environment - -import imageio - -from pygments import highlight -from pygments.lexers import PythonLexer -from pygments.formatters import HtmlFormatter, TerminalFormatter -import gradio -import time -import random -import json -import traceback -from gensim.utils import ( - mkdir_if_missing, - save_text, - save_stat, - compute_diversity_score_from_assets, - add_to_txt -) -import pybullet as p - -class SimulationRunner: - """ the main class that runs simulation loop """ - def __init__(self, cfg, agent, critic, memory): - self.cfg = cfg - self.agent = agent - self.critic = critic - self.memory = memory - self.log = "" - - # statistics - self.syntax_pass_rate = 0 - self.runtime_pass_rate = 0 - self.env_pass_rate = 0 - self.curr_trials = 0 - - self.prompt_folder = f"prompts/{cfg['prompt_folder']}" - self.chat_log = memory.chat_log - self.task_asset_logs = [] - - # All the generated tasks in this run. - # Different from the ones in online buffer that can load from offline. 
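- # Per-run buffers filled below: proposed task dicts, generated asset definitions, task source code, task names, and the accepted tasks.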
- self.generated_task_assets = [] - self.generated_task_programs = [] - self.generated_task_names = [] - self.generated_tasks = [] - self.passed_tasks = [] # accepted ones - - def print_current_stats(self): - """ print the current statistics of the simulation design """ - print("=========================================================") - print(f"{self.cfg['prompt_folder']} Trial {self.curr_trials} SYNTAX_PASS_RATE: {(self.syntax_pass_rate / (self.curr_trials)) * 100:.1f}% RUNTIME_PASS_RATE: {(self.runtime_pass_rate / (self.curr_trials)) * 100:.1f}% ENV_PASS_RATE: {(self.env_pass_rate / (self.curr_trials)) * 100:.1f}%") - print("=========================================================") - - def save_stats(self): - """ save the final simulation statistics """ - self.diversity_score = compute_diversity_score_from_assets(self.task_asset_logs, self.curr_trials) - save_stat(self.cfg, self.cfg['model_output_dir'], self.generated_tasks, self.syntax_pass_rate / (self.curr_trials), - self.runtime_pass_rate / (self.curr_trials), self.env_pass_rate / (self.curr_trials), self.diversity_score) - print("Model Folder: ", self.cfg['model_output_dir']) - print(f"Total {len(self.generated_tasks)} New Tasks:", [task['task-name'] for task in self.generated_tasks]) - try: - print(f"Added {len(self.passed_tasks)} Tasks:", self.passed_tasks) - except: - pass - - def example_task_creation(self): - """ create the task through interactions of agent and critic """ - self.task_creation_pass = True - mkdir_if_missing(self.cfg['model_output_dir']) - - try: - start_time = time.time() - - self.generated_task = {'task-name': 'TASK_NAME_TEMPLATE', 'task-description': 'TASK_STRING_TEMPLATE', 'assets-used': ['ASSET_1', 'ASSET_2', Ellipsis]} - print("generated_task\n", self.generated_task) - yield "Task Generated ==>", "", None, None - self.generated_asset = self.agent.propose_assets() - # self.generated_asset = {} - print("generated_asset\n", self.generated_asset) - yield "Task Generated ==> Asset Generated ==> ", "", None, None - yield "Task Generated ==> Asset Generated ==> API Reviewed ==> ", "", None, None - yield "Task Generated ==> Asset Generated ==> API Reviewed ==> Error Reviewed ==> ", "", None, None - - online_code_buffer = {} - for task_file in json.load(open(os.path.join('prompts/data', "generated_task_codes.json"))): - if os.path.exists("cliport/generated_tasks/" + task_file): - online_code_buffer[task_file] = open("cliport/generated_tasks/" + task_file).read() - - random_task_file = random.sample(list(online_code_buffer.keys()), 1)[0] - class_def = [line for line in online_code_buffer[random_task_file].split("\n") if line.startswith('class')] - task_name = class_def[0] - task_name = task_name[task_name.find("class "): task_name.rfind("(Task)")][6:] - self.curr_task_name = self.generated_task_name = task_name - - self.generated_code = online_code_buffer[random_task_file] - print("generated_code\n", self.generated_code) - print("curr_task_name\n", self.curr_task_name) - yield "Task Generated ==> Asset Generated ==> API Reviewed ==> Error Reviewed ==> Code Generated ==> ", "", self.generated_code, None - - self.generated_tasks.append(self.generated_task) - self.generated_task_assets.append(self.generated_asset) - self.generated_task_programs.append(self.generated_code) - self.generated_task_names.append(self.generated_task_name) - except: - to_print = highlight(f"{str(traceback.format_exc())}", PythonLexer(), HtmlFormatter()) - print("Task Creation Exception:", 
highlight(f"{str(traceback.format_exc())}", PythonLexer(), TerminalFormatter())) - self.log = to_print - yield "Task Generated ==> Asset Generated ==> API Reviewed ==> Error Reviewed ==> Code Generation Failed", self.log, "", None - self.task_creation_pass = False - return - - # self.curr_task_name = self.generated_task['task-name'] - print("task creation time {:.3f}".format(time.time() - start_time)) - - def task_creation(self): - """ create the task through interactions of agent and critic """ - self.task_creation_pass = True - mkdir_if_missing(self.cfg['model_output_dir']) - - try: - start_time = time.time() - self.generated_task = self.agent.propose_task(self.generated_task_names) - - # self.generated_task = {'task-name': 'TASK_NAME_TEMPLATE', 'task-description': 'TASK_STRING_TEMPLATE', 'assets-used': ['ASSET_1', 'ASSET_2', Ellipsis]} - print("generated_task\n", self.generated_task) - yield "Task Generated ==>", "", None, None - self.generated_asset = self.agent.propose_assets() - print("generated_asset\n", self.generated_asset) - yield "Task Generated ==> Asset Generated ==> ", "", None, None - self.agent.api_review() - yield "Task Generated ==> Asset Generated ==> API Reviewed ==> ", "", None, None - self.critic.error_review(self.generated_task) - yield "Task Generated ==> Asset Generated ==> API Reviewed ==> Error Reviewed ==> ", "", None, None - self.generated_code, self.curr_task_name = self.agent.implement_task() - self.task_asset_logs.append(self.generated_task["assets-used"]) - self.generated_task_name = self.generated_task["task-name"] - print("generated_code\n", self.generated_code) - print("curr_task_name\n", self.curr_task_name) - yield "Task Generated ==> Asset Generated ==> API Reviewed ==> Error Reviewed ==> Code Generated ==> ", self.log, self.generated_code, None - - self.generated_tasks.append(self.generated_task) - self.generated_task_assets.append(self.generated_asset) - self.generated_task_programs.append(self.generated_code) - self.generated_task_names.append(self.generated_task_name) - except: - to_print = highlight(f"{str(traceback.format_exc())}", PythonLexer(), HtmlFormatter()) - print("Task Creation Exception:", highlight(f"{str(traceback.format_exc())}", PythonLexer(), TerminalFormatter())) - self.log = to_print - yield "Task Generated ==> Asset Generated ==> API Reviewed ==> Error Reviewed ==> Code Generation Failed", self.log, "", None - self.task_creation_pass = False - return - - # self.curr_task_name = self.generated_task['task-name'] - print("task creation time {:.3f}".format(time.time() - start_time)) - - - def setup_env(self): - """ build the new task""" - env = Environment( - self.cfg['assets_root'], - disp=self.cfg['disp'], - shared_memory=self.cfg['shared_memory'], - hz=480, - record_cfg=self.cfg['record'] - ) - - task = eval(self.curr_task_name)() - task.mode = self.cfg['mode'] - record = self.cfg['record']['save_video'] - save_data = self.cfg['save_data'] - - # Initialize scripted oracle agent and dataset. 
- expert = task.oracle(env) - self.cfg['task'] = self.generated_task["task-name"] - data_path = os.path.join(self.cfg['data_dir'], "{}-{}".format(self.generated_task["task-name"], task.mode)) - dataset = RavensDataset(data_path, self.cfg, n_demos=0, augment=False) - print(f"Saving to: {data_path}") - print(f"Mode: {task.mode}") - - # Start video recording - if record: - env.start_rec(f'{dataset.n_episodes+1:06d}') - - return task, dataset, env, expert - - def run_one_episode(self, dataset, expert, env, task, episode, seed): - """ run the new task for one episode """ - add_to_txt( - self.chat_log, f"================= TRIAL: {self.curr_trials}", with_print=True) - record = self.cfg['record']['save_video'] - np.random.seed(seed) - random.seed(seed) - print('Oracle demo: {}/{} | Seed: {}'.format(dataset.n_episodes + 1, self.cfg['n'], seed)) - env.set_task(task) - obs = env.reset() - - info = env.info - reward = 0 - total_reward = 0 - - # Rollout expert policy - for _ in range(task.max_steps): - act = expert.act(obs, info) - episode.append((obs, act, reward, info)) - lang_goal = info['lang_goal'] - obs, reward, done, info = env.step(act) - total_reward += reward - print(f'Total Reward: {total_reward:.3f} | Done: {done} | Goal: {lang_goal}') - if done: - break - - episode.append((obs, None, reward, info)) - return total_reward - - def simulate_task(self): - """ simulate the created task and save demonstrations """ - total_cnt = 0. - reset_success_cnt = 0. - env_success_cnt = 0. - seed = 123 - self.curr_trials += 1 - - if p.isConnected(): - p.disconnect() - - if not self.task_creation_pass: - print("task creation failure => count as syntax exceptions.") - return - - # Check syntax and compilation-time error - try: - exec(self.generated_code, globals()) - task, dataset, env, expert = self.setup_env() - self.syntax_pass_rate += 1 - - except: - to_print = highlight(f"{str(traceback.format_exc())}", PythonLexer(), HtmlFormatter()) - save_text(self.cfg['model_output_dir'], self.generated_task_name + '_error', str(traceback.format_exc())) - print("========================================================") - print("Syntax Exception:", highlight(f"{str(traceback.format_exc())}", PythonLexer(), TerminalFormatter())) - self.log = to_print - - yield "Task Generated ==> Asset Generated ==> API Reviewed ==> Error Reviewed ==> Code Generated ==> Code Syntax Parse Failed", self.log, self.generated_code, None - return - - try: - # Collect environment and collect data from oracle demonstrations. - env.generated_code = self.generated_code - # Set seeds. 
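The oracle rollout that `run_one_episode` performs follows a standard act/step/accumulate loop. The toy environment and expert below are made-up stubs; only the control flow (query the expert, step the environment, sum rewards, stop on `done` or `max_steps`) mirrors the code above.

```python
# Toy version of the oracle rollout in run_one_episode above. The environment
# and expert are stubs; only the control flow mirrors the real code.
class ToyEnv:
    def reset(self):
        self.t = 0
        return {"step": self.t}

    def step(self, action):
        self.t += 1
        reward = 0.5                      # each successful step earns partial reward
        done = self.t >= 2                # the toy episode finishes after two steps
        return {"step": self.t}, reward, done, {"lang_goal": "toy goal"}

class ToyExpert:
    def act(self, obs, info):
        return "pick-and-place"           # a constant 'action' stands in for the oracle

env, expert, max_steps = ToyEnv(), ToyExpert(), 10
obs = env.reset()
info = {"lang_goal": "toy goal"}
reward, total_reward, episode = 0.0, 0.0, []
for _ in range(max_steps):
    act = expert.act(obs, info)
    episode.append((obs, act, reward, info))   # store the transition, as above
    lang_goal = info["lang_goal"]
    obs, reward, done, info = env.step(act)
    total_reward += reward
    print(f'Total Reward: {total_reward:.3f} | Done: {done} | Goal: {lang_goal}')
    if done:
        break
episode.append((obs, None, reward, info))      # final observation closes the episode
```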
- episode = [] - - - """ run the new task for one episode """ - add_to_txt( - self.chat_log, f"================= TRIAL: {self.curr_trials}", with_print=True) - np.random.seed(seed) - random.seed(seed) - print('Oracle demo: {}/{} | Seed: {}'.format(dataset.n_episodes + 1, self.cfg['n'], seed)) - env.set_task(task) - obs = env.reset() - - info = env.info - reward = 0 - total_reward = 0 - # Rollout expert policy - - start_time = time.time() - print("start sim") - for i in range(task.max_steps): - act = expert.act(obs, info) - episode.append((obs, act, reward, info)) - lang_goal = info['lang_goal'] - - env.step(act) - - obs, reward, done, info = env.cur_obs, env.cur_reward, env.cur_done, env.cur_info - total_reward += reward - print(f'Total Reward: {total_reward:.3f} | Done: {done} | Goal: {lang_goal}') - - if done: - break - - end_time = time.time() - print("end sim, time used = ", end_time - start_time) - - if not os.path.exists(env.record_cfg['save_video_path']): - os.mkdir(env.record_cfg['save_video_path']) - self.video_path = os.path.join(env.record_cfg['save_video_path'], "123.mp4") - video_writer = imageio.get_writer(self.video_path, - fps=env.record_cfg['fps'], - format='FFMPEG', - codec='h264', ) - print(f"has {len(env.curr_video)} frames to save") - for color in env.curr_video: - video_writer.append_data(color) - video_writer.close() - print("save video to ", self.video_path) - - yield "Task Generated ==> Asset Generated ==> API Reviewed ==> Error Reviewed ==> Code Generated ==> Simulation Running completed", self.log, self.generated_code, self.video_path - episode.append((obs, None, reward, info)) - - - # reset_success_cnt += 1 - # env_success_cnt += total_reward > 0.99 - # - # self.runtime_pass_rate += 1 - print("Runtime Test Pass!") - except: - to_print = highlight(f"{str(traceback.format_exc())}", PythonLexer(), HtmlFormatter()) - save_text(self.cfg['model_output_dir'], self.generated_task_name + '_error', str(traceback.format_exc())) - print("========================================================") - print("Runtime Exception:", highlight(f"{str(traceback.format_exc())}", PythonLexer(), TerminalFormatter())) - self.log = to_print - yield "Task Generated ==> Asset Generated ==> API Reviewed ==> Error Reviewed ==> Code Generated ==> Simulation Running Failed", self.log, self.generated_code, None - self.memory.save_run(self.generated_task) diff --git a/spaces/GeorgeOrville/bingo/next.config.js b/spaces/GeorgeOrville/bingo/next.config.js deleted file mode 100644 index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000 --- a/spaces/GeorgeOrville/bingo/next.config.js +++ /dev/null @@ -1,38 +0,0 @@ -/** @type {import('next').NextConfig} */ -const nextConfig = { - // output: 'export', - // assetPrefix: '.', - webpack: (config, { isServer }) => { - if (!isServer) { - config.resolve = { - ...config.resolve, - fallback: { - 'bufferutil': false, - 'utf-8-validate': false, - http: false, - https: false, - stream: false, - // fixes proxy-agent dependencies - net: false, - dns: false, - tls: false, - assert: false, - // fixes next-i18next dependencies - path: false, - fs: false, - // fixes mapbox dependencies - events: false, - // fixes sentry dependencies - process: false - } - }; - } - config.module.exprContextCritical = false; - - return config; - }, -} - -module.exports = (...args) => { - return nextConfig -} diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/scratch/mask_rcnn_r50_fpn_gn-all_scratch_6x_coco.py 
b/spaces/Gradio-Blocks/uniformer_image_detection/configs/scratch/mask_rcnn_r50_fpn_gn-all_scratch_6x_coco.py deleted file mode 100644 index 6277a97fe4874abfe9e3e6434d6012c5f41f8418..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/scratch/mask_rcnn_r50_fpn_gn-all_scratch_6x_coco.py +++ /dev/null @@ -1,23 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/coco_instance.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained=None, - backbone=dict( - frozen_stages=-1, zero_init_residual=False, norm_cfg=norm_cfg), - neck=dict(norm_cfg=norm_cfg), - roi_head=dict( - bbox_head=dict( - type='Shared4Conv1FCBBoxHead', - conv_out_channels=256, - norm_cfg=norm_cfg), - mask_head=dict(norm_cfg=norm_cfg))) -# optimizer -optimizer = dict(paramwise_cfg=dict(norm_decay_mult=0)) -optimizer_config = dict(_delete_=True, grad_clip=None) -# learning policy -lr_config = dict(warmup_ratio=0.1, step=[65, 71]) -runner = dict(type='EpochBasedRunner', max_epochs=73) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index 62a0627ae2e9bb17974068e56ee660093e944e0d..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/apcnet_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py' -] diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101b-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101b-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index 5186bf614bc9ebffe47323ea61afbc9604be265b..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101b-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './deeplabv3_r50-d8_512x1024_80k_cityscapes.py' -model = dict( - pretrained='torchvision://resnet101', - backbone=dict(type='ResNet', depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r50-d8_512x512_40k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r50-d8_512x512_40k_voc12aug.py deleted file mode 100644 index 66b443abec3282242c0f794a2f91e066596e7ee9..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r50-d8_512x512_40k_voc12aug.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = [ - '../_base_/models/nonlocal_r50-d8.py', - '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x1024_40k_cityscapes.py deleted file mode 100644 index 
923731f74f80c11e196f6099b1c84875686cd441..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './ocrnet_hr18_512x1024_40k_cityscapes.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w18_small', - backbone=dict( - extra=dict( - stage1=dict(num_blocks=(2, )), - stage2=dict(num_blocks=(2, 2)), - stage3=dict(num_modules=3, num_blocks=(2, 2, 2)), - stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2))))) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/exp/upernet_global_base/run.sh b/spaces/Gradio-Blocks/uniformer_image_segmentation/exp/upernet_global_base/run.sh deleted file mode 100644 index ee49cf4006584c7f24203a15c7a9a11babacd49d..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/exp/upernet_global_base/run.sh +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -work_path=$(dirname $0) -PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \ -python -m torch.distributed.launch --nproc_per_node=8 \ - tools/train.py ${work_path}/config.py \ - --launcher pytorch \ - --options model.backbone.pretrained_path='your_model_path/uniformer_base_in1k.pth' \ - --work-dir ${work_path}/ckpt \ - 2>&1 | tee -a ${work_path}/log.txt diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/loading.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/loading.py deleted file mode 100644 index fdfc496ba96828a435febbef958fdae499d034f7..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/datasets/pipelines/loading.py +++ /dev/null @@ -1,153 +0,0 @@ -import os.path as osp - -import mmcv -import numpy as np - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class LoadImageFromFile(object): - """Load an image from file. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename"). Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: - 'cv2' - """ - - def __init__(self, - to_float32=False, - color_type='color', - file_client_args=dict(backend='disk'), - imdecode_backend='cv2'): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('img_prefix') is not None: - filename = osp.join(results['img_prefix'], - results['img_info']['filename']) - else: - filename = results['img_info']['filename'] - img_bytes = self.file_client.get(filename) - img = mmcv.imfrombytes( - img_bytes, flag=self.color_type, backend=self.imdecode_backend) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(to_float32={self.to_float32},' - repr_str += f"color_type='{self.color_type}'," - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str - - -@PIPELINES.register_module() -class LoadAnnotations(object): - """Load annotations for semantic segmentation. - - Args: - reduce_zero_label (bool): Whether reduce all label value by 1. - Usually used for datasets where 0 is background label. - Default: False. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: - 'pillow' - """ - - def __init__(self, - reduce_zero_label=False, - file_client_args=dict(backend='disk'), - imdecode_backend='pillow'): - self.reduce_zero_label = reduce_zero_label - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded semantic segmentation annotations. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('seg_prefix', None) is not None: - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - else: - filename = results['ann_info']['seg_map'] - img_bytes = self.file_client.get(filename) - gt_semantic_seg = mmcv.imfrombytes( - img_bytes, flag='unchanged', - backend=self.imdecode_backend).squeeze().astype(np.uint8) - # modify if custom classes - if results.get('label_map', None) is not None: - for old_id, new_id in results['label_map'].items(): - gt_semantic_seg[gt_semantic_seg == old_id] = new_id - # reduce zero_label - if self.reduce_zero_label: - # avoid using underflow conversion - gt_semantic_seg[gt_semantic_seg == 0] = 255 - gt_semantic_seg = gt_semantic_seg - 1 - gt_semantic_seg[gt_semantic_seg == 254] = 255 - results['gt_semantic_seg'] = gt_semantic_seg - results['seg_fields'].append('gt_semantic_seg') - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(reduce_zero_label={self.reduce_zero_label},' - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/adaptive_input.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/adaptive_input.py deleted file mode 100644 index 446534a9f8b87337a4dd752944ea386ff7cf7965..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/adaptive_input.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from typing import List - -import torch -from fairseq.modules.quant_noise import quant_noise -from torch import nn - - -class AdaptiveInput(nn.Module): - def __init__( - self, - vocab_size: int, - padding_idx: int, - initial_dim: int, - factor: float, - output_dim: int, - cutoff: List[int], - q_noise: float = 0, - qn_block_size: int = 8, - ): - super().__init__() - - if vocab_size > cutoff[-1]: - cutoff = cutoff + [vocab_size] - else: - assert ( - vocab_size == cutoff[-1] - ), "cannot specify cutoff larger than vocab size" - - self.cutoff = cutoff - self.embedding_dim = output_dim - self.padding_idx = padding_idx - - self.embeddings = nn.ModuleList() - for i in range(len(self.cutoff)): - prev = self.cutoff[i - 1] if i > 0 else 0 - size = self.cutoff[i] - prev - dim = int(initial_dim // (factor ** i)) - seq = nn.Sequential( - nn.Embedding(size, dim, self.padding_idx), - quant_noise( - nn.Linear(dim, output_dim, bias=False), q_noise, qn_block_size - ), - ) - - self.embeddings.append(seq) - self.padding_idx = None - self.padding_idx = padding_idx - - def init_weights(m): - if isinstance(m, nn.Embedding): - nn.init.normal_(m.weight, mean=0, std=m.weight.shape[1] ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - elif hasattr(m, "weight"): - nn.init.xavier_uniform_(m.weight) - - self.apply(init_weights) - - self.register_buffer("_float_tensor", torch.FloatTensor(1)) - - def weights_for_band(self, band: int): - return self.embeddings[band][0].weight, self.embeddings[band][1].weight - - def forward(self, input: torch.Tensor): - result = self._float_tensor.new(input.shape + (self.embedding_dim,)) - for i in range(len(self.cutoff)): - mask = input.lt(self.cutoff[i]) - if i > 0: - mask.mul_(input.ge(self.cutoff[i - 1])) - chunk_input = 
input[mask] - self.cutoff[i - 1] - else: - chunk_input = input[mask] - if mask.any(): - result[mask] = self.embeddings[i](chunk_input) - return result diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/scripts/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/setup.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/setup.py deleted file mode 100644 index 9d2c73345b8406195aaa6327cb3148bb92b65190..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/setup.py +++ /dev/null @@ -1,55 +0,0 @@ -from setuptools import setup, find_packages - -with open("README.md", "r") as f: - long_description = f.read() - -setup( - name="vakyansh-tts", - version="0.0.5", - description="Text to speech for Indic languages", - long_description=long_description, - long_description_content_type="text/markdown", - url="https://github.com/Open-Speech-EkStep/vakyansh-tts.git", - keywords="nlp, tts, Indic languages, deep learning, text to speech", - # package_dir={'': 'src'}, - # packages=find_packages(where='src'), - packages=["tts_infer"], - python_requires=">=3.7, <4", - install_requires=[ - "Cython==0.29.24", - "layers==0.1.5", - "librosa==0.8.1", - "matplotlib==3.3.4", - "numpy==1.20.2", - "scipy==1.5.4", - "tensorboardX==2.4", - "tensorboard==2.7.0", - "tqdm==4.62.3", - "fastapi==0.70.0", - "uvicorn==0.15.0", - "gradio==2.5.2", - "wavio==0.0.4", - "pydload==1.0.9", - "mosestokenizer==1.2.1", - "indic-nlp-library==0.81" - ], - classifiers=[ - # How mature is this project? Common values are - # 3 - Alpha - # 4 - Beta - # 5 - Production/Stable - "Development Status :: 3 - Alpha", - # Indicate who your project is intended for - "Intended Audience :: Developers", - "Intended Audience :: Education", - "Intended Audience :: Science/Research", - "Topic :: Scientific/Engineering :: Artificial Intelligence", - "Topic :: Text Processing :: Linguistic", - # Pick your license as you wish (should match "license" above) - "License :: OSI Approved :: MIT License", - # Specify the Python versions you support here. In particular, ensure - # that you indicate whether you support Python 2, Python 3 or both. 
- "Programming Language :: Python :: 3.7", - ], - include_package_data=True, -) diff --git a/spaces/Harveenchadha/en_to_indic_translation/legacy/translate.sh b/spaces/Harveenchadha/en_to_indic_translation/legacy/translate.sh deleted file mode 100644 index d0526d75dce51208e51de9e8de6d35302466c12c..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/legacy/translate.sh +++ /dev/null @@ -1,70 +0,0 @@ -#!/bin/bash -echo `date` -infname=$1 -outfname=$2 -src_lang=$3 -tgt_lang=$4 -exp_dir=$5 -ref_fname=$6 - -if [ $src_lang == 'en' ]; then - SRC_PREFIX='TGT' - TGT_PREFIX='SRC' -else - SRC_PREFIX='SRC' - TGT_PREFIX='TGT' -fi - -#`dirname $0`/env.sh -SUBWORD_NMT_DIR='subword-nmt' -model_dir=$exp_dir/model -data_bin_dir=$exp_dir/final_bin - -### normalization and script conversion - -echo "Applying normalization and script conversion" -input_size=`python preprocess_translate.py $infname $outfname.norm $src_lang` -echo "Number of sentences in input: $input_size" - -### apply BPE to input file - -echo "Applying BPE" -python $SUBWORD_NMT_DIR/subword_nmt/apply_bpe.py \ - -c $exp_dir/vocab/bpe_codes.32k.${SRC_PREFIX}_${TGT_PREFIX} \ - --vocabulary $exp_dir/vocab/vocab.$SRC_PREFIX \ - --vocabulary-threshold 5 \ - < $outfname.norm \ - > $outfname.bpe - -# not needed for joint training -# echo "Adding language tags" -# python add_tags_translate.py $outfname._bpe $outfname.bpe $src_lang $tgt_lang - -### run decoder - -echo "Decoding" - -src_input_bpe_fname=$outfname.bpe -tgt_output_fname=$outfname -fairseq-interactive $data_bin_dir \ - -s $SRC_PREFIX -t $TGT_PREFIX \ - --distributed-world-size 1 \ - --path $model_dir/checkpoint_best.pt \ - --batch-size 64 --buffer-size 2500 --beam 5 --remove-bpe \ - --skip-invalid-size-inputs-valid-test \ - --input $src_input_bpe_fname > $tgt_output_fname.log 2>&1 - - -echo "Extracting translations, script conversion and detokenization" -python postprocess_translate.py $tgt_output_fname.log $tgt_output_fname $input_size $tgt_lang -if [ $src_lang == 'en' ]; then - # indicnlp tokenize the output files before evaluation - input_size=`python preprocess_translate.py $ref_fname $ref_fname.tok $tgt_lang` - input_size=`python preprocess_translate.py $tgt_output_fname $tgt_output_fname.tok $tgt_lang` - sacrebleu --tokenize none $ref_fname.tok < $tgt_output_fname.tok -else - # indic to en models - sacrebleu $ref_fname < $tgt_output_fname -fi -echo `date` -echo "Translation completed" diff --git a/spaces/HenryNavarre/CarlosDrummondAndradeGenerator/README.md b/spaces/HenryNavarre/CarlosDrummondAndradeGenerator/README.md deleted file mode 100644 index b0690638f269ee1cf962378547997377212a6882..0000000000000000000000000000000000000000 --- a/spaces/HenryNavarre/CarlosDrummondAndradeGenerator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: CarlosDrummondAndradeGenerator -emoji: 📊 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/misc/coord.py b/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/misc/coord.py deleted file mode 100644 index ee69b0c897b6b382ae673622e420f55e494f5b09..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/misc/coord.py +++ /dev/null @@ -1,31 +0,0 @@ -import torch - -class CoordStage(object): - def __init__(self, n_embed, down_factor): - 
self.n_embed = n_embed - self.down_factor = down_factor - - def eval(self): - return self - - def encode(self, c): - """fake vqmodel interface""" - assert 0.0 <= c.min() and c.max() <= 1.0 - b,ch,h,w = c.shape - assert ch == 1 - - c = torch.nn.functional.interpolate(c, scale_factor=1/self.down_factor, - mode="area") - c = c.clamp(0.0, 1.0) - c = self.n_embed*c - c_quant = c.round() - c_ind = c_quant.to(dtype=torch.long) - - info = None, None, c_ind - return c_quant, None, info - - def decode(self, c): - c = c/self.n_embed - c = torch.nn.functional.interpolate(c, scale_factor=self.down_factor, - mode="nearest") - return c diff --git a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/celle/utils.py b/spaces/HuangLab/CELL-E_2-Sequence_Prediction/celle/utils.py deleted file mode 100644 index 160318a9f6a3f525cfa9b2ab277f97828898bfb6..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/celle/utils.py +++ /dev/null @@ -1,227 +0,0 @@ -import torch -from torchvision import transforms -from math import pi -import torchvision.transforms.functional as TF - - -# Define helper functions -def exists(val): - """Check if a variable exists""" - return val is not None - - -def uniq(arr): - return {el: True for el in arr}.keys() - - -def default(val, d): - """If a value exists, return it; otherwise, return a default value""" - return val if exists(val) else d - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def cast_tuple(val, depth=1): - if isinstance(val, list): - val = tuple(val) - return val if isinstance(val, tuple) else (val,) * depth - - -def is_empty(t): - """Check if a tensor is empty""" - # Return True if the number of elements in the tensor is zero, else False - return t.nelement() == 0 - - -def masked_mean(t, mask, dim=1): - """ - Compute the mean of a tensor, masked by a given mask - - Args: - t (torch.Tensor): input tensor of shape (batch_size, seq_len, hidden_dim) - mask (torch.Tensor): mask tensor of shape (batch_size, seq_len) - dim (int): dimension along which to compute the mean (default=1) - - Returns: - torch.Tensor: masked mean tensor of shape (batch_size, hidden_dim) - """ - t = t.masked_fill(~mask[:, :, None], 0.0) - return t.sum(dim=1) / mask.sum(dim=1)[..., None] - - -def set_requires_grad(model, value): - """ - Set whether or not the model's parameters require gradients - - Args: - model (torch.nn.Module): the PyTorch model to modify - value (bool): whether or not to require gradients - """ - for param in model.parameters(): - param.requires_grad = value - - -def eval_decorator(fn): - """ - Decorator function to evaluate a given function - - Args: - fn (callable): function to evaluate - - Returns: - callable: the decorated function - """ - - def inner(model, *args, **kwargs): - was_training = model.training - model.eval() - out = fn(model, *args, **kwargs) - model.train(was_training) - return out - - return inner - - -def log(t, eps=1e-20): - """ - Compute the natural logarithm of a tensor - - Args: - t (torch.Tensor): input tensor - eps (float): small value to add to prevent taking the log of 0 (default=1e-20) - - Returns: - torch.Tensor: the natural logarithm of the input tensor - """ - return torch.log(t + eps) - - -def gumbel_noise(t): - """ - Generate Gumbel noise - - Args: - t (torch.Tensor): input tensor - - Returns: - torch.Tensor: a tensor of Gumbel noise with the same shape as the input tensor - """ - noise = torch.zeros_like(t).uniform_(0, 1) - return -log(-log(noise)) - - -def gumbel_sample(t, temperature=0.9, 
dim=-1): - """ - Sample from a Gumbel-softmax distribution - - Args: - t (torch.Tensor): input tensor of shape (batch_size, num_classes) - temperature (float): temperature for the Gumbel-softmax distribution (default=0.9) - dim (int): dimension along which to sample (default=-1) - - Returns: - torch.Tensor: a tensor of samples from the Gumbel-softmax distribution with the same shape as the input tensor - """ - return (t / max(temperature, 1e-10)) + gumbel_noise(t) - - -def top_k(logits, thres=0.5): - """ - Return a tensor where all but the top k values are set to negative infinity - - Args: - logits (torch.Tensor): input tensor of shape (batch_size, num_classes) - thres (float): threshold for the top k values (default=0.5) - - Returns: - torch.Tensor: a tensor with the same shape as the input tensor, where all but the top k values are set to negative infinity - """ - num_logits = logits.shape[-1] - k = max(int((1 - thres) * num_logits), 1) - val, ind = torch.topk(logits, k) - probs = torch.full_like(logits, float("-inf")) - probs.scatter_(-1, ind, val) - return probs - - -def gamma_func(mode="cosine", scale=0.15): - """Return a function that takes a single input r and returns a value based on the selected mode""" - - # Define a different function based on the selected mode - if mode == "linear": - return lambda r: 1 - r - elif mode == "cosine": - return lambda r: torch.cos(r * pi / 2) - elif mode == "square": - return lambda r: 1 - r**2 - elif mode == "cubic": - return lambda r: 1 - r**3 - elif mode == "scaled-cosine": - return lambda r: scale * (torch.cos(r * pi / 2)) - else: - # Raise an error if the selected mode is not implemented - raise NotImplementedError - - -class always: - """Helper class to always return a given value""" - - def __init__(self, val): - self.val = val - - def __call__(self, x, *args, **kwargs): - return self.val - - -class DivideMax(torch.nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - - def forward(self, x): - maxes = x.amax(dim=self.dim, keepdim=True).detach() - return x / maxes - -def replace_outliers(image, percentile=0.0001): - - lower_bound, upper_bound = torch.quantile(image, percentile), torch.quantile( - image, 1 - percentile - ) - mask = (image <= upper_bound) & (image >= lower_bound) - - valid_pixels = image[mask] - - image[~mask] = torch.clip(image[~mask], min(valid_pixels), max(valid_pixels)) - - return image - - -def process_image(image, dataset=None, image_type=None): - - if dataset == "HPA": - if image_type == 'nucleus': - normalize = (0.0655, 0.0650) - - elif image_type == 'protein': - normalize = (0.1732, 0.1208) - - elif dataset == "OpenCell": - - if image_type == 'nucleus': - normalize = (0.0272, 0.0244) - - elif image_type == 'protein': - normalize = (0.0486, 0.0671) - - t_forms = [] - - t_forms.append(transforms.RandomCrop(256)) - - # t_forms.append(transforms.Normalize(normalize[0],normalize[1])) - - - image = transforms.Compose(t_forms)(image) - - return image diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode_word.sh b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode_word.sh deleted file mode 100644 index c10a6b8809b77bca2b2c02df8b8702725bdd51c7..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode_word.sh +++ /dev/null @@ -1,35 +0,0 @@ -#!/bin/bash - -split="dev_other" -ref_txt="" # ground 
truth transcript path -psd_txt="" # pseudo transcript path -get_best_wer=true -dec_name="decode" -graph_name="graph" -kenlm_path=/checkpoint/abaevski/data/speech/libri/librispeech_lm_novox.phnc_o6.bin -phonemize_lexicon="" - -. ./cmd.sh -. ./path.sh -. parse_options.sh -. /private/home/wnhsu/unsup_asr/fairseq-py-unsup/env.sh - -exp_root=$1 - -set -eu - -if [ ! -z $ref_txt ] && $get_best_wer; then - echo "==== WER w.r.t. real transcript (select based on unsupervised metric)" - for x in $exp_root/*/${dec_name}_${split}*; do - lang=$(dirname $x)/$graph_name - - for tra in $x/scoring/*.tra; do - cat $tra | utils/int2sym.pl -f 2- $lang/words.txt | sed 's:\::g' > $tra.txt - python local/unsup_select.py $psd_txt $tra.txt \ - --kenlm_path $kenlm_path --gt_tra $ref_txt --phonemize \ - --phonemize_lexicon "$phonemize_lexicon" - done | grep "score=" | sed 's/=/ /g' | sed 's/;//g' | sort -k3n | head -n1 - done -fi - - diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/raw_audio_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/raw_audio_dataset.py deleted file mode 100644 index f4e965493cdf94a1f92fa7dab45cc68973c8cdb5..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/audio/raw_audio_dataset.py +++ /dev/null @@ -1,392 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import logging -import os -import sys -import io - -import numpy as np -import torch -import torch.nn.functional as F - -from .. import FairseqDataset -from ..data_utils import compute_mask_indices, get_buckets, get_bucketed_sizes -from fairseq.data.audio.audio_utils import ( - parse_path, - read_from_stored_zip, - is_sf_audio_data, -) -from fairseq.data.text_compressor import TextCompressor, TextCompressionLevel - - -logger = logging.getLogger(__name__) - - -class RawAudioDataset(FairseqDataset): - def __init__( - self, - sample_rate, - max_sample_size=None, - min_sample_size=0, - shuffle=True, - pad=False, - normalize=False, - compute_mask_indices=False, - **mask_compute_kwargs, - ): - super().__init__() - - self.sample_rate = sample_rate - self.sizes = [] - self.max_sample_size = ( - max_sample_size if max_sample_size is not None else sys.maxsize - ) - self.min_sample_size = min_sample_size - self.pad = pad - self.shuffle = shuffle - self.normalize = normalize - self.compute_mask_indices = compute_mask_indices - if self.compute_mask_indices: - self.mask_compute_kwargs = mask_compute_kwargs - self._features_size_map = {} - self._C = mask_compute_kwargs["encoder_embed_dim"] - self._conv_feature_layers = eval(mask_compute_kwargs["conv_feature_layers"]) - - def __getitem__(self, index): - raise NotImplementedError() - - def __len__(self): - return len(self.sizes) - - def postprocess(self, feats, curr_sample_rate): - if feats.dim() == 2: - feats = feats.mean(-1) - - if curr_sample_rate != self.sample_rate: - raise Exception(f"sample rate: {curr_sample_rate}, need {self.sample_rate}") - - assert feats.dim() == 1, feats.dim() - - if self.normalize: - with torch.no_grad(): - feats = F.layer_norm(feats, feats.shape) - return feats - - def crop_to_max_size(self, wav, target_size): - size = len(wav) - diff = size - target_size - if diff <= 0: - return wav - - start = np.random.randint(0, diff + 1) - end = size - diff + start - return wav[start:end] - - def _compute_mask_indices(self, dims, padding_mask): - B, T, C = dims - mask_indices, 
mask_channel_indices = None, None - if self.mask_compute_kwargs["mask_prob"] > 0: - mask_indices = compute_mask_indices( - (B, T), - padding_mask, - self.mask_compute_kwargs["mask_prob"], - self.mask_compute_kwargs["mask_length"], - self.mask_compute_kwargs["mask_selection"], - self.mask_compute_kwargs["mask_other"], - min_masks=2, - no_overlap=self.mask_compute_kwargs["no_mask_overlap"], - min_space=self.mask_compute_kwargs["mask_min_space"], - ) - mask_indices = torch.from_numpy(mask_indices) - if self.mask_compute_kwargs["mask_channel_prob"] > 0: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_compute_kwargs["mask_channel_prob"], - self.mask_compute_kwargs["mask_channel_length"], - self.mask_compute_kwargs["mask_channel_selection"], - self.mask_compute_kwargs["mask_channel_other"], - no_overlap=self.mask_compute_kwargs["no_mask_channel_overlap"], - min_space=self.mask_compute_kwargs["mask_channel_min_space"], - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices).unsqueeze(1).expand(-1, T, -1) - ) - - return mask_indices, mask_channel_indices - - @staticmethod - def _bucket_tensor(tensor, num_pad, value): - return F.pad(tensor, (0, num_pad), value=value) - - def collater(self, samples): - samples = [s for s in samples if s["source"] is not None] - if len(samples) == 0: - return {} - - sources = [s["source"] for s in samples] - sizes = [len(s) for s in sources] - - if self.pad: - target_size = min(max(sizes), self.max_sample_size) - else: - target_size = min(min(sizes), self.max_sample_size) - - collated_sources = sources[0].new_zeros(len(sources), target_size) - padding_mask = ( - torch.BoolTensor(collated_sources.shape).fill_(False) if self.pad else None - ) - for i, (source, size) in enumerate(zip(sources, sizes)): - diff = size - target_size - if diff == 0: - collated_sources[i] = source - elif diff < 0: - assert self.pad - collated_sources[i] = torch.cat( - [source, source.new_full((-diff,), 0.0)] - ) - padding_mask[i, diff:] = True - else: - collated_sources[i] = self.crop_to_max_size(source, target_size) - - input = {"source": collated_sources} - out = {"id": torch.LongTensor([s["id"] for s in samples])} - if self.pad: - input["padding_mask"] = padding_mask - - if hasattr(self, "num_buckets") and self.num_buckets > 0: - assert self.pad, "Cannot bucket without padding first." 
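The `collater` above pads shorter clips with zeros (tracking them in a padding mask) and randomly crops longer ones to a common target length. The snippet below is a self-contained numerical illustration of that pad-or-crop logic on plain tensors; it is not the dataset's actual API.

```python
# Self-contained illustration of the pad-or-crop collation above. With pad=True
# the batch is padded to the longest clip (capped at max_sample_size) and a
# boolean mask marks the padded tail; longer clips get a random crop.
import torch

def collate_waveforms(waves, max_sample_size, pad=True):
    sizes = [len(w) for w in waves]
    target = min(max(sizes), max_sample_size) if pad else min(min(sizes), max_sample_size)
    batch = waves[0].new_zeros(len(waves), target)
    padding_mask = torch.zeros(len(waves), target, dtype=torch.bool) if pad else None
    for i, (wav, size) in enumerate(zip(waves, sizes)):
        diff = size - target
        if diff == 0:
            batch[i] = wav
        elif diff < 0:                                   # shorter clip: right-pad with zeros
            batch[i] = torch.cat([wav, wav.new_full((-diff,), 0.0)])
            padding_mask[i, diff:] = True
        else:                                            # longer clip: random crop
            start = torch.randint(0, diff + 1, (1,)).item()
            batch[i] = wav[start:start + target]
    return batch, padding_mask

waves = [torch.randn(16000), torch.randn(12000), torch.randn(20000)]
batch, mask = collate_waveforms(waves, max_sample_size=18000)
print(batch.shape, mask.sum(dim=-1))  # torch.Size([3, 18000]) tensor([2000, 6000, 0])
```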
- bucket = max(self._bucketed_sizes[s["id"]] for s in samples) - num_pad = bucket - collated_sources.size(-1) - if num_pad: - input["source"] = self._bucket_tensor(collated_sources, num_pad, 0) - input["padding_mask"] = self._bucket_tensor(padding_mask, num_pad, True) - - if self.compute_mask_indices: - B = input["source"].size(0) - T = self._get_mask_indices_dims(input["source"].size(-1)) - padding_mask_reshaped = input["padding_mask"].clone() - extra = padding_mask_reshaped.size(1) % T - if extra > 0: - padding_mask_reshaped = padding_mask_reshaped[:, :-extra] - padding_mask_reshaped = padding_mask_reshaped.view( - padding_mask_reshaped.size(0), T, -1 - ) - padding_mask_reshaped = padding_mask_reshaped.all(-1) - input["padding_count"] = padding_mask_reshaped.sum(-1).max().item() - mask_indices, mask_channel_indices = self._compute_mask_indices( - (B, T, self._C), - padding_mask_reshaped, - ) - input["mask_indices"] = mask_indices - input["mask_channel_indices"] = mask_channel_indices - out["sample_size"] = mask_indices.sum().item() - - out["net_input"] = input - return out - - def _get_mask_indices_dims(self, size, padding=0, dilation=1): - if size not in self._features_size_map: - L_in = size - for (_, kernel_size, stride) in self._conv_feature_layers: - L_out = L_in + 2 * padding - dilation * (kernel_size - 1) - 1 - L_out = 1 + L_out // stride - L_in = L_out - self._features_size_map[size] = L_out - return self._features_size_map[size] - - def num_tokens(self, index): - return self.size(index) - - def size(self, index): - """Return an example's size as a float or tuple. This value is used when - filtering a dataset with ``--max-positions``.""" - if self.pad: - return self.sizes[index] - return min(self.sizes[index], self.max_sample_size) - - def ordered_indices(self): - """Return an ordered list of indices. 
Batches will be constructed based - on this order.""" - - if self.shuffle: - order = [np.random.permutation(len(self))] - order.append( - np.minimum( - np.array(self.sizes), - self.max_sample_size, - ) - ) - return np.lexsort(order)[::-1] - else: - return np.arange(len(self)) - - def set_bucket_info(self, num_buckets): - self.num_buckets = num_buckets - if self.num_buckets > 0: - self._collated_sizes = np.minimum( - np.array(self.sizes), - self.max_sample_size, - ) - self.buckets = get_buckets( - self._collated_sizes, - self.num_buckets, - ) - self._bucketed_sizes = get_bucketed_sizes( - self._collated_sizes, self.buckets - ) - logger.info( - f"{len(self.buckets)} bucket(s) for the audio dataset: " - f"{self.buckets}" - ) - - -class FileAudioDataset(RawAudioDataset): - def __init__( - self, - manifest_path, - sample_rate, - max_sample_size=None, - min_sample_size=0, - shuffle=True, - pad=False, - normalize=False, - num_buckets=0, - compute_mask_indices=False, - text_compression_level=TextCompressionLevel.none, - **mask_compute_kwargs, - ): - super().__init__( - sample_rate=sample_rate, - max_sample_size=max_sample_size, - min_sample_size=min_sample_size, - shuffle=shuffle, - pad=pad, - normalize=normalize, - compute_mask_indices=compute_mask_indices, - **mask_compute_kwargs, - ) - - self.text_compressor = TextCompressor(level=text_compression_level) - - skipped = 0 - self.fnames = [] - sizes = [] - self.skipped_indices = set() - - with open(manifest_path, "r") as f: - self.root_dir = f.readline().strip() - for i, line in enumerate(f): - items = line.strip().split("\t") - assert len(items) == 2, line - sz = int(items[1]) - if min_sample_size is not None and sz < min_sample_size: - skipped += 1 - self.skipped_indices.add(i) - continue - self.fnames.append(self.text_compressor.compress(items[0])) - sizes.append(sz) - logger.info(f"loaded {len(self.fnames)}, skipped {skipped} samples") - - self.sizes = np.array(sizes, dtype=np.int64) - - try: - import pyarrow - - self.fnames = pyarrow.array(self.fnames) - except: - logger.debug( - "Could not create a pyarrow array. 
Please install pyarrow for better performance" - ) - pass - - self.set_bucket_info(num_buckets) - - def __getitem__(self, index): - import soundfile as sf - fn = self.fnames[index] - fn = fn if isinstance(self.fnames, list) else fn.as_py() - fn = self.text_compressor.decompress(fn) - path_or_fp = os.path.join(self.root_dir, fn) - _path, slice_ptr = parse_path(path_or_fp) - if len(slice_ptr) == 2: - byte_data = read_from_stored_zip(_path, slice_ptr[0], slice_ptr[1]) - assert is_sf_audio_data(byte_data) - path_or_fp = io.BytesIO(byte_data) - - wav, curr_sample_rate = sf.read(path_or_fp, dtype="float32") - - feats = torch.from_numpy(wav).float() - feats = self.postprocess(feats, curr_sample_rate) - return {"id": index, "source": feats} - - -class BinarizedAudioDataset(RawAudioDataset): - def __init__( - self, - data_dir, - split, - sample_rate, - max_sample_size=None, - min_sample_size=0, - shuffle=True, - pad=False, - normalize=False, - num_buckets=0, - compute_mask_indices=False, - **mask_compute_kwargs, - ): - super().__init__( - sample_rate=sample_rate, - max_sample_size=max_sample_size, - min_sample_size=min_sample_size, - shuffle=shuffle, - pad=pad, - normalize=normalize, - compute_mask_indices=compute_mask_indices, - **mask_compute_kwargs, - ) - - from fairseq.data import data_utils, Dictionary - - self.fnames_dict = Dictionary.load(os.path.join(data_dir, "dict.txt")) - - root_path = os.path.join(data_dir, f"{split}.root") - if os.path.exists(root_path): - with open(root_path, "r") as f: - self.root_dir = next(f).strip() - else: - self.root_dir = None - - fnames_path = os.path.join(data_dir, split) - self.fnames = data_utils.load_indexed_dataset(fnames_path, self.fnames_dict) - lengths_path = os.path.join(data_dir, f"{split}.lengths") - - with open(lengths_path, "r") as f: - for line in f: - sz = int(line.rstrip()) - assert ( - sz >= min_sample_size - ), f"Min sample size is not supported for binarized dataset, but found a sample with size {sz}" - self.sizes.append(sz) - - self.sizes = np.array(self.sizes, dtype=np.int64) - - self.set_bucket_info(num_buckets) - logger.info(f"loaded {len(self.fnames)} samples") - - def __getitem__(self, index): - import soundfile as sf - - fname = self.fnames_dict.string(self.fnames[index], separator="") - if self.root_dir: - fname = os.path.join(self.root_dir, fname) - - wav, curr_sample_rate = sf.read(fname) - feats = torch.from_numpy(wav).float() - feats = self.postprocess(feats, curr_sample_rate) - return {"id": index, "source": feats} diff --git a/spaces/Illumotion/Koboldcpp/examples/chat-13B.bat b/spaces/Illumotion/Koboldcpp/examples/chat-13B.bat deleted file mode 100644 index c5c8ac6efa81a552725538648592e3fc1563e1fa..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/chat-13B.bat +++ /dev/null @@ -1,57 +0,0 @@ -@setlocal disabledelayedexpansion enableextensions -@echo off - -cd /d "%~dp0.." -if not "%errorlevel%"=="0" ( - echo Unable to change directory. - pause - exit /b 1 -) - -if not defined MODEL set "MODEL=models\13B\ggml-model-q4_0.bin" -if not defined USER_NAME set "USER_NAME=User" -if not defined AI_NAME set "AI_NAME=ChatLLaMa" -rem Adjust to the number of CPU cores you want to use. 
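Stepping back to the `FileAudioDataset` shown a little earlier: it expects a tab-separated manifest whose first line is the audio root directory and whose remaining lines are `relative_path<TAB>num_samples`, with clips shorter than `min_sample_size` skipped at load time. A minimal sketch of writing such a manifest follows; the root path, file extension, and output name are assumptions for illustration.

```python
# Illustrative manifest writer for FileAudioDataset; the root path, the .flac
# extension, and the output file name are assumptions, not project defaults.
import os
import soundfile as sf

root = "/data/audio/train-clean-100"   # assumed audio root directory
with open("train.tsv", "w") as f:
    f.write(root + "\n")               # first line: root directory
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            if not name.endswith(".flac"):
                continue
            path = os.path.join(dirpath, name)
            frames = sf.info(path).frames              # clip length in samples
            f.write(f"{os.path.relpath(path, root)}\t{frames}\n")
```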
-rem if not defined N_THREAD set "N_THREAD=8" -rem Number of tokens to predict (made it larger than default because we want a long interaction) -if not defined N_PREDICTS set "N_PREDICTS=2048" -if not defined GEN_OPTIONS set "GEN_OPTIONS=--ctx_size 2048 --temp 0.7 --top_k 40 --top_p 0.5 --repeat_last_n 256 --batch_size 1024 --repeat_penalty 1.17647" - -rem Default main script paths -set "DEFAULT_MAIN_SCRIPT_PATHS=main.exe build\bin\main.exe" - -rem Get main script path from command line arguments -set "MAIN_SCRIPT_PATH=%~1" - -rem If the main script path was not specified, try the default paths -if not defined MAIN_SCRIPT_PATH ( - for %%i in (%DEFAULT_MAIN_SCRIPT_PATHS%) do ( - if exist "%%i" set "MAIN_SCRIPT_PATH=%%i" - ) -) - -rem If the main script path was not found, tell the user how to specify it -if not defined MAIN_SCRIPT_PATH ( - echo The main script could not be found. Please provide the path to the main script as 1st argument to this script, or place the main script in one of the default locations: - echo %DEFAULT_MAIN_SCRIPT_PATHS% - pause - exit /b 1 -) - -rem Default context, feel free to edit it -set "PROMPT_TEXT=Text transcript of a never ending dialog, where %USER_NAME% interacts with an AI assistant named %AI_NAME%. %AI_NAME% is helpful, kind, honest, friendly, good at writing and never fails to answer %USER_NAME%'s requests immediately and with details and precision. There are no annotations like (30 seconds passed...) or (to himself), just what %USER_NAME% and %AI_NAME% say aloud to each other. The dialog lasts for years, the entirety of it is shared below. It's 10000 pages long. The transcript only includes text, it does not include markup like HTML and Markdown." - -rem Set a temporary variable if N_THREAD is set -if defined N_THREAD ( - set "_N_THREAD=--threads %N_THREAD%" -) else ( - set "_N_THREAD=" -) - -rem Run the script -echo "%MAIN_SCRIPT_PATH%" %GEN_OPTIONS% %_N_THREAD% ^ - --model "%MODEL%" ^ - --n_predict %N_PREDICTS% ^ - --color --interactive ^ - --reverse-prompt "%USER_NAME%:" ^ - --prompt "%PROMPT_TEXT%" diff --git a/spaces/Ilzhabimantara/rvc-Blue-archives/README.md b/spaces/Ilzhabimantara/rvc-Blue-archives/README.md deleted file mode 100644 index 5243511755780e0a6ab8ce5e55538be791529369..0000000000000000000000000000000000000000 --- a/spaces/Ilzhabimantara/rvc-Blue-archives/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: rvc-Blue-archives -emoji: ':🎤' -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -license: mit -duplicated_from: Faridmaruf/rvc-Blue-archives ---- diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/__init__.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Jeff2323/ai-comic-factory/src/components/ui/switch.tsx b/spaces/Jeff2323/ai-comic-factory/src/components/ui/switch.tsx deleted file mode 100644 index 9d1e79dffe05b79b4208570f487e506513430355..0000000000000000000000000000000000000000 --- a/spaces/Jeff2323/ai-comic-factory/src/components/ui/switch.tsx +++ /dev/null @@ -1,29 +0,0 @@ -"use client" - -import * as React from "react" -import * as SwitchPrimitives from "@radix-ui/react-switch" - -import { cn } from "@/lib/utils" - -const Switch = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - -)) -Switch.displayName = 
SwitchPrimitives.Root.displayName - -export { Switch } diff --git a/spaces/JingyeChen22/TextDiffuser/app.py b/spaces/JingyeChen22/TextDiffuser/app.py deleted file mode 100644 index b70c47be365a1d5535e01b808bf38d21b4d5e7a6..0000000000000000000000000000000000000000 --- a/spaces/JingyeChen22/TextDiffuser/app.py +++ /dev/null @@ -1,1070 +0,0 @@ -# ------------------------------------------ -# TextDiffuser: Diffusion Models as Text Painters -# Paper Link: https://arxiv.org/abs/2305.10855 -# Code Link: https://github.com/microsoft/unilm/tree/master/textdiffuser -# Copyright (c) Microsoft Corporation. -# This file provides the inference script. -# ------------------------------------------ - -import os -import re -import zipfile - -if not os.path.exists('textdiffuser-ckpt'): - os.system('wget https://huggingface.co/datasets/JingyeChen22/TextDiffuser/resolve/main/textdiffuser-ckpt.zip') - with zipfile.ZipFile('textdiffuser-ckpt.zip', 'r') as zip_ref: - zip_ref.extractall('.') - -if not os.path.exists('images'): - os.system('wget https://huggingface.co/datasets/JingyeChen22/TextDiffuser/resolve/main/images.zip') - with zipfile.ZipFile('images.zip', 'r') as zip_ref: - zip_ref.extractall('.') - -os.system('wget https://huggingface.co/datasets/JingyeChen22/TextDiffuser/resolve/main/404.jpg') - - -if not os.path.exists('Arial.ttf'): - os.system('wget https://huggingface.co/datasets/JingyeChen22/TextDiffuser/resolve/main/Arial.ttf') - -import cv2 -import random -import logging -import argparse -import numpy as np - -from pathlib import Path -from tqdm.auto import tqdm -from typing import Optional -from packaging import version -from termcolor import colored -from PIL import Image, ImageDraw, ImageFont, ImageOps, ImageEnhance # import for visualization -from huggingface_hub import HfFolder, Repository, create_repo, whoami - -import datasets -from datasets import load_dataset -from datasets import disable_caching - -import torch -import torch.utils.checkpoint -import torch.nn.functional as F - -import accelerate -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed - -import diffusers -from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel -from diffusers.optimization import get_scheduler -from diffusers.training_utils import EMAModel -from diffusers.utils import check_min_version, deprecate -from diffusers.utils.import_utils import is_xformers_available - -import transformers -from transformers import CLIPTextModel, CLIPTokenizer - -from util import segmentation_mask_visualization, make_caption_pil, combine_image, transform_mask_pil, filter_segmentation_mask, inpainting_merge_image -from model.layout_generator import get_layout_from_prompt -from model.text_segmenter.unet import UNet - - -disable_caching() -check_min_version("0.15.0.dev0") -logger = get_logger(__name__, log_level="INFO") - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default='runwayml/stable-diffusion-v1-5', # no need to modify this - help="Path to pretrained model or model identifier from huggingface.co/models. 
Please do not modify this.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--mode", - type=str, - default="text-to-image", - # required=True, - choices=["text-to-image", "text-to-image-with-template", "text-inpainting"], - help="Three modes can be used.", - ) - parser.add_argument( - "--prompt", - type=str, - default="", - # required=True, - help="The text prompts provided by users.", - ) - parser.add_argument( - "--template_image", - type=str, - default="", - help="The template image should be given when using 【text-to-image-with-template】 mode.", - ) - parser.add_argument( - "--original_image", - type=str, - default="", - help="The original image should be given when using 【text-inpainting】 mode.", - ) - parser.add_argument( - "--text_mask", - type=str, - default="", - help="The text mask should be given when using 【text-inpainting】 mode.", - ) - parser.add_argument( - "--output_dir", - type=str, - default="output", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument( - "--cache_dir", - type=str, - default=None, - help="The directory where the downloaded models and datasets will be stored.", - ) - parser.add_argument( - "--seed", - type=int, - default=None, - help="A seed for reproducible training." - ) - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--classifier_free_scale", - type=float, - default=7.5, # following stable diffusion (https://github.com/CompVis/stable-diffusion) - help="Classifier free scale following https://arxiv.org/abs/2207.12598.", - ) - parser.add_argument( - "--drop_caption", - action="store_true", - help="Whether to drop captions during training following https://arxiv.org/abs/2207.12598.." - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help="Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." - ) - parser.add_argument( - "--push_to_hub", - action="store_true", - help="Whether or not to push the model to the Hub." - ) - parser.add_argument( - "--hub_token", - type=str, - default=None, - help="The token to use to push to the Model Hub." - ) - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default='fp16', - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. 
Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument( - "--local_rank", - type=int, - default=-1, - help="For distributed training: local_rank" - ) - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=5, - help=( - "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`." - " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state" - " for more docs" - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default='textdiffuser-ckpt/diffusion_backbone', # should be specified during inference - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--enable_xformers_memory_efficient_attention", - action="store_true", - help="Whether or not to use xformers." - ) - parser.add_argument( - "--font_path", - type=str, - default='Arial.ttf', - help="The path of font for visualization." - ) - parser.add_argument( - "--sample_steps", - type=int, - default=50, # following stable diffusion (https://github.com/CompVis/stable-diffusion) - help="Diffusion steps for sampling." - ) - parser.add_argument( - "--vis_num", - type=int, - default=4, # please decreases the number if out-of-memory error occurs - help="Number of images to be sample. Please decrease it when encountering out of memory error." - ) - parser.add_argument( - "--binarization", - action="store_true", - help="Whether to binarize the template image." - ) - parser.add_argument( - "--use_pillow_segmentation_mask", - type=bool, - default=True, - help="In the 【text-to-image】 mode, please specify whether to use the segmentation masks provided by PILLOW" - ) - parser.add_argument( - "--character_segmenter_path", - type=str, - default='textdiffuser-ckpt/text_segmenter.pth', - help="checkpoint of character-level segmenter" - ) - args = parser.parse_args() - - print(f'{colored("[√]", "green")} Arguments are loaded.') - print(args) - - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - return args - - - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - - - -args = parse_args() -logging_dir = os.path.join(args.output_dir, args.logging_dir) - -print(f'{colored("[√]", "green")} Logging dir is set to {logging_dir}.') - -accelerator_project_config = ProjectConfiguration(total_limit=args.checkpoints_total_limit) - -accelerator = Accelerator( - gradient_accumulation_steps=1, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - logging_dir=logging_dir, - project_config=accelerator_project_config, -) - -# Make one log on every process with the configuration for debugging. 
-logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, -) -logger.info(accelerator.state, main_process_only=False) -if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() -else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - -# Handle the repository creation -if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - create_repo(repo_name, exist_ok=True, token=args.hub_token) - repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - print(args.output_dir) - -# Load scheduler, tokenizer and models. -tokenizer15 = CLIPTokenizer.from_pretrained( - 'runwayml/stable-diffusion-v1-5', subfolder="tokenizer", revision=args.revision -) -tokenizer21 = CLIPTokenizer.from_pretrained( - 'stabilityai/stable-diffusion-2-1', subfolder="tokenizer", revision=args.revision -) - -text_encoder15 = CLIPTextModel.from_pretrained( - 'runwayml/stable-diffusion-v1-5', subfolder="text_encoder", revision=args.revision -) -text_encoder21 = CLIPTextModel.from_pretrained( - 'stabilityai/stable-diffusion-2-1', subfolder="text_encoder", revision=args.revision -) - -vae15 = AutoencoderKL.from_pretrained('runwayml/stable-diffusion-v1-5', subfolder="vae", revision=args.revision).cuda() -unet15 = UNet2DConditionModel.from_pretrained( - './textdiffuser-ckpt/diffusion_backbone_1.5', subfolder="unet", revision=None -).cuda() - -vae21 = AutoencoderKL.from_pretrained('stabilityai/stable-diffusion-2-1', subfolder="vae", revision=args.revision).cuda() -unet21 = UNet2DConditionModel.from_pretrained( - './textdiffuser-ckpt/diffusion_backbone_2.1', subfolder="unet", revision=None -).cuda() - -scheduler15 = DDPMScheduler.from_pretrained('runwayml/stable-diffusion-v1-5', subfolder="scheduler") -scheduler21 = DDPMScheduler.from_pretrained('stabilityai/stable-diffusion-2-1', subfolder="scheduler") - - - -# Freeze vae and text_encoder -vae15.requires_grad_(False) -vae21.requires_grad_(False) -text_encoder15.requires_grad_(False) -text_encoder21.requires_grad_(False) - -if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. 
Make sure it is installed correctly") - -# `accelerate` 0.16.0 will have better support for customized saving -if version.parse(accelerate.__version__) >= version.parse("0.16.0"): - # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format - def save_model_hook(models, weights, output_dir): - - for i, model in enumerate(models): - model.save_pretrained(os.path.join(output_dir, "unet")) - - # make sure to pop weight so that corresponding model is not saved again - weights.pop() - - def load_model_hook(models, input_dir): - - for i in range(len(models)): - # pop models so that they are not loaded again - model = models.pop() - - # load diffusers style into model - load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") - model.register_to_config(**load_model.config) - - model.load_state_dict(load_model.state_dict()) - del load_model - - accelerator.register_save_state_pre_hook(save_model_hook) - accelerator.register_load_state_pre_hook(load_model_hook) - - -# setup schedulers -# sample_num = args.vis_num - -def to_tensor(image): - if isinstance(image, Image.Image): - image = np.array(image) - elif not isinstance(image, np.ndarray): - raise TypeError("Error") - - image = image.astype(np.float32) / 255.0 - image = np.transpose(image, (2, 0, 1)) - tensor = torch.from_numpy(image) - - return tensor - - -import unicodedata - - -def full2half(text): - half = [] - for char in text: - code = ord(char) - if code == 0x3000: - half.append(chr(0x0020)) - elif 0xFF01 <= code <= 0xFF5E: - half.append(chr(code - 0xFEE0)) - else: - half.append(char) - return ''.join(half) - -def has_chinese_char(string): - pattern = re.compile('[\u4e00-\u9fa5]') - if pattern.search(string): - return True - else: - return False - -image_404 = Image.open('404.jpg') - -def text_to_image(prompt,slider_step,slider_guidance,slider_batch, version): - print(f'【version】{version}') - if version == 'Stable Diffusion v2.1': - vae = vae21 - unet = unet21 - text_encoder = text_encoder21 - tokenizer = tokenizer21 - scheduler = scheduler21 - slider_batch = min(slider_batch, 1) - size = 768 - elif version == 'Stable Diffusion v1.5': - vae = vae15 - unet = unet15 - text_encoder = text_encoder15 - tokenizer = tokenizer15 - scheduler = scheduler15 - size = 512 - else: - assert False, 'Version Not Found' - - if has_chinese_char(prompt): - print('trigger') - return image_404, None - - prompt = full2half(prompt) - prompt = prompt.replace('"', "'") - prompt = prompt.replace('‘', "'") - prompt = prompt.replace('’', "'") - prompt = prompt.replace('“', "'") - prompt = prompt.replace('”', "'") - prompt = re.sub(r"[^a-zA-Z0-9'\" ]+", "", prompt) - - if slider_step>=50: - slider_step = 50 - - args.prompt = prompt - sample_num = slider_batch - seed = random.randint(0, 10000000) - set_seed(seed) - scheduler.set_timesteps(slider_step) - - noise = torch.randn((sample_num, 4, size//8, size//8)).to("cuda") # (b, 4, 64, 64) - input = noise # (b, 4, 64, 64) - - captions = [args.prompt] * sample_num - captions_nocond = [""] * sample_num - print(f'{colored("[√]", "green")} Prompt is loaded: {args.prompt}.') - - # encode text prompts - inputs = tokenizer( - captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" - ).input_ids # (b, 77) - encoder_hidden_states = text_encoder(inputs)[0].cuda() # (b, 77, 768) - print(f'{colored("[√]", "green")} encoder_hidden_states: {encoder_hidden_states.shape}.') - - inputs_nocond = tokenizer( - 
captions_nocond, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" - ).input_ids # (b, 77) - encoder_hidden_states_nocond = text_encoder(inputs_nocond)[0].cuda() # (b, 77, 768) - print(f'{colored("[√]", "green")} encoder_hidden_states_nocond: {encoder_hidden_states_nocond.shape}.') - - #### text-to-image #### - render_image, segmentation_mask_from_pillow = get_layout_from_prompt(args) - - segmentation_mask = torch.Tensor(np.array(segmentation_mask_from_pillow)).cuda() # (512, 512) - - segmentation_mask = filter_segmentation_mask(segmentation_mask) - segmentation_mask = torch.nn.functional.interpolate(segmentation_mask.unsqueeze(0).unsqueeze(0).float(), size=(size//2, size//2), mode='nearest') - segmentation_mask = segmentation_mask.squeeze(1).repeat(sample_num, 1, 1).long().to('cuda') # (1, 1, 256, 256) - print(f'{colored("[√]", "green")} character-level segmentation_mask: {segmentation_mask.shape}.') - - feature_mask = torch.ones(sample_num, 1, size//8, size//8).to('cuda') # (b, 1, 64, 64) - masked_image = torch.zeros(sample_num, 3, size, size).to('cuda') # (b, 3, 512, 512) - masked_feature = vae.encode(masked_image).latent_dist.sample() # (b, 4, 64, 64) - masked_feature = masked_feature * vae.config.scaling_factor - print(f'{colored("[√]", "green")} feature_mask: {feature_mask.shape}.') - print(f'{colored("[√]", "green")} masked_feature: {masked_feature.shape}.') - - # diffusion process - intermediate_images = [] - for t in tqdm(scheduler.timesteps): - with torch.no_grad(): - noise_pred_cond = unet(sample=input, timestep=t, encoder_hidden_states=encoder_hidden_states, segmentation_mask=segmentation_mask, feature_mask=feature_mask, masked_feature=masked_feature).sample # b, 4, 64, 64 - noise_pred_uncond = unet(sample=input, timestep=t, encoder_hidden_states=encoder_hidden_states_nocond, segmentation_mask=segmentation_mask, feature_mask=feature_mask, masked_feature=masked_feature).sample # b, 4, 64, 64 - noisy_residual = noise_pred_uncond + slider_guidance * (noise_pred_cond - noise_pred_uncond) # b, 4, 64, 64 - prev_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample - input = prev_noisy_sample - intermediate_images.append(prev_noisy_sample) - - # decode and visualization - input = 1 / vae.config.scaling_factor * input - sample_images = vae.decode(input.float(), return_dict=False)[0] # (b, 3, 512, 512) - - image_pil = render_image.resize((size,size)) - segmentation_mask = segmentation_mask[0].squeeze().cpu().numpy() - character_mask_pil = Image.fromarray(((segmentation_mask!=0)*255).astype('uint8')).resize((size,size)) - character_mask_highlight_pil = segmentation_mask_visualization(args.font_path,segmentation_mask) - character_mask_highlight_pil = character_mask_highlight_pil.resize((size, size)) - caption_pil = make_caption_pil(args.font_path, captions) - - # save pred_img - pred_image_list = [] - for image in sample_images.float(): - image = (image / 2 + 0.5).clamp(0, 1).unsqueeze(0) - image = image.cpu().permute(0, 2, 3, 1).numpy()[0] - image = Image.fromarray((image * 255).round().astype("uint8")).convert('RGB') - pred_image_list.append(image) - - blank_pil = combine_image(args, size, None, pred_image_list, image_pil, character_mask_pil, character_mask_highlight_pil, caption_pil) - - intermediate_result = Image.new('RGB', (size*3, size)) - intermediate_result.paste(image_pil, (0, 0)) - intermediate_result.paste(character_mask_pil, (size, 0)) - intermediate_result.paste(character_mask_highlight_pil, (size*2, 0)) - - 
return blank_pil, intermediate_result - - -# load character-level segmenter -segmenter = UNet(3, 96, True).cuda() -segmenter = torch.nn.DataParallel(segmenter) -segmenter.load_state_dict(torch.load(args.character_segmenter_path)) -segmenter.eval() -print(f'{colored("[√]", "green")} Text segmenter is successfully loaded.') - - - - -def text_to_image_with_template(prompt,template_image,slider_step,slider_guidance,slider_batch, binary, version): - - if version == 'Stable Diffusion v2.1': - vae = vae21 - unet = unet21 - text_encoder = text_encoder21 - tokenizer = tokenizer21 - scheduler = scheduler21 - slider_batch = min(slider_batch, 1) - size = 768 - elif version == 'Stable Diffusion v1.5': - vae = vae15 - unet = unet15 - text_encoder = text_encoder15 - tokenizer = tokenizer15 - scheduler = scheduler15 - size = 512 - else: - assert False, 'Version Not Found' - - if has_chinese_char(prompt): - print('trigger') - return image_404, None - - if slider_step>=50: - slider_step = 50 - - orig_template_image = template_image.resize((size,size)).convert('RGB') - args.prompt = prompt - sample_num = slider_batch - # If passed along, set the training seed now. - # seed = slider_seed - seed = random.randint(0, 10000000) - set_seed(seed) - scheduler.set_timesteps(slider_step) - - noise = torch.randn((sample_num, 4, size//8, size//8)).to("cuda") # (b, 4, 64, 64) - input = noise # (b, 4, 64, 64) - - captions = [args.prompt] * sample_num - captions_nocond = [""] * sample_num - print(f'{colored("[√]", "green")} Prompt is loaded: {args.prompt}.') - - # encode text prompts - inputs = tokenizer( - captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" - ).input_ids # (b, 77) - encoder_hidden_states = text_encoder(inputs)[0].cuda() # (b, 77, 768) - print(f'{colored("[√]", "green")} encoder_hidden_states: {encoder_hidden_states.shape}.') - - inputs_nocond = tokenizer( - captions_nocond, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" - ).input_ids # (b, 77) - encoder_hidden_states_nocond = text_encoder(inputs_nocond)[0].cuda() # (b, 77, 768) - print(f'{colored("[√]", "green")} encoder_hidden_states_nocond: {encoder_hidden_states_nocond.shape}.') - - #### text-to-image-with-template #### - template_image = template_image.resize((256,256)).convert('RGB') - - # whether binarization is needed - print(f'{colored("[Warning]", "red")} args.binarization is set to {binary}. 
You may need it when using handwritten images as templates.') - - if binary: - gray = ImageOps.grayscale(template_image) - binary = gray.point(lambda x: 255 if x > 96 else 0, '1') - template_image = binary.convert('RGB') - - # to_tensor = transforms.ToTensor() - image_tensor = to_tensor(template_image).unsqueeze(0).cuda().sub_(0.5).div_(0.5) # (b, 3, 256, 256) - - with torch.no_grad(): - segmentation_mask = segmenter(image_tensor) # (b, 96, 256, 256) - segmentation_mask = segmentation_mask.max(1)[1].squeeze(0) # (256, 256) - segmentation_mask = filter_segmentation_mask(segmentation_mask) # (256, 256) - - segmentation_mask = torch.nn.functional.interpolate(segmentation_mask.unsqueeze(0).unsqueeze(0).float(), size=(size//2, size//2), mode='nearest') # (b, 1, 256, 256) - segmentation_mask = segmentation_mask.squeeze(1).repeat(sample_num, 1, 1).long().to('cuda') # (b, 1, 256, 256) - print(f'{colored("[√]", "green")} Character-level segmentation_mask: {segmentation_mask.shape}.') - - feature_mask = torch.ones(sample_num, 1, size//8, size//8).to('cuda') # (b, 1, 64, 64) - masked_image = torch.zeros(sample_num, 3, size, size).to('cuda') # (b, 3, 512, 512) - masked_feature = vae.encode(masked_image).latent_dist.sample() # (b, 4, 64, 64) - masked_feature = masked_feature * vae.config.scaling_factor # (b, 4, 64, 64) - - # diffusion process - intermediate_images = [] - for t in tqdm(scheduler.timesteps): - with torch.no_grad(): - noise_pred_cond = unet(sample=input, timestep=t, encoder_hidden_states=encoder_hidden_states, segmentation_mask=segmentation_mask, feature_mask=feature_mask, masked_feature=masked_feature).sample # b, 4, 64, 64 - noise_pred_uncond = unet(sample=input, timestep=t, encoder_hidden_states=encoder_hidden_states_nocond, segmentation_mask=segmentation_mask, feature_mask=feature_mask, masked_feature=masked_feature).sample # b, 4, 64, 64 - noisy_residual = noise_pred_uncond + slider_guidance * (noise_pred_cond - noise_pred_uncond) # b, 4, 64, 64 - prev_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample - input = prev_noisy_sample - intermediate_images.append(prev_noisy_sample) - - # decode and visualization - input = 1 / vae.config.scaling_factor * input - sample_images = vae.decode(input.float(), return_dict=False)[0] # (b, 3, 512, 512) - - image_pil = None - segmentation_mask = segmentation_mask[0].squeeze().cpu().numpy() - character_mask_pil = Image.fromarray(((segmentation_mask!=0)*255).astype('uint8')).resize((size,size)) - character_mask_highlight_pil = segmentation_mask_visualization(args.font_path,segmentation_mask) - character_mask_highlight_pil = character_mask_highlight_pil.resize((size, size)) - caption_pil = make_caption_pil(args.font_path, captions) - - # save pred_img - pred_image_list = [] - for image in sample_images.float(): - image = (image / 2 + 0.5).clamp(0, 1).unsqueeze(0) - image = image.cpu().permute(0, 2, 3, 1).numpy()[0] - image = Image.fromarray((image * 255).round().astype("uint8")).convert('RGB') - pred_image_list.append(image) - - blank_pil = combine_image(args, size, None, pred_image_list, image_pil, character_mask_pil, character_mask_highlight_pil, caption_pil) - - intermediate_result = Image.new('RGB', (size*3, size)) - intermediate_result.paste(orig_template_image, (0, 0)) - intermediate_result.paste(character_mask_pil, (size, 0)) - intermediate_result.paste(character_mask_highlight_pil, (size*2, 0)) - - return blank_pil, intermediate_result - - -def 
text_inpainting(prompt,orig_image,mask_image,slider_step,slider_guidance,slider_batch, version): - - if version == 'Stable Diffusion v2.1': - vae = vae21 - unet = unet21 - text_encoder = text_encoder21 - tokenizer = tokenizer21 - scheduler = scheduler21 - slider_batch = min(slider_batch, 1) - size = 768 - elif version == 'Stable Diffusion v1.5': - vae = vae15 - unet = unet15 - text_encoder = text_encoder15 - tokenizer = tokenizer15 - scheduler = scheduler15 - size = 512 - else: - assert False, 'Version Not Found' - - if has_chinese_char(prompt): - print('trigger') - return image_404, None - - if slider_step>=50: - slider_step = 50 - - args.prompt = prompt - sample_num = slider_batch - # If passed along, set the training seed now. - # seed = slider_seed - seed = random.randint(0, 10000000) - set_seed(seed) - scheduler.set_timesteps(slider_step) - - noise = torch.randn((sample_num, 4, size//8, size//8)).to("cuda") # (b, 4, 64, 64) - input = noise # (b, 4, 64, 64) - - captions = [args.prompt] * sample_num - captions_nocond = [""] * sample_num - print(f'{colored("[√]", "green")} Prompt is loaded: {args.prompt}.') - - # encode text prompts - inputs = tokenizer( - captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" - ).input_ids # (b, 77) - encoder_hidden_states = text_encoder(inputs)[0].cuda() # (b, 77, 768) - print(f'{colored("[√]", "green")} encoder_hidden_states: {encoder_hidden_states.shape}.') - - inputs_nocond = tokenizer( - captions_nocond, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" - ).input_ids # (b, 77) - encoder_hidden_states_nocond = text_encoder(inputs_nocond)[0].cuda() # (b, 77, 768) - print(f'{colored("[√]", "green")} encoder_hidden_states_nocond: {encoder_hidden_states_nocond.shape}.') - - mask_image = cv2.resize(mask_image, (size,size)) - # mask_image = mask_image.resize((512,512)).convert('RGB') - text_mask = np.array(mask_image) - threshold = 128 - _, text_mask = cv2.threshold(text_mask, threshold, 255, cv2.THRESH_BINARY) - text_mask = Image.fromarray(text_mask).convert('RGB').resize((256,256)) - text_mask.save('text_mask.png') - text_mask_tensor = to_tensor(text_mask).unsqueeze(0).cuda().sub_(0.5).div_(0.5) - with torch.no_grad(): - segmentation_mask = segmenter(text_mask_tensor) - - segmentation_mask = segmentation_mask.max(1)[1].squeeze(0) - segmentation_mask = filter_segmentation_mask(segmentation_mask) - segmentation_mask = torch.nn.functional.interpolate(segmentation_mask.unsqueeze(0).unsqueeze(0).float(), size=(size//2, size//2), mode='nearest') - - image_mask = transform_mask_pil(mask_image, size) - image_mask = torch.from_numpy(image_mask).cuda().unsqueeze(0).unsqueeze(0) - - orig_image = orig_image.convert('RGB').resize((size,size)) - image = orig_image - image_tensor = to_tensor(image).unsqueeze(0).cuda().sub_(0.5).div_(0.5) - masked_image = image_tensor * (1-image_mask) - masked_feature = vae.encode(masked_image).latent_dist.sample().repeat(sample_num, 1, 1, 1) - masked_feature = masked_feature * vae.config.scaling_factor - - image_mask = torch.nn.functional.interpolate(image_mask, size=(size//2, size//2), mode='nearest').repeat(sample_num, 1, 1, 1) - segmentation_mask = segmentation_mask * image_mask - feature_mask = torch.nn.functional.interpolate(image_mask, size=(size//8, size//8), mode='nearest') - - # diffusion process - intermediate_images = [] - for t in tqdm(scheduler.timesteps): - with torch.no_grad(): - noise_pred_cond = unet(sample=input, 
timestep=t, encoder_hidden_states=encoder_hidden_states, segmentation_mask=segmentation_mask, feature_mask=feature_mask, masked_feature=masked_feature).sample # b, 4, 64, 64 - noise_pred_uncond = unet(sample=input, timestep=t, encoder_hidden_states=encoder_hidden_states_nocond, segmentation_mask=segmentation_mask, feature_mask=feature_mask, masked_feature=masked_feature).sample # b, 4, 64, 64 - noisy_residual = noise_pred_uncond + slider_guidance * (noise_pred_cond - noise_pred_uncond) # b, 4, 64, 64 - prev_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample - input = prev_noisy_sample - intermediate_images.append(prev_noisy_sample) - - # decode and visualization - input = 1 / vae.config.scaling_factor * input - sample_images = vae.decode(input.float(), return_dict=False)[0] # (b, 3, 512, 512) - - image_pil = None - segmentation_mask = segmentation_mask[0].squeeze().cpu().numpy() - character_mask_pil = Image.fromarray(((segmentation_mask!=0)*255).astype('uint8')).resize((512,512)) - character_mask_highlight_pil = segmentation_mask_visualization(args.font_path,segmentation_mask) - character_mask_highlight_pil = character_mask_highlight_pil.resize((size, size)) - caption_pil = make_caption_pil(args.font_path, captions) - - # save pred_img - pred_image_list = [] - for image in sample_images.float(): - image = (image / 2 + 0.5).clamp(0, 1).unsqueeze(0) - image = image.cpu().permute(0, 2, 3, 1).numpy()[0] - image = Image.fromarray((image * 255).round().astype("uint8")).convert('RGB') - - # need to merge - - # image = inpainting_merge_image(orig_image, Image.fromarray(mask_image).convert('L'), image) - - pred_image_list.append(image) - - character_mask_pil.save('character_mask_pil.png') - character_mask_highlight_pil.save('character_mask_highlight_pil.png') - - - blank_pil = combine_image(args, size, None, pred_image_list, image_pil, character_mask_pil, character_mask_highlight_pil, caption_pil) - - - background = orig_image.resize((512, 512)) - alpha = Image.new('L', background.size, int(255 * 0.2)) - background.putalpha(alpha) - # foreground - foreground = Image.fromarray(mask_image).convert('L').resize((512, 512)) - threshold = 200 - alpha = foreground.point(lambda x: 0 if x > threshold else 255, '1') - foreground.putalpha(alpha) - merge_image = Image.alpha_composite(foreground.convert('RGBA'), background.convert('RGBA')).convert('RGB') - - intermediate_result = Image.new('RGB', (512*3, 512)) - intermediate_result.paste(merge_image, (0, 0)) - intermediate_result.paste(character_mask_pil, (512, 0)) - intermediate_result.paste(character_mask_highlight_pil, (512*2, 0)) - - return blank_pil, intermediate_result - -import gradio as gr - -with gr.Blocks() as demo: - - gr.HTML( - """ -
        -

        - TextDiffuser: Diffusion Models as Text Painters -

        -

        - NeurIPS 2023 -

        -

        - [arXiv] - [Code] - [ProjectPage] -

        -

        - We propose TextDiffuser, a flexible and controllable framework to generate images with visually appealing text that is coherent with backgrounds. - Main features include: (a) Text-to-Image: The user provides a prompt and encloses the keywords with single quotes (e.g., a text image of ‘hello’). The model first determines the layout of the keywords and then draws the image based on the layout and prompt. (b) Text-to-Image with Templates: The user provides a prompt and a template image containing text, which can be a printed, handwritten, or scene text image. These template images can be used to determine the layout of the characters. (c) Text Inpainting: The user provides an image and specifies the region to be modified along with the desired text content. The model is able to modify the original text or add text to areas without text. -
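As a rough illustration of mode (a): the demo delegates keyword placement to `get_layout_from_prompt` (imported from `model.layout_generator`, not shown in this file), which locates the quoted keywords and lays them out before generation. The snippet below is only a minimal sketch of the quote-extraction step under that assumption; the helper name `extract_quoted_keywords` is hypothetical and not part of the repository.

```python
import re

def extract_quoted_keywords(prompt: str):
    """Return the substrings enclosed in single quotes,
    e.g. "a text image of 'hello'" -> ['hello']."""
    return re.findall(r"'([^']+)'", prompt)

# Example: the keywords TextDiffuser would be asked to render.
print(extract_quoted_keywords("a text image of 'hello' and 'world'"))
# ['hello', 'world']
```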

        -

        - 🔥 News: We further trained TextDiffuser on the Stable Diffusion v2.1 pre-trained model, enlarging the resolution from 512x512 to 768x768 to enhance the legibility of small text. Additionally, we fine-tuned the model on images with high aesthetic scores, enabling it to generate images with richer details. -
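For reference, a hypothetical call to the `text_to_image` function defined earlier in this file (assuming the script has already downloaded and loaded the checkpoints) might look like the sketch below. The v2.1 backbone generates at 768x768, the function caps the sampling steps at 50, and it clamps the batch size to 1 for v2.1 to avoid out-of-memory errors. The prompt and output filenames are only examples.

```python
# Hypothetical usage sketch of the text_to_image() helper defined above.
result, intermediates = text_to_image(
    prompt="A storefront with 'Hello World' written on it.",  # keywords enclosed in single quotes
    slider_step=20,                    # sampling steps (capped at 50 inside the function)
    slider_guidance=7.5,               # classifier-free guidance scale
    slider_batch=4,                    # clamped to 1 when the v2.1 backbone is selected
    version='Stable Diffusion v2.1',   # 768x768 output; use 'Stable Diffusion v1.5' for 512x512
)
result.save('storefront_768.png')            # composite visualization built by combine_image()
intermediates.save('storefront_layout.png')  # layout and segmentation-mask visualization
```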

        - - - textdiffuser -
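To round out mode (c), a hypothetical invocation of the `text_inpainting` helper defined earlier might look like the following sketch; the prompt and image paths are taken from the demo's own example list, and the input types match the Gradio components (`orig_image` as a PIL image, `mask_image` as a NumPy array).

```python
# Hypothetical usage sketch of the text_inpainting() helper defined above.
import numpy as np
from PIL import Image

orig = Image.open('./images/text-inpainting/1.jpg')                                # gr.Image(type="pil")
mask = np.array(Image.open('./images/text-inpainting/1mask.jpg').convert('RGB'))   # gr.Image(type="numpy")

result, intermediates = text_inpainting(
    prompt="eye on security protection",   # desired text content for the masked region
    orig_image=orig,
    mask_image=mask,
    slider_step=20,
    slider_guidance=7.5,
    slider_batch=2,
    version='Stable Diffusion v1.5',       # 512x512 pipeline
)
result.save('inpainted.png')
```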
        - """) - - with gr.Tab("Text-to-Image"): - with gr.Row(): - with gr.Column(scale=1): - prompt = gr.Textbox(label="Input your prompt here. Please enclose keywords with 'single quotes', you may refer to the examples below. The current version only supports input in English characters.", placeholder="Placeholder 'Team' hat") - radio = gr.Radio(["Stable Diffusion v2.1", "Stable Diffusion v1.5"], label="Pre-trained Model", value="Stable Diffusion v1.5") - slider_step = gr.Slider(minimum=1, maximum=50, value=20, step=1, label="Sampling step", info="The sampling step for TextDiffuser.") - slider_guidance = gr.Slider(minimum=1, maximum=9, value=7.5, step=0.5, label="Scale of classifier-free guidance", info="The scale of classifier-free guidance and is set to 7.5 in default.") - slider_batch = gr.Slider(minimum=1, maximum=4, value=4, step=1, label="Batch size", info="The number of images to be sampled. Maximum number is set to 1 for SD v2.1 to avoid OOM.") - # slider_seed = gr.Slider(minimum=1, maximum=10000, label="Seed", randomize=True) - button = gr.Button("Generate") - - with gr.Column(scale=1): - output = gr.Image(label='Generated image') - - with gr.Accordion("Intermediate results", open=False): - gr.Markdown("Layout, segmentation mask, and details of segmentation mask from left to right.") - intermediate_results = gr.Image(label='') - - gr.Markdown("## Prompt Examples") - gr.Examples( - [ - ["Distinguished poster of 'SPIDERMAN'. Trending on ArtStation and Pixiv. A vibrant digital oil painting. A highly detailed fantasy character illustration by Wayne Reynolds and Charles Monet and Gustave Dore and Carl Critchlow and Bram Sels"], - ["A detailed portrait of a fox guardian with a shield with 'Kung Fu' written on it, by victo ngai and justin gerard, digital art, realistic painting, very detailed, fantasy, high definition, cinematic light, dnd, trending on artstation"], - ["portrait of a 'dragon', concept art, sumi - e style, intricate linework, green smoke, artstation, trending, highly detailed, smooth, focus, art by yoji shinkawa,"], - ["elderly woman dressed in extremely colorful clothes with many strange patterns posing for a high fashion photoshoot of 'FASHION', haute couture, golden hour, artstation, by J. C. Leyendecker and Peter Paul Rubens"], - ["epic digital art of a luxury yacht named 'Time Machine' driving through very dark hard edged city towers from tron movie, faint tall mountains in background, wlop, pixiv"], - ["A poster of 'Adventurer'. A beautiful so tall boy with big eyes and small nose is in the jungle, he wears normal clothes and shows his full length, which we see from the front, unreal engine, cozy indoor lighting, artstation, detailed"], - ["A poster of 'AI BABY'. 
Cute and adorable cartoon it baby, fantasy, dreamlike, surrealism, super cute, trending on artstation"], - ["'Team' hat"], - ["Thanksgiving 'Fam' Mens T Shirt"], - ["A storefront with 'Hello World' written on it."], - ["A poster titled 'Quails of North America', showing different kinds of quails."], - ["A storefront with 'Deep Learning' written on it."], - ["An antique bottle labeled 'Energy Tonic'"], - ["A TV show poster titled 'Tango argentino'"], - ["A TV show poster with logo 'The Dry' on it"], - ["Stupid 'History' eBook Tales of Stupidity Strangeness"], - ["Photos of 'Sampa Hostel'"], - ["A large recipe book titled 'Recipes from Peru'."], - ["New York Skyline with 'Diffusion' written with fireworks on the sky"], - ["Books with the word 'Science' printed on them"], - ["A globe with the words 'Planet Earth' written in bold letters with continents in bright colors"], - ["A logo for the company 'EcoGrow', where the letters look like plants"], - ], - prompt, - examples_per_page=100 - ) - - button.click(text_to_image, inputs=[prompt,slider_step,slider_guidance,slider_batch,radio], outputs=[output,intermediate_results]) - - with gr.Tab("Text-to-Image-with-Template"): - with gr.Row(): - with gr.Column(scale=1): - prompt = gr.Textbox(label='Input your prompt here.') - template_image = gr.Image(label='Template image', type="pil") - radio = gr.Radio(["Stable Diffusion v2.1", "Stable Diffusion v1.5"], label="Pre-trained Model", value="Stable Diffusion v1.5") - slider_step = gr.Slider(minimum=1, maximum=50, value=20, step=1, label="Sampling step", info="The sampling step for TextDiffuser.") - slider_guidance = gr.Slider(minimum=1, maximum=9, value=7.5, step=0.5, label="Scale of classifier-free guidance", info="The scale of classifier-free guidance and is set to 7.5 in default.") - slider_batch = gr.Slider(minimum=1, maximum=4, value=4, step=1, label="Batch size", info="The number of images to be sampled. Maximum number is set to 1 for SD v2.1 to avoid OOM.") - # binary = gr.Radio(["park", "zoo", "road"], label="Location", info="Where did they go?") - binary = gr.Checkbox(label="Binarization", bool=True, info="Whether to binarize the template image? 
You may need it when using handwritten images as templates.") - button = gr.Button("Generate") - - with gr.Column(scale=1): - output = gr.Image(label='Generated image') - - with gr.Accordion("Intermediate results", open=False): - gr.Markdown("Template image, segmentation mask, and details of segmentation mask from left to right.") - intermediate_results = gr.Image(label='') - - gr.Markdown("## Prompt and Template-Image Examples") - gr.Examples( - [ - ["summer garden, artwork, highly detailed, sharp focus, realist, digital painting, artstation, concept art, art by jay oh, greg rutkowski, wlop", './images/text-to-image-with-template/6.jpg', False], - ["a hand-drawn blueprint for a time machine with the caption 'Time traveling device'", './images/text-to-image-with-template/5.jpg', False], - ["a book called summer vibe written by diffusion model", './images/text-to-image-with-template/7.jpg', False], - ["a work company", './images/text-to-image-with-template/8.jpg', False], - ["a book of AI in next century written by AI robot ", './images/text-to-image-with-template/9.jpg', False], - ["A board saying having a dog named shark at the beach was a mistake", './images/text-to-image-with-template/1.jpg', False], - ["an elephant holds a newspaper that is written elephant take over the world", './images/text-to-image-with-template/2.jpg', False], - ["a mouse with a flashlight saying i am afraid of the dark", './images/text-to-image-with-template/4.jpg', False], - ["a birthday cake of happy birthday to xyz", './images/text-to-image-with-template/10.jpg', False], - ["a poster of monkey music festival", './images/text-to-image-with-template/11.jpg', False], - ["a meme of are you kidding", './images/text-to-image-with-template/12.jpg', False], - ["a 3d model of a 1980s-style computer with the text my old habit on the screen", './images/text-to-image-with-template/13.jpg', True], - ["a board of hello world", './images/text-to-image-with-template/15.jpg', True], - ["a microsoft bag", './images/text-to-image-with-template/16.jpg', True], - ["a dog holds a paper saying please adopt me", './images/text-to-image-with-template/17.jpg', False], - ["a hello world banner", './images/text-to-image-with-template/18.jpg', False], - ["a stop pizza", './images/text-to-image-with-template/19.jpg', False], - ["a dress with text do not read the next sentence", './images/text-to-image-with-template/20.jpg', False], - ], - [prompt,template_image, binary], - examples_per_page=100 - ) - - button.click(text_to_image_with_template, inputs=[prompt,template_image,slider_step,slider_guidance,slider_batch,binary,radio], outputs=[output,intermediate_results]) - - with gr.Tab("Text-Inpainting"): - with gr.Row(): - with gr.Column(scale=1): - prompt = gr.Textbox(label='Input your prompt here.') - with gr.Row(): - orig_image = gr.Image(label='Original image', type="pil") - mask_image = gr.Image(label='Mask image', type="numpy") - radio = gr.Radio(["Stable Diffusion v2.1", "Stable Diffusion v1.5"], label="Pre-trained Model", value="Stable Diffusion v1.5") - slider_step = gr.Slider(minimum=1, maximum=50, value=20, step=1, label="Sampling step", info="The sampling step for TextDiffuser.") - slider_guidance = gr.Slider(minimum=1, maximum=9, value=7.5, step=0.5, label="Scale of classifier-free guidance", info="The scale of classifier-free guidance and is set to 7.5 in default.") - slider_batch = gr.Slider(minimum=1, maximum=4, value=4, step=1, label="Batch size", info="The number of images to be sampled. 
Maximum number is set to 1 for SD v2.1 to avoid OOM.") - button = gr.Button("Generate") - with gr.Column(scale=1): - output = gr.Image(label='Generated image') - with gr.Accordion("Intermediate results", open=False): - gr.Markdown("Masked image, segmentation mask, and details of segmentation mask from left to right.") - intermediate_results = gr.Image(label='') - - gr.Markdown("## Prompt, Original Image, and Mask Examples") - gr.Examples( - [ - ["eye on security protection", './images/text-inpainting/1.jpg', './images/text-inpainting/1mask.jpg'], - ["a logo of poppins", './images/text-inpainting/2.jpg', './images/text-inpainting/2mask.jpg'], - ["tips for middle space living ", './images/text-inpainting/3.jpg', './images/text-inpainting/3mask.jpg'], - ["george is a proud big sister", './images/text-inpainting/5.jpg', './images/text-inpainting/5mask.jpg'], - ["we are the great people", './images/text-inpainting/6.jpg', './images/text-inpainting/6mask.jpg'], - ["tech house interesting terrace party", './images/text-inpainting/7.jpg', './images/text-inpainting/7mask.jpg'], - ["2023", './images/text-inpainting/8.jpg', './images/text-inpainting/8mask.jpg'], - ["wear protective equipment necessary", './images/text-inpainting/9.jpg', './images/text-inpainting/9mask.jpg'], - ["a good day in the hometown", './images/text-inpainting/10.jpg', './images/text-inpainting/10mask.jpg'], - ["a boy paints good morning on a board", './images/text-inpainting/11.jpg', './images/text-inpainting/11mask.jpg'], - ["the word my gift on a basketball", './images/text-inpainting/13.jpg', './images/text-inpainting/13mask.jpg'], - ["a logo of mono", './images/text-inpainting/14.jpg', './images/text-inpainting/14mask.jpg'], - ["a board saying assyrian on unflagging fry devastates", './images/text-inpainting/15.jpg', './images/text-inpainting/15mask.jpg'], - ["a board saying session", './images/text-inpainting/16.jpg', './images/text-inpainting/16mask.jpg'], - ["rankin dork", './images/text-inpainting/17mask.jpg', './images/text-inpainting/17.jpg'], - ["a coin of mem", './images/text-inpainting/18mask.jpg', './images/text-inpainting/18.jpg'], - ["a board without text", './images/text-inpainting/19.jpg', './images/text-inpainting/19mask.jpg'], - ["a board without text", './images/text-inpainting/20.jpg', './images/text-inpainting/20mask.jpg'], - - ], - [prompt,orig_image,mask_image], - ) - - - button.click(text_inpainting, inputs=[prompt,orig_image,mask_image,slider_step,slider_guidance,slider_batch,radio], outputs=[output, intermediate_results]) - - - - gr.HTML( - """ -
        -

        - Version: 1.0 -

        -

        - Contact: - For help or issues using TextDiffuser, please email Jingye Chen (qwerty.chen@connect.ust.hk), Yupan Huang (huangyp28@mail2.sysu.edu.cn) or submit a GitHub issue. For other communications related to TextDiffuser, please contact Lei Cui (lecu@microsoft.com) or Furu Wei (fuwei@microsoft.com). -

        -

        - Disclaimer: - Please note that the demo is intended for academic and research purposes ONLY. Any use of the demo for generating inappropriate content is strictly prohibited. The responsibility for any misuse or inappropriate use of the demo lies solely with the users who generated such content, and this demo shall not be held liable for any such use. -

        -
        - """ - ) - -demo.launch() \ No newline at end of file diff --git a/spaces/Kuachi/ai-voice/text/__init__.py b/spaces/Kuachi/ai-voice/text/__init__.py deleted file mode 100644 index 663c4b6416affb53c9dc56dddbc8b2b65d4bf518..0000000000000000000000000000000000000000 --- a/spaces/Kuachi/ai-voice/text/__init__.py +++ /dev/null @@ -1,57 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence, clean_text - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/Kunal7/squats-analysis/utils.py b/spaces/Kunal7/squats-analysis/utils.py deleted file mode 100644 index a04d2c66ac6708beae63bee4e72f3b1c040e646f..0000000000000000000000000000000000000000 --- a/spaces/Kunal7/squats-analysis/utils.py +++ /dev/null @@ -1,168 +0,0 @@ -import cv2 -import mediapipe as mp -import numpy as np - -correct = cv2.imread('right.png') -correct = cv2.cvtColor(correct, cv2.COLOR_BGR2RGB) -incorrect = cv2.imread('wrong.png') -incorrect = cv2.cvtColor(incorrect, cv2.COLOR_BGR2RGB) - -def draw_rounded_rect(img, rect_start, rect_end, corner_width, box_color): - - x1, y1 = rect_start - x2, y2 = rect_end - w = corner_width - - # draw filled rectangles - cv2.rectangle(img, (x1 + w, y1), (x2 - w, y1 + w), box_color, -1) - cv2.rectangle(img, (x1 + w, y2 - w), (x2 - w, y2), box_color, -1) - cv2.rectangle(img, (x1, y1 + w), (x1 + w, y2 - w), box_color, -1) - cv2.rectangle(img, (x2 - w, y1 + w), (x2, y2 - w), box_color, -1) - cv2.rectangle(img, (x1 + w, y1 + w), (x2 - w, y2 - w), box_color, -1) - - - # draw filled ellipses - cv2.ellipse(img, (x1 + w, y1 + w), (w, w), - angle = 0, startAngle = -90, endAngle = -180, color = box_color, thickness = -1) - - cv2.ellipse(img, (x2 - w, y1 + w), (w, w), - angle = 0, startAngle = 0, endAngle = -90, color = box_color, thickness = -1) - - cv2.ellipse(img, (x1 + w, y2 - w), (w, w), - angle = 0, startAngle = 90, endAngle = 180, color = box_color, thickness = -1) - - cv2.ellipse(img, (x2 - w, y2 - w), (w, w), 
- angle = 0, startAngle = 0, endAngle = 90, color = box_color, thickness = -1) - - return img - - - - -def draw_dotted_line(frame, lm_coord, start, end, line_color): - pix_step = 0 - - for i in range(start, end+1, 8): - cv2.circle(frame, (lm_coord[0], i+pix_step), 2, line_color, -1, lineType=cv2.LINE_AA) - - return frame - -def draw_text( - img, - msg, - width = 7, - font=cv2.FONT_HERSHEY_SIMPLEX, - pos=(0, 0), - font_scale=1, - font_thickness=2, - text_color=(0, 255, 0), - text_color_bg=(0, 0, 0), - box_offset=(20, 10), - overlay_image = False, - overlay_type = None -): - - offset = box_offset - x, y = pos - text_size, _ = cv2.getTextSize(msg, font, font_scale, font_thickness) - text_w, text_h = text_size - - rec_start = tuple(p - o for p, o in zip(pos, offset)) - rec_end = tuple(m + n - o for m, n, o in zip((x + text_w, y + text_h), offset, (25, 0))) - - resize_height = 0 - - if overlay_image: - resize_height = rec_end[1] - rec_start[1] - # print("Height: ", resize_height) - # print("Width: ", rec_end[0] - rec_start[0]) - img = draw_rounded_rect(img, rec_start, (rec_end[0]+resize_height, rec_end[1]), width, text_color_bg) - if overlay_type == "correct": - overlay_res = cv2.resize(correct, (resize_height, resize_height), interpolation = cv2.INTER_AREA) - elif overlay_type == "incorrect": - overlay_res = cv2.resize(incorrect, (resize_height, resize_height), interpolation = cv2.INTER_AREA) - - img[rec_start[1]:rec_start[1]+resize_height, rec_start[0]+width:rec_start[0]+width+resize_height] = overlay_res - - else: - img = draw_rounded_rect(img, rec_start, rec_end, width, text_color_bg) - - - cv2.putText( - img, - msg, - (int(rec_start[0]+resize_height + 8), int(y + text_h + font_scale - 1)), - font, - font_scale, - text_color, - font_thickness, - cv2.LINE_AA, - ) - - - - return text_size - - - -def find_angle(p1, p2, ref_pt = np.array([0,0])): - p1_ref = p1 - ref_pt - p2_ref = p2 - ref_pt - - cos_theta = (np.dot(p1_ref,p2_ref)) / (1.0 * np.linalg.norm(p1_ref) * np.linalg.norm(p2_ref)) - theta = np.arccos(np.clip(cos_theta, -1.0, 1.0)) - - degree = int(180 / np.pi) * theta - - return int(degree) - - - - - -def get_landmark_array(pose_landmark, key, frame_width, frame_height): - - denorm_x = int(pose_landmark[key].x * frame_width) - denorm_y = int(pose_landmark[key].y * frame_height) - - return np.array([denorm_x, denorm_y]) - - - - -def get_landmark_features(kp_results, dict_features, feature, frame_width, frame_height): - - if feature == 'nose': - return get_landmark_array(kp_results, dict_features[feature], frame_width, frame_height) - - elif feature == 'left' or 'right': - shldr_coord = get_landmark_array(kp_results, dict_features[feature]['shoulder'], frame_width, frame_height) - elbow_coord = get_landmark_array(kp_results, dict_features[feature]['elbow'], frame_width, frame_height) - wrist_coord = get_landmark_array(kp_results, dict_features[feature]['wrist'], frame_width, frame_height) - hip_coord = get_landmark_array(kp_results, dict_features[feature]['hip'], frame_width, frame_height) - knee_coord = get_landmark_array(kp_results, dict_features[feature]['knee'], frame_width, frame_height) - ankle_coord = get_landmark_array(kp_results, dict_features[feature]['ankle'], frame_width, frame_height) - foot_coord = get_landmark_array(kp_results, dict_features[feature]['foot'], frame_width, frame_height) - - return shldr_coord, elbow_coord, wrist_coord, hip_coord, knee_coord, ankle_coord, foot_coord - - else: - raise ValueError("feature needs to be either 'nose', 'left' or 'right") - - -def 
get_mediapipe_pose( - static_image_mode = False, - model_complexity = 1, - smooth_landmarks = True, - min_detection_confidence = 0.5, - min_tracking_confidence = 0.5 - - ): - pose = mp.solutions.pose.Pose( - static_image_mode = static_image_mode, - model_complexity = model_complexity, - smooth_landmarks = smooth_landmarks, - min_detection_confidence = min_detection_confidence, - min_tracking_confidence = min_tracking_confidence - ) - return pose \ No newline at end of file diff --git a/spaces/KyanChen/FunSR/datasets/wrappers.py b/spaces/KyanChen/FunSR/datasets/wrappers.py deleted file mode 100644 index 8537fefdadb842b21f28eafa3359870136b3f09b..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/datasets/wrappers.py +++ /dev/null @@ -1,248 +0,0 @@ -import functools -import random -import math -from PIL import Image - -import numpy as np -import torch -from torch.utils.data import Dataset -from torchvision import transforms - -from datasets import register -from utils import to_pixel_samples - - -@register('liff_test_warp') -class LIIFTestWarp(Dataset): - def __init__(self, dataset, scale_ratio, val_mode=False, sample_q=None): - self.dataset = dataset - self.scale_ratio = scale_ratio - self.val_mode = val_mode - self.sample_q = sample_q - print('hr_scale: ', int(scale_ratio*32)) - - def __len__(self): - return len(self.dataset) - - def __getitem__(self, idx): - img_lr, img_hr = self.dataset[idx] - if img_hr.shape[-1] < 256: - img_hr = transforms.Resize([256, 256])(img_hr) - - img_hr = transforms.Resize([self.scale_ratio*32, self.scale_ratio*32])(img_hr) - - hr_coord, hr_rgb = to_pixel_samples(img_hr.contiguous()) - - if self.sample_q is not None: - sample_lst = np.random.choice(len(hr_coord), self.sample_q, replace=False) - hr_coord = hr_coord[sample_lst] - hr_rgb = hr_rgb[sample_lst] - - cell = torch.ones_like(hr_coord) - cell[:, 0] *= 2 / img_hr.shape[-2] - cell[:, 1] *= 2 / img_hr.shape[-1] - - return { - 'inp': img_lr, - 'coord': hr_coord, - 'cell': cell, - 'gt': hr_rgb - } - -@register('sr-implicit-paired') -class SRImplicitPaired(Dataset): - - def __init__(self, dataset, inp_size=None, augment=False, sample_q=None): - self.dataset = dataset - self.inp_size = inp_size - self.augment = augment - self.sample_q = sample_q - - def __len__(self): - return len(self.dataset) - - def __getitem__(self, idx): - img_lr, img_hr = self.dataset[idx] - if img_hr.shape[-1] < 256: - img_hr = transforms.Resize([256, 256])(img_hr) - - s = img_hr.shape[-2] // img_lr.shape[-2] # assume int scale - if self.inp_size is None: - h_lr, w_lr = img_lr.shape[-2:] - img_hr = img_hr[:, :h_lr * s, :w_lr * s] - crop_lr, crop_hr = img_lr, img_hr - else: - w_lr = self.inp_size - x0 = random.randint(0, img_lr.shape[-2] - w_lr) - y0 = random.randint(0, img_lr.shape[-1] - w_lr) - crop_lr = img_lr[:, x0: x0 + w_lr, y0: y0 + w_lr] - w_hr = w_lr * s - x1 = x0 * s - y1 = y0 * s - crop_hr = img_hr[:, x1: x1 + w_hr, y1: y1 + w_hr] - - if self.augment: - hflip = random.random() < 0.5 - vflip = random.random() < 0.5 - dflip = random.random() < 0.5 - - def augment(x): - if hflip: - x = x.flip(-2) - if vflip: - x = x.flip(-1) - if dflip: - x = x.transpose(-2, -1) - return x - - crop_lr = augment(crop_lr) - crop_hr = augment(crop_hr) - - hr_coord, hr_rgb = to_pixel_samples(crop_hr.contiguous()) - - if self.sample_q is not None: - sample_lst = np.random.choice( - len(hr_coord), self.sample_q, replace=False) - hr_coord = hr_coord[sample_lst] - hr_rgb = hr_rgb[sample_lst] - - cell = torch.ones_like(hr_coord) - cell[:, 0] 
*= 2 / crop_hr.shape[-2] - cell[:, 1] *= 2 / crop_hr.shape[-1] - - return { - 'inp': crop_lr, - 'coord': hr_coord, - 'cell': cell, - 'gt': hr_rgb - } - - -def resize_fn(img, size): - return transforms.ToTensor()( - transforms.Resize(size, Image.BICUBIC)( - transforms.ToPILImage()(img))) - - -@register('sr-implicit-downsampled') -class SRImplicitDownsampled(Dataset): - - def __init__(self, dataset, inp_size=None, scale_min=1, scale_max=None, - augment=False, sample_q=None): - self.dataset = dataset - self.inp_size = inp_size - self.scale_min = scale_min - if scale_max is None: - scale_max = scale_min - self.scale_max = scale_max - self.augment = augment - self.sample_q = sample_q - - def __len__(self): - return len(self.dataset) - - def __getitem__(self, idx): - img = self.dataset[idx] - s = random.uniform(self.scale_min, self.scale_max) - - if self.inp_size is None: - h_lr = math.floor(img.shape[-2] / s + 1e-9) - w_lr = math.floor(img.shape[-1] / s + 1e-9) - img = img[:, :round(h_lr * s), :round(w_lr * s)] # assume round int - img_down = resize_fn(img, (h_lr, w_lr)) - crop_lr, crop_hr = img_down, img - else: - w_lr = self.inp_size - w_hr = round(w_lr * s) - x0 = random.randint(0, img.shape[-2] - w_hr) - y0 = random.randint(0, img.shape[-1] - w_hr) - crop_hr = img[:, x0: x0 + w_hr, y0: y0 + w_hr] - crop_lr = resize_fn(crop_hr, w_lr) - - if self.augment: - hflip = random.random() < 0.5 - vflip = random.random() < 0.5 - dflip = random.random() < 0.5 - - def augment(x): - if hflip: - x = x.flip(-2) - if vflip: - x = x.flip(-1) - if dflip: - x = x.transpose(-2, -1) - return x - - crop_lr = augment(crop_lr) - crop_hr = augment(crop_hr) - - hr_coord, hr_rgb = to_pixel_samples(crop_hr.contiguous()) - - if self.sample_q is not None: - sample_lst = np.random.choice( - len(hr_coord), self.sample_q, replace=False) - hr_coord = hr_coord[sample_lst] - hr_rgb = hr_rgb[sample_lst] - - cell = torch.ones_like(hr_coord) - cell[:, 0] *= 2 / crop_hr.shape[-2] - cell[:, 1] *= 2 / crop_hr.shape[-1] - - return { - 'inp': crop_lr, - 'coord': hr_coord, - 'cell': cell, - 'gt': hr_rgb - } - - -@register('sr-implicit-uniform-varied') -class SRImplicitUniformVaried(Dataset): - - def __init__(self, dataset, size_min, size_max=None, - augment=False, gt_resize=None, sample_q=None): - self.dataset = dataset - self.size_min = size_min - if size_max is None: - size_max = size_min - self.size_max = size_max - self.augment = augment - self.gt_resize = gt_resize - self.sample_q = sample_q - - def __len__(self): - return len(self.dataset) - - def __getitem__(self, idx): - img_lr, img_hr = self.dataset[idx] - # p = idx / (len(self.dataset) - 1) - p = random.random() - w_hr = round(self.size_min + (self.size_max - self.size_min) * p) - img_hr = resize_fn(img_hr, w_hr) - - if self.augment: - if random.random() < 0.5: - img_lr = img_lr.flip(-1) - img_hr = img_hr.flip(-1) - - if self.gt_resize is not None: - img_hr = resize_fn(img_hr, self.gt_resize) - - hr_coord, hr_rgb = to_pixel_samples(img_hr) - - if self.sample_q is not None: - sample_lst = np.random.choice( - len(hr_coord), self.sample_q, replace=False) - hr_coord = hr_coord[sample_lst] - hr_rgb = hr_rgb[sample_lst] - - cell = torch.ones_like(hr_coord) - cell[:, 0] *= 2 / img_hr.shape[-2] - cell[:, 1] *= 2 / img_hr.shape[-1] - - return { - 'inp': img_lr, - 'coord': hr_coord, - 'cell': cell, - 'gt': hr_rgb - } diff --git a/spaces/KyanChen/RSPrompter/configs/rsprompter/rsprompter_anchor_ssdd_config.py 
b/spaces/KyanChen/RSPrompter/configs/rsprompter/rsprompter_anchor_ssdd_config.py deleted file mode 100644 index 8a72527bcba2ca3af2e6b88de95ef5a58a681382..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/configs/rsprompter/rsprompter_anchor_ssdd_config.py +++ /dev/null @@ -1,347 +0,0 @@ -custom_imports = dict(imports=['mmseg.datasets', 'mmseg.models'], allow_failed_imports=False) - -sub_model_train = [ - 'panoptic_head', - 'data_preprocessor' -] - -sub_model_optim = { - 'panoptic_head': {'lr_mult': 1}, -} - -max_epochs = 1000 - -optimizer = dict( - type='AdamW', - sub_model=sub_model_optim, - lr=0.0005, - weight_decay=1e-3 -) - -param_scheduler = [ - # warm up learning rate scheduler - dict( - type='LinearLR', - start_factor=1e-4, - by_epoch=True, - begin=0, - end=1, - # update by iter - convert_to_iter_based=True), - # main learning rate scheduler - dict( - type='CosineAnnealingLR', - T_max=max_epochs, - by_epoch=True, - begin=1, - end=max_epochs, - ), -] - -param_scheduler_callback = dict( - type='ParamSchedulerHook' -) - -evaluator_ = dict( - type='CocoPLMetric', - metric=['bbox', 'segm'], - proposal_nums=[1, 10, 100] -) - -evaluator = dict( - val_evaluator=evaluator_, -) - - -image_size = (1024, 1024) - -data_preprocessor = dict( - type='mmdet.DetDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - bgr_to_rgb=True, - pad_size_divisor=32, - pad_mask=True, - mask_pad_value=0, -) - -num_things_classes = 1 -num_stuff_classes = 0 -num_classes = num_things_classes + num_stuff_classes -prompt_shape = (30, 4) - -model_cfg = dict( - type='SegSAMAnchorPLer', - hyperparameters=dict( - optimizer=optimizer, - param_scheduler=param_scheduler, - evaluator=evaluator, - ), - need_train_names=sub_model_train, - data_preprocessor=data_preprocessor, - backbone=dict( - type='vit_h', - checkpoint='pretrain/sam/sam_vit_h_4b8939.pth', - # type='vit_b', - # checkpoint='pretrain/sam/sam_vit_b_01ec64.pth', - ), - panoptic_head=dict( - type='SAMAnchorInstanceHead', - neck=dict( - type='SAMAggregatorNeck', - in_channels=[1280] * 32, - # in_channels=[768] * 12, - inner_channels=32, - selected_channels=range(8, 32, 2), - # selected_channels=range(4, 12, 2), - out_channels=256, - up_sample_scale=4, - ), - rpn_head=dict( - type='mmdet.RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='mmdet.AnchorGenerator', - scales=[2, 4, 8, 16, 32, 64], - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32]), - bbox_coder=dict( - type='mmdet.DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='mmdet.CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='mmdet.SmoothL1Loss', loss_weight=1.0)), - roi_head=dict( - type='SAMAnchorPromptRoIHead', - bbox_roi_extractor=dict( - type='mmdet.SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[8, 16, 32]), - bbox_head=dict( - type='mmdet.Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=num_classes, - bbox_coder=dict( - type='mmdet.DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='mmdet.CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='mmdet.SmoothL1Loss', loss_weight=1.0)), - mask_roi_extractor=dict( - type='mmdet.SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, 
sampling_ratio=0), - out_channels=256, - featmap_strides=[8, 16, 32]), - mask_head=dict( - type='SAMPromptMaskHead', - per_query_point=prompt_shape[1], - with_sincos=True, - class_agnostic=True, - loss_mask=dict( - type='mmdet.CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='mmdet.MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='mmdet.RandomSampler', - num=512, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='mmdet.MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='mmdet.RandomSampler', - num=256, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=1024, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5) - ) - ) -) - -task_name = 'ssdd_ins' -exp_name = 'E20230629_2' -logger = dict( - type='WandbLogger', - project=task_name, - group='sam-anchor', - name=exp_name -) - - -callbacks = [ - param_scheduler_callback, - dict( - type='ModelCheckpoint', - dirpath=f'results/{task_name}/{exp_name}/checkpoints', - save_last=True, - mode='max', - monitor='valsegm_map_0', - save_top_k=3, - filename='epoch_{epoch}-map_{valsegm_map_0:.4f}' - ), - dict( - type='LearningRateMonitor', - logging_interval='step' - ) -] - - -trainer_cfg = dict( - compiled_model=False, - accelerator="auto", - strategy="auto", - # strategy="ddp", - # strategy='ddp_find_unused_parameters_true', - # precision='32', - # precision='16-mixed', - devices=8, - default_root_dir=f'results/{task_name}/{exp_name}', - # default_root_dir='results/tmp', - max_epochs=max_epochs, - logger=logger, - callbacks=callbacks, - log_every_n_steps=5, - check_val_every_n_epoch=5, - benchmark=True, - # sync_batchnorm=True, - # fast_dev_run=True, - - # limit_train_batches=1, - # limit_val_batches=0, - # limit_test_batches=None, - # limit_predict_batches=None, - # overfit_batches=0.0, - - # val_check_interval=None, - # num_sanity_val_steps=0, - # enable_checkpointing=None, - # enable_progress_bar=None, - # enable_model_summary=None, - # accumulate_grad_batches=32, - # gradient_clip_val=15, - # gradient_clip_algorithm='norm', - # deterministic=None, - # inference_mode: bool=True, - use_distributed_sampler=True, - # profiler="simple", - # detect_anomaly=False, - # barebones=False, - # plugins=None, - # reload_dataloaders_every_n_epochs=0, -) - - -backend_args = None -train_pipeline = [ - dict(type='mmdet.LoadImageFromFile'), - dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='mmdet.Resize', scale=image_size), - dict(type='mmdet.RandomFlip', prob=0.5), - dict(type='mmdet.PackDetInputs') -] - -test_pipeline = [ - dict(type='mmdet.LoadImageFromFile', backend_args=backend_args), - dict(type='mmdet.Resize', scale=image_size), - # If you don't have a gt annotation, delete the pipeline - dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True), - dict( - 
type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor')) -] - - -train_batch_size_per_gpu = 2 -train_num_workers = 2 -test_batch_size_per_gpu = 2 -test_num_workers = 2 -persistent_workers = True - -data_parent = '/mnt/search01/dataset/cky_data/SSDD' -dataset_type = 'SSDDInsSegDataset' - - -val_loader = dict( - batch_size=test_batch_size_per_gpu, - num_workers=test_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - dataset=dict( - type=dataset_type, - data_root=data_parent, - # ann_file='NWPU_instances_val.json', - # data_prefix=dict(img_path='positive image set'), - ann_file='annotations/SSDD_instances_val.json', - data_prefix=dict(img_path='imgs'), - test_mode=True, - filter_cfg=dict(filter_empty_gt=True, min_size=32), - pipeline=test_pipeline, - backend_args=backend_args)) - -datamodule_cfg = dict( - type='PLDataModule', - train_loader=dict( - batch_size=train_batch_size_per_gpu, - num_workers=train_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - dataset=dict( - type=dataset_type, - data_root=data_parent, - # ann_file='NWPU_instances_train.json', - # data_prefix=dict(img_path='positive image set'), - ann_file='annotations/SSDD_instances_train.json', - data_prefix=dict(img_path='imgs'), - filter_cfg=dict(filter_empty_gt=True, min_size=32), - pipeline=train_pipeline, - backend_args=backend_args) - ), - val_loader=val_loader, - # test_loader=val_loader - predict_loader=val_loader -) \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/apis/model.py b/spaces/KyanChen/RSPrompter/mmpretrain/apis/model.py deleted file mode 100644 index eba475e7f791f42eb9aec384afec947f72722f27..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpretrain/apis/model.py +++ /dev/null @@ -1,408 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import fnmatch -import os.path as osp -import re -import warnings -from os import PathLike -from pathlib import Path -from typing import List, Tuple, Union - -from mmengine.config import Config -from modelindex.load_model_index import load -from modelindex.models.Model import Model - - -class ModelHub: - """A hub to host the meta information of all pre-defined models.""" - _models_dict = {} - __mmpretrain_registered = False - - @classmethod - def register_model_index(cls, - model_index_path: Union[str, PathLike], - config_prefix: Union[str, PathLike, None] = None): - """Parse the model-index file and register all models. - - Args: - model_index_path (str | PathLike): The path of the model-index - file. - config_prefix (str | PathLike | None): The prefix of all config - file paths in the model-index file. - """ - model_index = load(str(model_index_path)) - model_index.build_models_with_collections() - - for metainfo in model_index.models: - model_name = metainfo.name.lower() - if metainfo.name in cls._models_dict: - raise ValueError( - 'The model name {} is conflict in {} and {}.'.format( - model_name, osp.abspath(metainfo.filepath), - osp.abspath(cls._models_dict[model_name].filepath))) - metainfo.config = cls._expand_config_path(metainfo, config_prefix) - cls._models_dict[model_name] = metainfo - - @classmethod - def get(cls, model_name): - """Get the model's metainfo by the model name. - - Args: - model_name (str): The name of model. - - Returns: - modelindex.models.Model: The metainfo of the specified model. 
- """ - cls._register_mmpretrain_models() - # lazy load config - metainfo = copy.deepcopy(cls._models_dict.get(model_name.lower())) - if metainfo is None: - raise ValueError( - f'Failed to find model "{model_name}". please use ' - '`mmpretrain.list_models` to get all available names.') - if isinstance(metainfo.config, str): - metainfo.config = Config.fromfile(metainfo.config) - return metainfo - - @staticmethod - def _expand_config_path(metainfo: Model, - config_prefix: Union[str, PathLike] = None): - if config_prefix is None: - config_prefix = osp.dirname(metainfo.filepath) - - if metainfo.config is None or osp.isabs(metainfo.config): - config_path: str = metainfo.config - else: - config_path = osp.abspath(osp.join(config_prefix, metainfo.config)) - - return config_path - - @classmethod - def _register_mmpretrain_models(cls): - # register models in mmpretrain - if not cls.__mmpretrain_registered: - from importlib_metadata import distribution - root = distribution('mmpretrain').locate_file('mmpretrain') - model_index_path = root / '.mim' / 'model-index.yml' - ModelHub.register_model_index( - model_index_path, config_prefix=root / '.mim') - cls.__mmpretrain_registered = True - - @classmethod - def has(cls, model_name): - """Whether a model name is in the ModelHub.""" - return model_name in cls._models_dict - - -def get_model(model: Union[str, Config], - pretrained: Union[str, bool] = False, - device=None, - device_map=None, - offload_folder=None, - url_mapping: Tuple[str, str] = None, - **kwargs): - """Get a pre-defined model or create a model from config. - - Args: - model (str | Config): The name of model, the config file path or a - config instance. - pretrained (bool | str): When use name to specify model, you can - use ``True`` to load the pre-defined pretrained weights. And you - can also use a string to specify the path or link of weights to - load. Defaults to False. - device (str | torch.device | None): Transfer the model to the target - device. Defaults to None. - device_map (str | dict | None): A map that specifies where each - submodule should go. It doesn't need to be refined to each - parameter/buffer name, once a given module name is inside, every - submodule of it will be sent to the same device. You can use - `device_map="auto"` to automatically generate the device map. - Defaults to None. - offload_folder (str | None): If the `device_map` contains any value - `"disk"`, the folder where we will offload weights. - url_mapping (Tuple[str, str], optional): The mapping of pretrained - checkpoint link. For example, load checkpoint from a local dir - instead of download by ``('https://.*/', './checkpoint')``. - Defaults to None. - **kwargs: Other keyword arguments of the model config. - - Returns: - mmengine.model.BaseModel: The result model. - - Examples: - Get a ResNet-50 model and extract images feature: - - >>> import torch - >>> from mmpretrain import get_model - >>> inputs = torch.rand(16, 3, 224, 224) - >>> model = get_model('resnet50_8xb32_in1k', pretrained=True, backbone=dict(out_indices=(0, 1, 2, 3))) - >>> feats = model.extract_feat(inputs) - >>> for feat in feats: - ... 
print(feat.shape) - torch.Size([16, 256]) - torch.Size([16, 512]) - torch.Size([16, 1024]) - torch.Size([16, 2048]) - - Get Swin-Transformer model with pre-trained weights and inference: - - >>> from mmpretrain import get_model, inference_model - >>> model = get_model('swin-base_16xb64_in1k', pretrained=True) - >>> result = inference_model(model, 'demo/demo.JPEG') - >>> print(result['pred_class']) - 'sea snake' - """ # noqa: E501 - if device_map is not None: - from .utils import dispatch_model - dispatch_model._verify_require() - - metainfo = None - if isinstance(model, Config): - config = copy.deepcopy(model) - if pretrained is True and 'load_from' in config: - pretrained = config.load_from - elif isinstance(model, (str, PathLike)) and Path(model).suffix == '.py': - config = Config.fromfile(model) - if pretrained is True and 'load_from' in config: - pretrained = config.load_from - elif isinstance(model, str): - metainfo = ModelHub.get(model) - config = metainfo.config - if pretrained is True and metainfo.weights is not None: - pretrained = metainfo.weights - else: - raise TypeError('model must be a name, a path or a Config object, ' - f'but got {type(config)}') - - if pretrained is True: - warnings.warn('Unable to find pre-defined checkpoint of the model.') - pretrained = None - elif pretrained is False: - pretrained = None - - if kwargs: - config.merge_from_dict({'model': kwargs}) - config.model.setdefault('data_preprocessor', - config.get('data_preprocessor', None)) - - from mmengine.registry import DefaultScope - - from mmpretrain.registry import MODELS - with DefaultScope.overwrite_default_scope('mmpretrain'): - model = MODELS.build(config.model) - - dataset_meta = {} - if pretrained: - # Mapping the weights to GPU may cause unexpected video memory leak - # which refers to https://github.com/open-mmlab/mmdetection/pull/6405 - from mmengine.runner import load_checkpoint - if url_mapping is not None: - pretrained = re.sub(url_mapping[0], url_mapping[1], pretrained) - checkpoint = load_checkpoint(model, pretrained, map_location='cpu') - if 'dataset_meta' in checkpoint.get('meta', {}): - # mmpretrain 1.x - dataset_meta = checkpoint['meta']['dataset_meta'] - elif 'CLASSES' in checkpoint.get('meta', {}): - # mmcls 0.x - dataset_meta = {'classes': checkpoint['meta']['CLASSES']} - - if len(dataset_meta) == 0 and 'test_dataloader' in config: - from mmpretrain.registry import DATASETS - dataset_class = DATASETS.get(config.test_dataloader.dataset.type) - dataset_meta = getattr(dataset_class, 'METAINFO', {}) - - if device_map is not None: - model = dispatch_model( - model, device_map=device_map, offload_folder=offload_folder) - elif device is not None: - model.to(device) - - model._dataset_meta = dataset_meta # save the dataset meta - model._config = config # save the config in the model - model._metainfo = metainfo # save the metainfo in the model - model.eval() - return model - - -def init_model(config, checkpoint=None, device=None, **kwargs): - """Initialize a classifier from config file (deprecated). - - It's only for compatibility, please use :func:`get_model` instead. - - Args: - config (str | :obj:`mmengine.Config`): Config file path or the config - object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - device (str | torch.device | None): Transfer the model to the target - device. Defaults to None. - **kwargs: Other keyword arguments of the model config. - - Returns: - nn.Module: The constructed model. 
- """ - return get_model(config, checkpoint, device, **kwargs) - - -def list_models(pattern=None, exclude_patterns=None, task=None) -> List[str]: - """List all models available in MMPretrain. - - Args: - pattern (str | None): A wildcard pattern to match model names. - Defaults to None. - exclude_patterns (list | None): A list of wildcard patterns to - exclude names from the matched names. Defaults to None. - task (str | none): The evaluation task of the model. - - Returns: - List[str]: a list of model names. - - Examples: - List all models: - - >>> from mmpretrain import list_models - >>> list_models() - - List ResNet-50 models on ImageNet-1k dataset: - - >>> from mmpretrain import list_models - >>> list_models('resnet*in1k') - ['resnet50_8xb32_in1k', - 'resnet50_8xb32-fp16_in1k', - 'resnet50_8xb256-rsb-a1-600e_in1k', - 'resnet50_8xb256-rsb-a2-300e_in1k', - 'resnet50_8xb256-rsb-a3-100e_in1k'] - - List Swin-Transformer models trained from stratch and exclude - Swin-Transformer-V2 models: - - >>> from mmpretrain import list_models - >>> list_models('swin', exclude_patterns=['swinv2', '*-pre']) - ['swin-base_16xb64_in1k', - 'swin-base_3rdparty_in1k', - 'swin-base_3rdparty_in1k-384', - 'swin-large_8xb8_cub-384px', - 'swin-small_16xb64_in1k', - 'swin-small_3rdparty_in1k', - 'swin-tiny_16xb64_in1k', - 'swin-tiny_3rdparty_in1k'] - - List all EVA models for image classification task. - - >>> from mmpretrain import list_models - >>> list_models('eva', task='Image Classification') - ['eva-g-p14_30m-in21k-pre_3rdparty_in1k-336px', - 'eva-g-p14_30m-in21k-pre_3rdparty_in1k-560px', - 'eva-l-p14_mim-in21k-pre_3rdparty_in1k-196px', - 'eva-l-p14_mim-in21k-pre_3rdparty_in1k-336px', - 'eva-l-p14_mim-pre_3rdparty_in1k-196px', - 'eva-l-p14_mim-pre_3rdparty_in1k-336px'] - """ - ModelHub._register_mmpretrain_models() - matches = set(ModelHub._models_dict.keys()) - - if pattern is not None: - # Always match keys with any postfix. - matches = set(fnmatch.filter(matches, pattern + '*')) - - exclude_patterns = exclude_patterns or [] - for exclude_pattern in exclude_patterns: - exclude = set(fnmatch.filter(matches, exclude_pattern + '*')) - matches = matches - exclude - - if task is not None: - task_matches = [] - for key in matches: - metainfo = ModelHub._models_dict[key] - if metainfo.results is None and task == 'null': - task_matches.append(key) - elif metainfo.results is None: - continue - elif task in [result.task for result in metainfo.results]: - task_matches.append(key) - matches = task_matches - - return sorted(list(matches)) - - -def inference_model(model, *args, **kwargs): - """Inference an image with the inferencer. - - Automatically select inferencer to inference according to the type of - model. It's a shortcut for a quick start, and for advanced usage, please - use the correspondding inferencer class. - - Here is the mapping from task to inferencer: - - - Image Classification: :class:`ImageClassificationInferencer` - - Image Retrieval: :class:`ImageRetrievalInferencer` - - Image Caption: :class:`ImageCaptionInferencer` - - Visual Question Answering: :class:`VisualQuestionAnsweringInferencer` - - Visual Grounding: :class:`VisualGroundingInferencer` - - Text-To-Image Retrieval: :class:`TextToImageRetrievalInferencer` - - Image-To-Text Retrieval: :class:`ImageToTextRetrievalInferencer` - - NLVR: :class:`NLVRInferencer` - - Args: - model (BaseModel | str | Config): The loaded model, the model - name or the config of the model. - *args: Positional arguments to call the inferencer. 
- **kwargs: Other keyword arguments to initialize and call the - correspondding inferencer. - - Returns: - result (dict): The inference results. - """ # noqa: E501 - from mmengine.model import BaseModel - - if isinstance(model, BaseModel): - metainfo = getattr(model, '_metainfo', None) - else: - metainfo = ModelHub.get(model) - - from inspect import signature - - from .image_caption import ImageCaptionInferencer - from .image_classification import ImageClassificationInferencer - from .image_retrieval import ImageRetrievalInferencer - from .multimodal_retrieval import (ImageToTextRetrievalInferencer, - TextToImageRetrievalInferencer) - from .nlvr import NLVRInferencer - from .visual_grounding import VisualGroundingInferencer - from .visual_question_answering import VisualQuestionAnsweringInferencer - task_mapping = { - 'Image Classification': ImageClassificationInferencer, - 'Image Retrieval': ImageRetrievalInferencer, - 'Image Caption': ImageCaptionInferencer, - 'Visual Question Answering': VisualQuestionAnsweringInferencer, - 'Visual Grounding': VisualGroundingInferencer, - 'Text-To-Image Retrieval': TextToImageRetrievalInferencer, - 'Image-To-Text Retrieval': ImageToTextRetrievalInferencer, - 'NLVR': NLVRInferencer, - } - - inferencer_type = None - - if metainfo is not None and metainfo.results is not None: - tasks = set(result.task for result in metainfo.results) - inferencer_type = [ - task_mapping.get(task) for task in tasks if task in task_mapping - ] - if len(inferencer_type) > 1: - inferencer_names = [cls.__name__ for cls in inferencer_type] - warnings.warn('The model supports multiple tasks, auto select ' - f'{inferencer_names[0]}, you can also use other ' - f'inferencer {inferencer_names} directly.') - inferencer_type = inferencer_type[0] - - if inferencer_type is None: - raise NotImplementedError('No available inferencer for the model') - - init_kwargs = { - k: kwargs.pop(k) - for k in list(kwargs) - if k in signature(inferencer_type).parameters.keys() - } - - inferencer = inferencer_type(model, **init_kwargs) - return inferencer(*args, **kwargs)[0] diff --git a/spaces/LanguageBind/LanguageBind/languagebind/audio/tokenization_audio.py b/spaces/LanguageBind/LanguageBind/languagebind/audio/tokenization_audio.py deleted file mode 100644 index 6bc40be3f96c20bf2581e23f8249f3cd5566ebe1..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/languagebind/audio/tokenization_audio.py +++ /dev/null @@ -1,77 +0,0 @@ -from transformers import CLIPTokenizer -from transformers.utils import logging - -logger = logging.get_logger(__name__) - -VOCAB_FILES_NAMES = { - "vocab_file": "vocab.json", - "merges_file": "merges.txt", -} - -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "lb203/LanguageBind-Audio": "https://huggingface.co/lb203/LanguageBind-Audio/resolve/main/vocab.json", - }, - "merges_file": { - "lb203/LanguageBind-Audio": "https://huggingface.co/lb203/LanguageBind-Audio/resolve/main/merges.txt", - }, -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "lb203/LanguageBind-Audio": 77, -} - - -PRETRAINED_INIT_CONFIGURATION = { - "lb203/LanguageBind-Audio": {}, -} - -class LanguageBindAudioTokenizer(CLIPTokenizer): - """ - Construct a CLIP tokenizer. Based on byte-level Byte-Pair-Encoding. - - This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to - this superclass for more information regarding those methods. - - Args: - vocab_file (`str`): - Path to the vocabulary file. 
- merges_file (`str`): - Path to the merges file. - errors (`str`, *optional*, defaults to `"replace"`): - Paradigm to follow when decoding bytes to UTF-8. See - [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. - unk_token (`str`, *optional*, defaults to `<|endoftext|>`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - bos_token (`str`, *optional*, defaults to `<|startoftext|>`): - The beginning of sequence token. - eos_token (`str`, *optional*, defaults to `<|endoftext|>`): - The end of sequence token. - """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - model_input_names = ["input_ids", "attention_mask"] - - def __init__( - self, - vocab_file, - merges_file, - errors="replace", - unk_token="<|endoftext|>", - bos_token="<|startoftext|>", - eos_token="<|endoftext|>", - pad_token="<|endoftext|>", # hack to enable padding - **kwargs, - ): - super(LanguageBindAudioTokenizer, self).__init__( - vocab_file, - merges_file, - errors, - unk_token, - bos_token, - eos_token, - pad_token, # hack to enable padding - **kwargs,) \ No newline at end of file diff --git a/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/infer_pack/attentions.py b/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - 
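        # The attribute assignments and ModuleLists built below define a stack of
        # n_layers post-norm transformer decoder blocks: causal (masked) self-attention,
        # then attention over the encoder output h, then a causal FFN, each followed in
        # forward() by dropout, a residual add and a LayerNorm.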
self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = 
self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). 
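        # After the column pad above, every row holds 2*length entries, so the flattened
        # tensor below has length*2*length elements; padding it with length-1 extra zeros
        # makes it exactly (length+1)*(2*length-1) long. Viewed with rows of width
        # 2*length-1, entry (i, k) of the padded input lands at row i, column i+k, so
        # slicing the first `length` rows from column length-1 onward reads the scores
        # out by absolute key position j = i + k - (length-1), giving shape [b, h, l, l].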
- x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/train/extract/extract_f0_rmvpe_dml.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/train/extract/extract_f0_rmvpe_dml.py deleted file mode 100644 index f10cfe7018e97821e8c78d2c776d10fc347fb0fd..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/train/extract/extract_f0_rmvpe_dml.py +++ /dev/null @@ -1,136 +0,0 @@ -import os -import sys -import traceback - -now_dir = os.getcwd() -sys.path.append(now_dir) -import logging - -import numpy as np - -from lib.infer.infer_libs.audio import load_audio - -logging.getLogger("numba").setLevel(logging.WARNING) - -exp_dir = sys.argv[1] -import torch_directml - -device = torch_directml.device(torch_directml.default_device()) -f = 
open("%s/extract_f0_feature.log" % exp_dir, "a+") - - -def printt(strr): - print(strr) - f.write("%s\n" % strr) - f.flush() - - -class FeatureInput(object): - def __init__(self, samplerate=16000, hop_size=160): - self.fs = samplerate - self.hop = hop_size - - self.f0_bin = 256 - self.f0_max = 1100.0 - self.f0_min = 50.0 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - - def compute_f0(self, path, f0_method): - x = load_audio(path, self.fs) - # p_len = x.shape[0] // self.hop - if f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from lib.infer.infer_libs.rmvpe import RMVPE - - print("Loading rmvpe model") - self.model_rmvpe = RMVPE( - "assets/rmvpe/rmvpe.pt", is_half=False, device=device - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - return f0 - - def coarse_f0(self, f0): - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * ( - self.f0_bin - 2 - ) / (self.f0_mel_max - self.f0_mel_min) + 1 - - # use 0 or 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1 - f0_coarse = np.rint(f0_mel).astype(int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, ( - f0_coarse.max(), - f0_coarse.min(), - ) - return f0_coarse - - def go(self, paths, f0_method): - if len(paths) == 0: - printt("no-f0-todo") - else: - printt("todo-f0-%s" % len(paths)) - n = max(len(paths) // 5, 1) # 每个进程最多打印5条 - for idx, (inp_path, opt_path1, opt_path2) in enumerate(paths): - try: - if idx % n == 0: - printt("f0ing,now-%s,all-%s,-%s" % (idx, len(paths), inp_path)) - if ( - os.path.exists(opt_path1 + ".npy") == True - and os.path.exists(opt_path2 + ".npy") == True - ): - continue - featur_pit = self.compute_f0(inp_path, f0_method) - np.save( - opt_path2, - featur_pit, - allow_pickle=False, - ) # nsf - coarse_pit = self.coarse_f0(featur_pit) - np.save( - opt_path1, - coarse_pit, - allow_pickle=False, - ) # ori - except: - printt("f0fail-%s-%s-%s" % (idx, inp_path, traceback.format_exc())) - - -if __name__ == "__main__": - # exp_dir=r"E:\codes\py39\dataset\mi-test" - # n_p=16 - # f = open("%s/log_extract_f0.log"%exp_dir, "w") - printt(sys.argv) - featureInput = FeatureInput() - paths = [] - inp_root = "%s/1_16k_wavs" % (exp_dir) - opt_root1 = "%s/2a_f0" % (exp_dir) - opt_root2 = "%s/2b-f0nsf" % (exp_dir) - - os.makedirs(opt_root1, exist_ok=True) - os.makedirs(opt_root2, exist_ok=True) - for name in sorted(list(os.listdir(inp_root))): - inp_path = "%s/%s" % (inp_root, name) - if "spec" in inp_path: - continue - opt_path1 = "%s/%s" % (opt_root1, name) - opt_path2 = "%s/%s" % (opt_root2, name) - paths.append([inp_path, opt_path1, opt_path2]) - try: - featureInput.go(paths, "rmvpe") - except: - printt("f0_all_fail-%s" % (traceback.format_exc())) - # ps = [] - # for i in range(n_p): - # p = Process( - # target=featureInput.go, - # args=( - # paths[i::n_p], - # f0method, - # ), - # ) - # ps.append(p) - # p.start() - # for i in range(n_p): - # ps[i].join() diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/audioEffects.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/audioEffects.py deleted file mode 100644 index f9d12584eb31f2b19d5e66cdc1a69ab73d5b6f60..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/audioEffects.py +++ /dev/null @@ -1,33 +0,0 @@ -from pedalboard import Pedalboard, Compressor, Reverb, NoiseGate -from pedalboard.io import AudioFile -import sys -import os 
-now_dir = os.getcwd() -sys.path.append(now_dir) -from assets.i18n.i18n import I18nAuto -i18n = I18nAuto() - -def process_audio(input_path, output_path, reverb_enabled, compressor_enabled, noise_gate_enabled, ): - print(reverb_enabled) - print(compressor_enabled) - print(noise_gate_enabled) - effects = [] - if reverb_enabled: - effects.append(Reverb(room_size=0.01)) - if compressor_enabled: - effects.append(Compressor(threshold_db=-10, ratio=25)) - if noise_gate_enabled: - effects.append(NoiseGate(threshold_db=-16, ratio=1.5, release_ms=250)) - - board = Pedalboard(effects) - - with AudioFile(input_path) as f: - with AudioFile(output_path, 'w', f.samplerate, f.num_channels) as o: - while f.tell() < f.frames: - chunk = f.read(f.samplerate) - effected = board(chunk, f.samplerate, reset=False) - o.write(effected) - - result = i18n("Processed audio saved at: ") + output_path - print(result) - return output_path \ No newline at end of file diff --git a/spaces/LaynzKunz/RCVAICOVER/src/infer_pack/modules.py b/spaces/LaynzKunz/RCVAICOVER/src/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/RCVAICOVER/src/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
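        # The layers built below stack n_layers of Conv1d -> LayerNorm -> ReLU -> Dropout
        # (the first conv maps in_channels to hidden_channels, the rest keep
        # hidden_channels wide), and the final 1x1 projection is zero-initialized, so at
        # initialization the block reduces to an identity mapping in forward()
        # (x_org plus a zero residual, masked by x_mask).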
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
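        # `output` accumulates the skip contributions of the WaveNet-style layers below:
        # each layer applies a dilated conv producing 2*hidden_channels, adds its slice
        # of the projected conditioning g (when given), passes the sum through the gated
        # tanh * sigmoid activation, and a 1x1 conv then yields a residual half that is
        # added back into x and a skip half that is summed into `output`; the last layer
        # contributes only a skip term.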
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
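        # The last dimension of h now carries 3*num_bins - 1 values per (channel, time)
        # position; the three slices below split them into num_bins unnormalized bin
        # widths, num_bins unnormalized bin heights (both scaled down by
        # sqrt(filter_channels)), and num_bins - 1 unnormalized knot derivatives for the
        # piecewise rational-quadratic spline applied to x1.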
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Lucifer741/emoji-predictor/app.py b/spaces/Lucifer741/emoji-predictor/app.py deleted file mode 100644 index 211cf63270b78c0d5b63741b194eb166ea71c8cf..0000000000000000000000000000000000000000 --- a/spaces/Lucifer741/emoji-predictor/app.py +++ /dev/null @@ -1,122 +0,0 @@ -import gradio as gr -import torch -import os - -from PIL import Image -from pathlib import Path -from more_itertools import chunked - -from transformers import CLIPProcessor, CLIPModel - -checkpoint = "vincentclaes/emoji-predictor" -x_, _, files = next(os.walk("./emojis")) -no_of_emojis = range(len(files)) -emojis_as_images = [Image.open(f"emojis/{i}.png") for i in no_of_emojis] -K = 4 - -processor = CLIPProcessor.from_pretrained(checkpoint) -model = CLIPModel.from_pretrained(checkpoint) - - -def concat_images(*images): - """Generate composite of all supplied images. - https://stackoverflow.com/a/71315656/1771155 - """ - # Get the widest width. - width = max(image.width for image in images) - # Add up all the heights. - height = max(image.height for image in images) - # set the correct size of width and heigtht of composite. - composite = Image.new('RGB', (2*width, 2*height)) - assert K == 4, "We expect 4 suggestions, other numbers won't work." - for i, image in enumerate(images): - if i == 0: - composite.paste(image, (0, 0)) - elif i == 1: - composite.paste(image, (width, 0)) - elif i == 2: - composite.paste(image, (0, height)) - elif i == 3: - composite.paste(image, (width, height)) - return composite - - -def get_emoji(text, model=model, processor=processor, emojis=emojis_as_images, K=4): - inputs = processor(text=text, images=emojis, return_tensors="pt", padding=True, truncation=True) - outputs = model(**inputs) - - logits_per_text = outputs.logits_per_text - # we take the softmax to get the label probabilities - probs = logits_per_text.softmax(dim=1) - # top K number of options - predictions_suggestions_for_chunk = [torch.topk(prob, K).indices.tolist() for prob in probs][0] - predictions_suggestions_for_chunk - - images = [Image.open(f"emojis/{i}.png") for i in predictions_suggestions_for_chunk] - images_concat = concat_images(*images) - return images_concat - - -text = gr.inputs.Textbox(placeholder="Enter a text and we will try to predict an emoji...") -title = "Predicting an Emoji" -description = """You provide a sentence and our few-shot fine tuned CLIP model will suggest 4 from the following emoji's: -\n❤️ 😍 😂 💕 🔥 😊 😎 ✨ 💙 😘 📷 🇺🇸 ☀ 💜 😉 💯 😁 🎄 📸 😜 ☹️ 😭 😔 😡 💢 😤 😳 🙃 😩 😠 🙈 🙄\n -""" -article = """ -\n -++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -\n -#### Let's connect on Linkedin: https://www.linkedin.com/in/vincent-claes-0b346337/ -\n -# Context -I fine tuned Open Ai's CLIP model on both text (tweets) and images of emoji's!\n -The current model you can play with is fine-tuned on 15 samples per emoji. 
- -- model: https://huggingface.co/vincentclaes/emoji-predictor \n -- dataset: https://huggingface.co/datasets/vincentclaes/emoji-predictor \n -- profile: https://huggingface.co/vincentclaes \n - -# Precision - -Below you can find a table with the precision for predictions and suggestions -for a range of samples per emoji we fine-tuned CLIP on. - -### Prediction vs. Suggestion -- The column "Prediction" indicates the precision for predicting the right emoji. - -- Since there can be some confusion about the right emoji for a tweet, -I also tried to present 4 suggestions. If 1 of the 4 suggestions is the same as the label, -I consider it a valid prediction. See the column "Suggestion". - -- Randomly predicting an emoji would have a precision of 1/32 or 0.0325. -- Randomly suggesting an emoji would have a precision of 4/32 or 0.12. - - - | Samples | Prediction | Suggestion | - |--------- |------------ |------------ | - | 0 | 0.13 | 0.33 | - | 1 | 0.11 | 0.30 | - | 5 | 0.14 | 0.38 | - | 10 | 0.20 | 0.45 | - | 15 | 0.22 | 0.51 | - | 20 | 0.19 | 0.49 | - | 25 | 0.24 | 0.54 | - | 50 | 0.23 | 0.53 | - | 100 | 0.25 | 0.57 | - | 250 | 0.29 | 0.62 | - | 500 | 0.29 | 0.63 | - - - - -""" -examples = [ - "I'm so happy for you!", - "I'm not feeling great today.", - "This makes me angry!", - "Can I follow you?", - "I'm so bored right now ...", -] -gr.Interface(fn=get_emoji, inputs=text, outputs=gr.Image(shape=(72,72)), - examples=examples, title=title, description=description, - article=article).launch() diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/generator.py b/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/generator.py deleted file mode 100644 index 6e24cadc882caab9ee439bb3dd288e536878565a..0000000000000000000000000000000000000000 --- a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/generator.py +++ /dev/null @@ -1,233 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -import torch -import torch.nn as nn -import torch.nn.functional as F -from models.networks.base_network import BaseNetwork -from models.networks.normalization import get_nonspade_norm_layer -from models.networks.architecture import ResnetBlock as ResnetBlock -from models.networks.architecture import SPADEResnetBlock as SPADEResnetBlock -from models.networks.architecture import SPADEResnetBlock_non_spade as SPADEResnetBlock_non_spade - - -class SPADEGenerator(BaseNetwork): - @staticmethod - def modify_commandline_options(parser, is_train): - parser.set_defaults(norm_G="spectralspadesyncbatch3x3") - parser.add_argument( - "--num_upsampling_layers", - choices=("normal", "more", "most"), - default="normal", - help="If 'more', adds upsampling layer between the two middle resnet blocks. 
If 'most', also add one more upsampling + resnet layer at the end of the generator", - ) - - return parser - - def __init__(self, opt): - super().__init__() - self.opt = opt - nf = opt.ngf - - self.sw, self.sh = self.compute_latent_vector_size(opt) - - print("The size of the latent vector size is [%d,%d]" % (self.sw, self.sh)) - - if opt.use_vae: - # In case of VAE, we will sample from random z vector - self.fc = nn.Linear(opt.z_dim, 16 * nf * self.sw * self.sh) - else: - # Otherwise, we make the network deterministic by starting with - # downsampled segmentation map instead of random z - if self.opt.no_parsing_map: - self.fc = nn.Conv2d(3, 16 * nf, 3, padding=1) - else: - self.fc = nn.Conv2d(self.opt.semantic_nc, 16 * nf, 3, padding=1) - - if self.opt.injection_layer == "all" or self.opt.injection_layer == "1": - self.head_0 = SPADEResnetBlock(16 * nf, 16 * nf, opt) - else: - self.head_0 = SPADEResnetBlock_non_spade(16 * nf, 16 * nf, opt) - - if self.opt.injection_layer == "all" or self.opt.injection_layer == "2": - self.G_middle_0 = SPADEResnetBlock(16 * nf, 16 * nf, opt) - self.G_middle_1 = SPADEResnetBlock(16 * nf, 16 * nf, opt) - - else: - self.G_middle_0 = SPADEResnetBlock_non_spade(16 * nf, 16 * nf, opt) - self.G_middle_1 = SPADEResnetBlock_non_spade(16 * nf, 16 * nf, opt) - - if self.opt.injection_layer == "all" or self.opt.injection_layer == "3": - self.up_0 = SPADEResnetBlock(16 * nf, 8 * nf, opt) - else: - self.up_0 = SPADEResnetBlock_non_spade(16 * nf, 8 * nf, opt) - - if self.opt.injection_layer == "all" or self.opt.injection_layer == "4": - self.up_1 = SPADEResnetBlock(8 * nf, 4 * nf, opt) - else: - self.up_1 = SPADEResnetBlock_non_spade(8 * nf, 4 * nf, opt) - - if self.opt.injection_layer == "all" or self.opt.injection_layer == "5": - self.up_2 = SPADEResnetBlock(4 * nf, 2 * nf, opt) - else: - self.up_2 = SPADEResnetBlock_non_spade(4 * nf, 2 * nf, opt) - - if self.opt.injection_layer == "all" or self.opt.injection_layer == "6": - self.up_3 = SPADEResnetBlock(2 * nf, 1 * nf, opt) - else: - self.up_3 = SPADEResnetBlock_non_spade(2 * nf, 1 * nf, opt) - - final_nc = nf - - if opt.num_upsampling_layers == "most": - self.up_4 = SPADEResnetBlock(1 * nf, nf // 2, opt) - final_nc = nf // 2 - - self.conv_img = nn.Conv2d(final_nc, 3, 3, padding=1) - - self.up = nn.Upsample(scale_factor=2) - - def compute_latent_vector_size(self, opt): - if opt.num_upsampling_layers == "normal": - num_up_layers = 5 - elif opt.num_upsampling_layers == "more": - num_up_layers = 6 - elif opt.num_upsampling_layers == "most": - num_up_layers = 7 - else: - raise ValueError("opt.num_upsampling_layers [%s] not recognized" % opt.num_upsampling_layers) - - sw = opt.load_size // (2 ** num_up_layers) - sh = round(sw / opt.aspect_ratio) - - return sw, sh - - def forward(self, input, degraded_image, z=None): - seg = input - - if self.opt.use_vae: - # we sample z from unit normal and reshape the tensor - if z is None: - z = torch.randn(input.size(0), self.opt.z_dim, dtype=torch.float32, device=input.get_device()) - x = self.fc(z) - x = x.view(-1, 16 * self.opt.ngf, self.sh, self.sw) - else: - # we downsample segmap and run convolution - if self.opt.no_parsing_map: - x = F.interpolate(degraded_image, size=(self.sh, self.sw), mode="bilinear") - else: - x = F.interpolate(seg, size=(self.sh, self.sw), mode="nearest") - x = self.fc(x) - - x = self.head_0(x, seg, degraded_image) - - x = self.up(x) - x = self.G_middle_0(x, seg, degraded_image) - - if self.opt.num_upsampling_layers == "more" or self.opt.num_upsampling_layers 
== "most": - x = self.up(x) - - x = self.G_middle_1(x, seg, degraded_image) - - x = self.up(x) - x = self.up_0(x, seg, degraded_image) - x = self.up(x) - x = self.up_1(x, seg, degraded_image) - x = self.up(x) - x = self.up_2(x, seg, degraded_image) - x = self.up(x) - x = self.up_3(x, seg, degraded_image) - - if self.opt.num_upsampling_layers == "most": - x = self.up(x) - x = self.up_4(x, seg, degraded_image) - - x = self.conv_img(F.leaky_relu(x, 2e-1)) - x = F.tanh(x) - - return x - - -class Pix2PixHDGenerator(BaseNetwork): - @staticmethod - def modify_commandline_options(parser, is_train): - parser.add_argument( - "--resnet_n_downsample", type=int, default=4, help="number of downsampling layers in netG" - ) - parser.add_argument( - "--resnet_n_blocks", - type=int, - default=9, - help="number of residual blocks in the global generator network", - ) - parser.add_argument( - "--resnet_kernel_size", type=int, default=3, help="kernel size of the resnet block" - ) - parser.add_argument( - "--resnet_initial_kernel_size", type=int, default=7, help="kernel size of the first convolution" - ) - # parser.set_defaults(norm_G='instance') - return parser - - def __init__(self, opt): - super().__init__() - input_nc = 3 - - # print("xxxxx") - # print(opt.norm_G) - norm_layer = get_nonspade_norm_layer(opt, opt.norm_G) - activation = nn.ReLU(False) - - model = [] - - # initial conv - model += [ - nn.ReflectionPad2d(opt.resnet_initial_kernel_size // 2), - norm_layer(nn.Conv2d(input_nc, opt.ngf, kernel_size=opt.resnet_initial_kernel_size, padding=0)), - activation, - ] - - # downsample - mult = 1 - for i in range(opt.resnet_n_downsample): - model += [ - norm_layer(nn.Conv2d(opt.ngf * mult, opt.ngf * mult * 2, kernel_size=3, stride=2, padding=1)), - activation, - ] - mult *= 2 - - # resnet blocks - for i in range(opt.resnet_n_blocks): - model += [ - ResnetBlock( - opt.ngf * mult, - norm_layer=norm_layer, - activation=activation, - kernel_size=opt.resnet_kernel_size, - ) - ] - - # upsample - for i in range(opt.resnet_n_downsample): - nc_in = int(opt.ngf * mult) - nc_out = int((opt.ngf * mult) / 2) - model += [ - norm_layer( - nn.ConvTranspose2d(nc_in, nc_out, kernel_size=3, stride=2, padding=1, output_padding=1) - ), - activation, - ] - mult = mult // 2 - - # final output conv - model += [ - nn.ReflectionPad2d(3), - nn.Conv2d(nc_out, opt.output_nc, kernel_size=7, padding=0), - nn.Tanh(), - ] - - self.model = nn.Sequential(*model) - - def forward(self, input, degraded_image, z=None): - return self.model(degraded_image) - diff --git a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/text/english.py b/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/text/english.py deleted file mode 100644 index 6817392ba8a9eb830351de89fb7afc5ad72f5e42..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/text/english.py +++ /dev/null @@ -1,188 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. 
"basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - - -# Regular expression matching whitespace: - - -import re -import inflect -from unidecode import unidecode -import eng_to_ipa as ipa -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -# List of (ipa, lazy ipa) pairs: -_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('æ', 'e'), - ('ɑ', 'a'), - ('ɔ', 'o'), - ('ð', 'z'), - ('θ', 's'), - ('ɛ', 'e'), - ('ɪ', 'i'), - ('ʊ', 'u'), - ('ʒ', 'ʥ'), - ('ʤ', 'ʥ'), - ('ˈ', '↓'), -]] - -# List of (ipa, lazy ipa2) pairs: -_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ð', 'z'), - ('θ', 's'), - ('ʒ', 'ʑ'), - ('ʤ', 'dʑ'), - ('ˈ', '↓'), -]] - -# List of (ipa, ipa2) pairs -_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ʤ', 'dʒ'), - ('ʧ', 'tʃ') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def collapse_whitespace(text): - return re.sub(r'\s+', ' ', text) - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, 
_expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text - - -def mark_dark_l(text): - return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text) - - -def english_to_ipa(text): - text = unidecode(text).lower() - text = expand_abbreviations(text) - text = normalize_numbers(text) - phonemes = ipa.convert(text) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_to_lazy_ipa(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def english_to_ipa2(text): - text = english_to_ipa(text) - text = mark_dark_l(text) - for regex, replacement in _ipa_to_ipa2: - text = re.sub(regex, replacement, text) - return text.replace('...', '…') - - -def english_to_lazy_ipa2(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa2: - text = re.sub(regex, replacement, text) - return text diff --git a/spaces/Monosmarinos/Pix2Pix-Video/share_btn.py b/spaces/Monosmarinos/Pix2Pix-Video/share_btn.py deleted file mode 100644 index 66e0de15dce2d65f4cd0ef512c7bd8adad0beb77..0000000000000000000000000000000000000000 --- a/spaces/Monosmarinos/Pix2Pix-Video/share_btn.py +++ /dev/null @@ -1,73 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getVideoBlobFile(videoEL){ - const res = await fetch(videoEL.src); - const blob = await res.blob(); - const videoId = Date.now() % 200; - const fileName = `vid-pix2pix-${{videoId}}.wav`; - const videoBlob = new File([blob], fileName, { type: 'video/mp4' }); - console.log(videoBlob); - return videoBlob; - } - - const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app'); - const captionTxt = gradioEl.querySelector('#prompt-in textarea').value; - const inputVidEl = gradioEl.querySelector('#input-vid video'); - const outputVideo = gradioEl.querySelector('#video-output video'); - - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputVideo){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const inputFile = await getVideoBlobFile(inputVidEl); - const urlInputVid = await uploadFile(inputFile); - const videoOutFile = await getVideoBlobFile(outputVideo); - const dataOutputVid = await uploadFile(videoOutFile); - - const descriptionMd = ` -#### Video input: -${urlInputVid} - -#### Pix2Pix result: -${dataOutputVid} -`; - const params = new URLSearchParams({ - title: captionTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/fffiloni/Pix2Pix-Video/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git 
a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/nrtr/nrtr_modality-transform_6e_st_mj.py b/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/nrtr/nrtr_modality-transform_6e_st_mj.py deleted file mode 100644 index a1e77843e41a5078a2d890649f143e3ce5b7087d..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/nrtr/nrtr_modality-transform_6e_st_mj.py +++ /dev/null @@ -1,56 +0,0 @@ -_base_ = [ - '../_base_/datasets/mjsynth.py', - '../_base_/datasets/synthtext.py', - '../_base_/datasets/cute80.py', - '../_base_/datasets/iiit5k.py', - '../_base_/datasets/svt.py', - '../_base_/datasets/svtp.py', - '../_base_/datasets/icdar2013.py', - '../_base_/datasets/icdar2015.py', - '../_base_/default_runtime.py', - '../_base_/schedules/schedule_adam_base.py', - '_base_nrtr_modality-transform.py', -] - -# optimizer settings -train_cfg = dict(max_epochs=6) -# learning policy -param_scheduler = [ - dict(type='MultiStepLR', milestones=[3, 4], end=6), -] - -# dataset settings -train_list = [_base_.mjsynth_textrecog_train, _base_.synthtext_textrecog_train] -test_list = [ - _base_.cute80_textrecog_test, _base_.iiit5k_textrecog_test, - _base_.svt_textrecog_test, _base_.svtp_textrecog_test, - _base_.icdar2013_textrecog_test, _base_.icdar2015_textrecog_test -] - -train_dataset = dict( - type='ConcatDataset', datasets=train_list, pipeline=_base_.train_pipeline) -test_dataset = dict( - type='ConcatDataset', datasets=test_list, pipeline=_base_.test_pipeline) - -train_dataloader = dict( - batch_size=384, - num_workers=24, - persistent_workers=True, - sampler=dict(type='DefaultSampler', shuffle=True), - dataset=train_dataset) - -test_dataloader = dict( - batch_size=1, - num_workers=4, - persistent_workers=True, - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False), - dataset=test_dataset) - -val_dataloader = test_dataloader - -val_evaluator = dict( - dataset_prefixes=['CUTE80', 'IIIT5K', 'SVT', 'SVTP', 'IC13', 'IC15']) -test_evaluator = val_evaluator - -auto_scale_lr = dict(base_batch_size=384) diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/satrn/satrn_shallow-small_5e_st_mj.py b/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/satrn/satrn_shallow-small_5e_st_mj.py deleted file mode 100644 index a72201c9cc10a36a281b7ea830d61041fbff5831..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/satrn/satrn_shallow-small_5e_st_mj.py +++ /dev/null @@ -1,23 +0,0 @@ -_base_ = ['satrn_shallow_5e_st_mj.py'] - -model = dict( - backbone=dict(type='ShallowCNN', input_channels=3, hidden_dim=256), - encoder=dict( - type='SATRNEncoder', - n_layers=6, - n_head=8, - d_k=256 // 8, - d_v=256 // 8, - d_model=256, - n_position=100, - d_inner=256 * 4, - dropout=0.1), - decoder=dict( - type='NRTRDecoder', - n_layers=6, - d_embedding=256, - n_head=8, - d_model=256, - d_inner=256 * 4, - d_k=256 // 8, - d_v=256 // 8)) diff --git a/spaces/MrVicente/RA-BART/kgs_binding/kg_base_wrapper.py b/spaces/MrVicente/RA-BART/kgs_binding/kg_base_wrapper.py deleted file mode 100644 index 6a77cdd9693e54d24e7664fec1632e288ef9d4b5..0000000000000000000000000000000000000000 --- a/spaces/MrVicente/RA-BART/kgs_binding/kg_base_wrapper.py +++ /dev/null @@ -1,80 +0,0 @@ - -############################# -# Imports -############################# - -# Python modules -from abc import ABC, abstractmethod -from typing import Tuple, Optional, List - -# Remote modules -from nltk.stem import WordNetLemmatizer - -# Local modules - 
-############################# -# Constants -############################# - -class KGBaseHandler(ABC): - def __init__(self): - super().__init__() - self.st = WordNetLemmatizer() - - def normalize_noun(self, ent): - try: - noun = self.st.lemmatize(ent, pos='n') - noun = self.st.lemmatize(noun, pos='v') - except Exception as _: - noun = ent[:-1] if ent[-1] == 's' else ent - return noun - - def normalize_nouns(self, ent): - local_ent = ent[:] - nouns = local_ent.split(' ') - if len(nouns) == 1: - return ' '.join([self.normalize_noun(e) for e in nouns]) - return local_ent - - def ignore_less_relevant_connection(self, relations): - if len(relations) >= 2: - for r in relations: - if r != 'related_to': - return r - return relations[0] - - @abstractmethod - def get_relation_types(self) -> List[str]: - pass - - @abstractmethod - def exists_relation_between(self, concept, other_concept) -> bool: - pass - - @abstractmethod - def relation_between(self, concept, other_concept) -> Tuple[Optional[str], Optional[str]]: - pass - - @abstractmethod - def get_related_concepts(self, concept) -> Optional[List[str]]: - pass - - @abstractmethod - def does_concept_exist(self, concept) -> bool: - pass - -class NoKnowledge(KGBaseHandler): - def __init__(self): - super(NoKnowledge, self).__init__() - - def get_relation_types(self) -> List[str]: - return [] - - def exists_relation_between(self, concept, other_concept) -> bool: - return False - - def relation_between(self, concept, other_concept) -> Tuple[Optional[str], Optional[str]]: - return (None, None) - - def does_concept_exist(self, concept) -> bool: - return False diff --git a/spaces/NCTCMumbai/NCTC/models/official/staging/training/controller_test.py b/spaces/NCTCMumbai/NCTC/models/official/staging/training/controller_test.py deleted file mode 100644 index eeaa191c04d40fcc108ed7b00dec86d30d5a2a0b..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/staging/training/controller_test.py +++ /dev/null @@ -1,308 +0,0 @@ -# Copyright 2020 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Tests for official.staging.training.controller.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import os - -from absl.testing import parameterized -import numpy as np -import tensorflow as tf - -from tensorflow.python.distribute import combinations -from tensorflow.python.distribute import strategy_combinations -from official.staging.training import controller -from official.staging.training import standard_runnable - - -def all_strategy_combinations(): - """Gets combinations of distribution strategies.""" - return combinations.combine( - strategy=[ - strategy_combinations.one_device_strategy, - strategy_combinations.tpu_strategy, - strategy_combinations.one_device_strategy_gpu, - strategy_combinations.mirrored_strategy_with_gpu_and_cpu, - ], - mode="eager", - ) - - -def create_model(): - x = tf.keras.layers.Input(shape=(3,), name="input") - y = tf.keras.layers.Dense(4, name="dense")(x) - model = tf.keras.Model(x, y) - return model - - -def summaries_with_matching_keyword(keyword, summary_dir): - """Yields summary protos matching given keyword from event file.""" - event_paths = tf.io.gfile.glob(os.path.join(summary_dir, "events*")) - for event in tf.compat.v1.train.summary_iterator(event_paths[-1]): - if event.summary is not None: - for value in event.summary.value: - if keyword in value.tag: - tf.compat.v1.logging.error(event) - yield event.summary - - -def check_eventfile_for_keyword(keyword, summary_dir): - """Checks event files for the keyword.""" - return any(summaries_with_matching_keyword(keyword, summary_dir)) - - -def dataset_fn(ctx): - del ctx - inputs = np.zeros((10, 3), dtype=np.float32) - targets = np.zeros((10, 4), dtype=np.float32) - dataset = tf.data.Dataset.from_tensor_slices((inputs, targets)) - dataset = dataset.repeat(100) - dataset = dataset.batch(10, drop_remainder=True) - return dataset - - -class TestRunnable(standard_runnable.StandardTrainable, - standard_runnable.StandardEvaluable): - """Implements the training and evaluation APIs for the test model.""" - - def __init__(self): - standard_runnable.StandardTrainable.__init__(self) - standard_runnable.StandardEvaluable.__init__(self) - self.strategy = tf.distribute.get_strategy() - self.model = create_model() - self.optimizer = tf.keras.optimizers.RMSprop() - self.global_step = self.optimizer.iterations - self.train_loss = tf.keras.metrics.Mean("train_loss", dtype=tf.float32) - self.eval_loss = tf.keras.metrics.Mean("eval_loss", dtype=tf.float32) - - def build_train_dataset(self): - return self.strategy.experimental_distribute_datasets_from_function( - dataset_fn) - - def train_step(self, iterator): - - def _replicated_step(inputs): - """Replicated training step.""" - inputs, targets = inputs - with tf.GradientTape() as tape: - outputs = self.model(inputs) - loss = tf.math.reduce_sum(outputs - targets) - grads = tape.gradient(loss, self.model.variables) - self.optimizer.apply_gradients(zip(grads, self.model.variables)) - self.train_loss.update_state(loss) - - self.strategy.run(_replicated_step, args=(next(iterator),)) - - def train_loop_end(self): - return { - "loss": self.train_loss.result(), - } - - def build_eval_dataset(self): - return self.strategy.experimental_distribute_datasets_from_function( - dataset_fn) - - def eval_begin(self): - self.eval_loss.reset_states() - - def eval_step(self, iterator): - - def _replicated_step(inputs): - """Replicated 
evaluation step.""" - inputs, targets = inputs - outputs = self.model(inputs) - loss = tf.math.reduce_sum(outputs - targets) - self.eval_loss.update_state(loss) - - self.strategy.run(_replicated_step, args=(next(iterator),)) - - def eval_end(self): - return { - "eval_loss": self.eval_loss.result(), - } - - -class ControllerTest(tf.test.TestCase, parameterized.TestCase): - - def setUp(self): - super(ControllerTest, self).setUp() - self.model_dir = self.get_temp_dir() - - def test_no_checkpoint(self): - test_runnable = TestRunnable() - # No checkpoint manager and no strategy. - test_controller = controller.Controller( - train_fn=test_runnable.train, - eval_fn=test_runnable.evaluate, - global_step=test_runnable.global_step, - train_steps=10, - steps_per_loop=2, - summary_dir=os.path.join(self.model_dir, "summaries/train"), - summary_interval=2, - eval_summary_dir=os.path.join(self.model_dir, "summaries/eval"), - eval_steps=2, - eval_interval=5) - test_controller.train(evaluate=True) - self.assertEqual(test_runnable.global_step.numpy(), 10) - # Loss and accuracy values should be written into summaries. - self.assertNotEmpty( - tf.io.gfile.listdir(os.path.join(self.model_dir, "summaries/train"))) - self.assertTrue( - check_eventfile_for_keyword( - "loss", os.path.join(self.model_dir, "summaries/train"))) - self.assertNotEmpty( - tf.io.gfile.listdir(os.path.join(self.model_dir, "summaries/eval"))) - self.assertTrue( - check_eventfile_for_keyword( - "eval_loss", os.path.join(self.model_dir, "summaries/eval"))) - # No checkpoint, so global step starts from 0. - test_runnable.global_step.assign(0) - test_controller.train(evaluate=True) - self.assertEqual(test_runnable.global_step.numpy(), 10) - - def test_no_checkpoint_and_summaries(self): - test_runnable = TestRunnable() - # No checkpoint + summary directories. - test_controller = controller.Controller( - train_fn=test_runnable.train, - eval_fn=test_runnable.evaluate, - global_step=test_runnable.global_step, - train_steps=10, - steps_per_loop=2, - eval_steps=2, - eval_interval=5) - test_controller.train(evaluate=True) - self.assertEqual(test_runnable.global_step.numpy(), 10) - - @combinations.generate(all_strategy_combinations()) - def test_train_and_evaluate(self, strategy): - with strategy.scope(): - test_runnable = TestRunnable() - - checkpoint = tf.train.Checkpoint( - model=test_runnable.model, optimizer=test_runnable.optimizer) - checkpoint_manager = tf.train.CheckpointManager( - checkpoint, - self.model_dir, - max_to_keep=None, - step_counter=test_runnable.global_step, - checkpoint_interval=10) - test_controller = controller.Controller( - strategy=strategy, - train_fn=test_runnable.train, - eval_fn=test_runnable.evaluate, - global_step=test_runnable.global_step, - train_steps=10, - steps_per_loop=2, - summary_dir=os.path.join(self.model_dir, "summaries/train"), - summary_interval=2, - checkpoint_manager=checkpoint_manager, - eval_summary_dir=os.path.join(self.model_dir, "summaries/eval"), - eval_steps=2, - eval_interval=5) - test_controller.train(evaluate=True) - - # Checkpoints are saved. - self.assertNotEmpty(tf.io.gfile.glob(os.path.join(self.model_dir, "ckpt*"))) - - # Loss and accuracy values should be written into summaries. 
- self.assertNotEmpty( - tf.io.gfile.listdir(os.path.join(self.model_dir, "summaries/train"))) - self.assertTrue( - check_eventfile_for_keyword( - "loss", os.path.join(self.model_dir, "summaries/train"))) - self.assertNotEmpty( - tf.io.gfile.listdir(os.path.join(self.model_dir, "summaries/eval"))) - self.assertTrue( - check_eventfile_for_keyword( - "eval_loss", os.path.join(self.model_dir, "summaries/eval"))) - - @combinations.generate(all_strategy_combinations()) - def test_train_only(self, strategy): - with strategy.scope(): - test_runnable = TestRunnable() - - checkpoint = tf.train.Checkpoint( - model=test_runnable.model, optimizer=test_runnable.optimizer) - checkpoint_manager = tf.train.CheckpointManager( - checkpoint, - self.model_dir, - max_to_keep=None, - step_counter=test_runnable.global_step, - checkpoint_interval=10) - test_controller = controller.Controller( - strategy=strategy, - train_fn=test_runnable.train, - global_step=test_runnable.global_step, - train_steps=10, - steps_per_loop=2, - summary_dir=os.path.join(self.model_dir, "summaries/train"), - summary_interval=2, - checkpoint_manager=checkpoint_manager, - eval_summary_dir=os.path.join(self.model_dir, "summaries/eval"), - ) - test_controller.train(evaluate=False) - - # Checkpoints are saved. - self.assertNotEmpty(tf.io.gfile.glob(os.path.join(self.model_dir, "ckpt*"))) - - # Only train summaries are written. - self.assertNotEmpty( - tf.io.gfile.listdir(os.path.join(self.model_dir, "summaries/train"))) - self.assertTrue( - check_eventfile_for_keyword( - "loss", os.path.join(self.model_dir, "summaries/train"))) - self.assertFalse( - tf.io.gfile.exists(os.path.join(self.model_dir, "summaries/eval"))) - - @combinations.generate(all_strategy_combinations()) - def test_evaluate_only(self, strategy): - with strategy.scope(): - test_runnable = TestRunnable() - - checkpoint = tf.train.Checkpoint(model=test_runnable.model) - checkpoint.save(os.path.join(self.model_dir, "ckpt")) - - checkpoint_manager = tf.train.CheckpointManager( - checkpoint, - self.model_dir, - max_to_keep=None, - step_counter=test_runnable.global_step) - test_controller = controller.Controller( - strategy=strategy, - eval_fn=test_runnable.evaluate, - global_step=test_runnable.global_step, - checkpoint_manager=checkpoint_manager, - summary_dir=os.path.join(self.model_dir, "summaries/train"), - eval_summary_dir=os.path.join(self.model_dir, "summaries/eval"), - eval_steps=2, - eval_interval=5) - test_controller.evaluate() - - # Only eval summaries are written - self.assertFalse( - tf.io.gfile.exists(os.path.join(self.model_dir, "summaries/train"))) - self.assertNotEmpty( - tf.io.gfile.listdir(os.path.join(self.model_dir, "summaries/eval"))) - self.assertTrue( - check_eventfile_for_keyword( - "eval_loss", os.path.join(self.model_dir, "summaries/eval"))) - - -if __name__ == "__main__": - tf.test.main() diff --git a/spaces/NCTCMumbai/NCTC/models/research/autoencoder/__init__.py b/spaces/NCTCMumbai/NCTC/models/research/autoencoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Nikhil0987/omm/home.py b/spaces/Nikhil0987/omm/home.py deleted file mode 100644 index b3a01a86af1b3afafac4316429bba6bec90132ed..0000000000000000000000000000000000000000 --- a/spaces/Nikhil0987/omm/home.py +++ /dev/null @@ -1,64 +0,0 @@ -import streamlit as st -from streamlit_option_menu import option_menu -# from chat import Chat -from convo import Convo -from remainder import rem -from transformers 
import pipeline - - -visualqna = pipeline(model="dandelin/vilt-b32-finetuned-vqa") - - -def load_image(): - with st.sidebar: - if img := st.text_input("Enter Image URL") or st.selectbox("Select Image", ("https://images.unsplash.com/photo-1593466144596-8abd50ad2c52?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3434&q=80", "https://images.unsplash.com/photo-1566438480900-0609be27a4be?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3394&q=80")): - if st.button("Load Image"): - st.write("Image Uploaded!") - st.image(img) - else: - st.warning("Please enter an image URL and click 'Load Image' before asking a question.") - return img - - - -def visual_qna(): - st.title("Visual Q&A") - img = load_image() - if img: - if query := st.chat_input("Enter your message"): - response = visualqna(question=query, image=img) - with st.chat_message("assistant"): - st.write(response) - else: - st.warning("Please enter an image URL and click 'Load Image' before asking a question.") - - -def homepage(): - st.title("Home") - # st.header("Home Page") - st.subheader("Welcome to the Home Page") - -def dashboard(): - # st.title("Dashboard") - # st.header("Dashboard") - - with st.sidebar: - selected = option_menu("Menu", ["Home", "Dashboar","Remainder","Visual Q&A","Conversation","Logout"]) - - if selected == "Home": - homepage() - - elif selected == "Dashboard": - "gfjfvjhvjhv" - # elif selected == "Chat": - # Chat() - elif selected == "Conversation": - Convo() - elif selected == "Logout": - st.session_state["user"] = "visitor" - st.experimental_rerun() - elif selected == "Remainder": - rem() - elif selected == 'Visual Q&A': - visual_qna() - \ No newline at end of file diff --git a/spaces/NoCrypt/pixelization/app.py b/spaces/NoCrypt/pixelization/app.py deleted file mode 100644 index d248d99b421089f4e58132a00d9dd0391a89d60a..0000000000000000000000000000000000000000 --- a/spaces/NoCrypt/pixelization/app.py +++ /dev/null @@ -1,77 +0,0 @@ -import gradio as gr -import functools -from pixelization import Model -import torch -import argparse -import huggingface_hub -import os - -TOKEN = os.environ['TOKEN'] - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--theme', type=str, default='default') - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - return parser.parse_args() - -def main(): - args = parse_args() - - - # DL MODEL - # PIX_MODEL - os.environ['PIX_MODEL'] = huggingface_hub.hf_hub_download("NoCrypt/pixelization_models", "pixelart_vgg19.pth", token=TOKEN); - # NET_MODEL - os.environ['NET_MODEL'] = huggingface_hub.hf_hub_download("NoCrypt/pixelization_models", "160_net_G_A.pth", token=TOKEN); - # ALIAS_MODEL - os.environ['ALIAS_MODEL'] = huggingface_hub.hf_hub_download("NoCrypt/pixelization_models", "alias_net.pth", token=TOKEN); - - # For local testing - # PIX_MODEL - # os.environ['PIX_MODEL'] = "pixelart_vgg19.pth" - # # NET_MODEL - # os.environ['NET_MODEL'] = "160_net_G_A.pth" - # # ALIAS_MODEL - # os.environ['ALIAS_MODEL'] = "alias_net.pth" - - - use_cpu = True - m = Model(device = "cpu" if use_cpu else "cuda") - m.load() - - # To use GPU: Change use_cpu to false, and checkout my comment on networks.py at line 107 & 
108 - # + Use torch with cuda support (Change in requirements.txt) - - gr.Interface(m.pixelize_modified, - [ - gr.components.Image(type='pil', label='Input'), - gr.components.Slider(minimum=4, maximum=32, value=4, step=1, label='Pixel Size'), - gr.components.Checkbox(True, label="Upscale after") - ], - gr.components.Image(type='pil', label='Output'), - title="Pixelization", - description=''' -Demo for [WuZongWei6/Pixelization](https://github.com/WuZongWei6/Pixelization) - -Models that are used is private to comply with License. - -Code forked from [arenatemp/pixelization_inference](https://github.com/arenatemp/pixelization_inference) and [AUTOMATIC1111/stable-diffusion-webui-pixelization](https://github.com/AUTOMATIC1111/stable-diffusion-webui-pixelization), modified to work with spaces. - - ''', - theme='NoCrypt/miku', - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/w2v2_feature_reader.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/w2v2_feature_reader.py deleted file mode 100644 index b878321e445093f187e7af5310622a6ac456c30d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/w2v2_feature_reader.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import fairseq -import soundfile as sf - - -class Wav2VecFeatureReader: - """ - Wrapper class to run inference on Wav2Vec 2.0 model. - Helps extract features for a given audio file. - """ - - def __init__(self, checkpoint_path, layer): - state = fairseq.checkpoint_utils.load_checkpoint_to_cpu( - checkpoint_path - ) - - w2v_args = state["args"] - self.task = fairseq.tasks.setup_task(w2v_args) - model = self.task.build_model(w2v_args) - model.load_state_dict(state["model"], strict=True) - model.eval() - model.cuda() - self.model = model - self.layer = layer - - def read_audio(self, fname): - wav, sr = sf.read(fname) - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - assert sr == self.task.cfg.sample_rate, sr - return wav - - def get_feats(self, file_path): - x = self.read_audio(file_path) - with torch.no_grad(): - source = torch.from_numpy(x).view(1, -1).float().cuda() - res = self.model( - source=source, mask=False, features_only=True, layer=self.layer - ) - return res["layer_results"][self.layer][0].squeeze(1) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/translation_moe/score.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/translation_moe/score.py deleted file mode 100644 index 9a529a985019710ea202cb6bf28ae071c0ce4135..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/translation_moe/score.py +++ /dev/null @@ -1,197 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Scoring script for computing pairwise BLEU and multi-ref BLEU over a set of -candidate hypotheses. 
- -See `"Mixture Models for Diverse Machine Translation: Tricks of the Trade" -(Shen et al., 2019) `_. -""" - -import argparse -import random -import sys -from itertools import chain - -import numpy as np -from sacrebleu import compute_bleu, corpus_bleu as _corpus_bleu - - -def main(): - parser = argparse.ArgumentParser(sys.argv[0]) - parser.add_argument( - "--sys", nargs="*", default="", metavar="FILE", help="path to system output" - ) - parser.add_argument("--ref", default="", metavar="FILE", help="path to references") - parser.add_argument( - "--output", - default="", - metavar="FILE", - help="print outputs into a pretty format", - ) - args = parser.parse_args() - - if args.sys: - src, tgt, hypos, log_probs = load_sys(args.sys) - print("pairwise BLEU: %.2f" % pairwise(hypos)) - if args.output: - merge(src, tgt, hypos, log_probs, args.output) - - if args.ref: - _, _, refs = load_ref(args.ref) - if args.sys: - multi_ref(refs, hypos) - else: - intra_ref(refs) - - -def dictolist(d): - a = sorted(d.items(), key=lambda i: i[0]) - return [i[1] for i in a] - - -def load_sys(paths): - src, tgt, hypos, log_probs = {}, {}, {}, {} - for path in paths: - with open(path) as f: - for line in f: - line = line.rstrip() - # S: source - # T: target - # D: detokenized system output - if line.startswith(("S-", "T-", "D-")): - i = int(line[line.find("-") + 1 : line.find("\t")]) - if line.startswith("S-"): - src[i] = line.split("\t")[1] - if line.startswith("T-"): - tgt[i] = line.split("\t")[1] - if line.startswith("D-"): - if i not in hypos: - hypos[i] = [] - log_probs[i] = [] - hypos[i].append(line.split("\t")[2]) - log_probs[i].append(float(line.split("\t")[1])) - return dictolist(src), dictolist(tgt), dictolist(hypos), dictolist(log_probs) - - -def load_ref(path): - with open(path) as f: - lines = f.readlines() - src, tgt, refs = [], [], [] - i = 0 - while i < len(lines): - if lines[i].startswith("S-"): - src.append(lines[i].split("\t")[1].rstrip()) - i += 1 - elif lines[i].startswith("T-"): - tgt.append(lines[i].split("\t")[1].rstrip()) - i += 1 - else: - a = [] - while i < len(lines) and lines[i].startswith("R"): - a.append(lines[i].split("\t")[1].rstrip()) - i += 1 - refs.append(a) - return src, tgt, refs - - -def merge(src, tgt, hypos, log_probs, path): - with open(path, "w") as f: - for s, t, hs, lps in zip(src, tgt, hypos, log_probs): - f.write(s + "\n") - f.write(t + "\n") - f.write("\n") - for h, lp in zip(hs, lps): - f.write("\t%f\t%s\n" % (lp, h.strip())) - f.write("------------------------------------------------------\n") - - -def corpus_bleu(sys_stream, ref_streams): - bleu = _corpus_bleu(sys_stream, ref_streams, tokenize="none") - return bleu.score - - -def sentence_bleu(hypothesis, reference): - bleu = _corpus_bleu(hypothesis, reference) - for i in range(1, 4): - bleu.counts[i] += 1 - bleu.totals[i] += 1 - bleu = compute_bleu( - bleu.counts, - bleu.totals, - bleu.sys_len, - bleu.ref_len, - smooth_method="exp", - ) - return bleu.score - - -def pairwise(sents): - _ref, _hypo = [], [] - for s in sents: - for i in range(len(s)): - for j in range(len(s)): - if i != j: - _ref.append(s[i]) - _hypo.append(s[j]) - return corpus_bleu(_hypo, [_ref]) - - -def multi_ref(refs, hypos): - _ref, _hypo = [], [] - ref_cnt = 0 - assert len(refs) == len(hypos) - - # count number of refs covered - for rs, hs in zip(refs, hypos): - a = set() - for h in hs: - s = [sentence_bleu(h, r) for r in rs] - j = np.argmax(s) - _ref.append(rs[j]) - _hypo.append(h) - best = [k for k in range(len(rs)) if s[k] == s[j]] - 
a.add(random.choice(best)) - ref_cnt += len(a) - print("#refs covered: %.2f" % (ref_cnt / len(refs))) - - # transpose refs and hypos - refs = list(zip(*refs)) - hypos = list(zip(*hypos)) - - # compute multi-ref corpus BLEU (leave-one-out to be comparable to intra_ref) - k = len(hypos) - m = len(refs) - flat_hypos = [hypos[j][i] for i in range(len(hypos[0])) for j in range(k)] - duplicated_refs = [[ref for ref in refs_i for _ in range(k)] for refs_i in refs] - loo_bleus = [] - for held_out_ref in range(m): - remaining_refs = ( - duplicated_refs[:held_out_ref] + duplicated_refs[held_out_ref + 1 :] - ) - assert len(remaining_refs) == m - 1 - loo_bleus.append(corpus_bleu(flat_hypos, remaining_refs)) - print("average multi-reference BLEU (leave-one-out): %.2f" % np.mean(loo_bleus)) - - -def intra_ref(refs): - print("ref pairwise BLEU: %.2f" % pairwise(refs)) - refs = list(zip(*refs)) - m = len(refs) - concat_h = [] - concat_rest = [[] for j in range(m - 1)] - for i, h in enumerate(refs): - rest = refs[:i] + refs[i + 1 :] - concat_h.append(h) - for j in range(m - 1): - concat_rest[j].extend(rest[j]) - concat_h = list(chain.from_iterable(concat_h)) - bleu = corpus_bleu(concat_h, concat_rest) - print("multi-reference BLEU (leave-one-out): %.2f" % bleu) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/tasks/speech_text_joint.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/tasks/speech_text_joint.py deleted file mode 100644 index f2b3966d2d6b103f3dc2ff170c12ab9663875684..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_text_joint_to_text/tasks/speech_text_joint.py +++ /dev/null @@ -1,372 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import logging -import os -from argparse import Namespace -from pathlib import Path - -import torch -from fairseq.data import ( - encoders, - Dictionary, - ResamplingDataset, - TransformEosLangPairDataset, - ConcatDataset, -) -from fairseq.data.iterators import GroupedEpochBatchIterator -from fairseq.data.audio.multi_modality_dataset import ( - MultiModalityDataset, - LangPairMaskDataset, - ModalityDatasetItem, -) -from fairseq.data.audio.speech_to_text_dataset import SpeechToTextDataset, SpeechToTextDatasetCreator -from fairseq.data.audio.speech_to_text_joint_dataset import ( - S2TJointDataConfig, - SpeechToTextJointDatasetCreator, -) -from fairseq.tasks import register_task -from fairseq.tasks.speech_to_text import SpeechToTextTask -from fairseq.tasks.translation import load_langpair_dataset - -logger = logging.getLogger(__name__) -LANG_TAG_TEMPLATE = "" - - -@register_task("speech_text_joint_to_text") -class SpeechTextJointToTextTask(SpeechToTextTask): - """ - Task for joint training speech and text to text. 
- """ - - @classmethod - def add_args(cls, parser): - """Add task-specific arguments to the parser.""" - super(SpeechTextJointToTextTask, cls).add_args(parser) - ### - parser.add_argument( - "--parallel-text-data", - default="", - help="path to parallel text data directory", - ) - parser.add_argument( - "--max-tokens-text", - type=int, - metavar="N", - help="maximum tokens for encoder text input ", - ) - parser.add_argument( - "--max-positions-text", - type=int, - metavar="N", - default=400, - help="maximum tokens for per encoder text input ", - ) - parser.add_argument( - "--langpairs", - default=None, - metavar="S", - help='language pairs for text training, separated with ","', - ) - parser.add_argument( - "--speech-sample-ratio", - default=1, - type=float, - metavar="N", - help="Multiple Ratio for speech dataset with transcripts ", - ) - parser.add_argument( - "--text-sample-ratio", - default=1, - type=float, - metavar="N", - help="Multiple Ratio for text set ", - ) - parser.add_argument( - "--update-mix-data", - action="store_true", - help="use mixed data in one update when update-freq > 1", - ) - parser.add_argument( - "--load-speech-only", - action="store_true", - help="load speech data only", - ) - parser.add_argument( - "--mask-text-ratio", - type=float, - metavar="V", - default=0.0, - help="mask V source tokens for text only mode", - ) - parser.add_argument( - "--mask-text-type", - default="random", - choices=["random", "tail"], - help="mask text typed", - ) - parser.add_argument( - "--noise-token", - default="", - help="noise token for masking src text tokens if mask-text-ratio > 0", - ) - parser.add_argument( - "--infer-target-lang", - default="", - metavar="S", - help="target language for inference", - ) - - def __init__(self, args, src_dict, tgt_dict, infer_tgt_lang_id=None): - super().__init__(args, tgt_dict) - self.src_dict = src_dict - self.data_cfg = S2TJointDataConfig(Path(args.data) / args.config_yaml) - assert self.tgt_dict.pad() == self.src_dict.pad() - assert self.tgt_dict.eos() == self.src_dict.eos() - self.speech_only = args.load_speech_only - self._infer_tgt_lang_id = infer_tgt_lang_id - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task (e.g., load dictionaries).""" - data_cfg = S2TJointDataConfig(Path(args.data) / args.config_yaml) - tgt_dict_path = Path(args.data) / data_cfg.vocab_filename - src_dict_path = Path(args.data) / data_cfg.src_vocab_filename - if (not os.path.isfile(src_dict_path)) or (not os.path.isfile(tgt_dict_path)): - raise FileNotFoundError("Dict not found: {}".format(args.data)) - src_dict = Dictionary.load(src_dict_path.as_posix()) - tgt_dict = Dictionary.load(tgt_dict_path.as_posix()) - - print("| src dictionary: {} types".format(len(src_dict))) - print("| tgt dictionary: {} types".format(len(tgt_dict))) - - if args.parallel_text_data != "": - if not os.path.isabs(args.parallel_text_data): - args.parallel_text_data = os.path.join( - args.data, args.parallel_text_data - ) - - if args.langpairs is None: - raise Exception( - "Could not infer language pair, please provide it explicitly" - ) - infer_tgt_lang_id = None - if args.infer_target_lang != "" and data_cfg.prepend_tgt_lang_tag_no_change: - tgt_lang_tag = SpeechToTextDataset.LANG_TAG_TEMPLATE.format( - args.infer_target_lang - ) - infer_tgt_lang_id = tgt_dict.index(tgt_lang_tag) - assert infer_tgt_lang_id != tgt_dict.unk() - return cls(args, src_dict, tgt_dict, infer_tgt_lang_id=infer_tgt_lang_id) - - def load_langpair_dataset(self, prepend_tgt_lang_tag=False, 
sampling_alpha=1.0, epoch=0): - lang_pairs = [] - text_dataset = None - split = "train" - for lp in self.args.langpairs.split(","): - src, tgt = lp.split("-") - text_dataset = load_langpair_dataset( - self.args.parallel_text_data, - split, - src, - self.src_dict, - tgt, - self.tgt_dict, - combine=True, - dataset_impl=None, - upsample_primary=1, - left_pad_source=False, - left_pad_target=False, - max_source_positions=self.args.max_positions_text, - max_target_positions=self.args.max_target_positions, - load_alignments=False, - truncate_source=False, - ) - if prepend_tgt_lang_tag: - # TODO - text_dataset = TransformEosLangPairDataset( - text_dataset, - src_eos=self.src_dict.eos(), - tgt_bos=self.tgt_dict.eos(), # 'prev_output_tokens' starts with eos - new_tgt_bos=self.tgt_dict.index(LANG_TAG_TEMPLATE.format(tgt)), - ) - lang_pairs.append(text_dataset) - if len(lang_pairs) > 1: - if sampling_alpha != 1.0: - size_ratios = SpeechToTextDatasetCreator.get_size_ratios( - self.args.langpairs.split(","), - [len(s) for s in lang_pairs], - alpha=sampling_alpha, - ) - lang_pairs = [ - ResamplingDataset( - d, size_ratio=r, epoch=epoch, replace=(r >= 1.0) - ) - for d, r in zip(lang_pairs, size_ratios) - ] - return ConcatDataset(lang_pairs) - return text_dataset - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - with torch.no_grad(): - return generator.generate( - models, - sample, - prefix_tokens=prefix_tokens, - constraints=constraints, - bos_token=self._infer_tgt_lang_id, - ) - - def build_src_tokenizer(self, args): - logger.info(f"src-pre-tokenizer: {self.data_cfg.src_pre_tokenizer}") - return encoders.build_tokenizer(Namespace(**self.data_cfg.src_pre_tokenizer)) - - def build_src_bpe(self, args): - logger.info(f"tokenizer: {self.data_cfg.src_bpe_tokenizer}") - return encoders.build_bpe(Namespace(**self.data_cfg.src_bpe_tokenizer)) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. 
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - is_train_split = split.startswith("train") - pre_tokenizer = self.build_tokenizer(self.args) - bpe_tokenizer = self.build_bpe(self.args) - src_pre_tokenizer = self.build_src_tokenizer(self.args) - src_bpe_tokenizer = self.build_src_bpe(self.args) - ast_dataset = SpeechToTextJointDatasetCreator.from_tsv( - self.args.data, - self.data_cfg, - split, - self.tgt_dict, - src_dict=None if self.speech_only else self.src_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - src_pre_tokenizer=src_pre_tokenizer, - src_bpe_tokenizer=src_bpe_tokenizer, - is_train_split=is_train_split, - epoch=epoch, - seed=self.args.seed, - ) - noise_token_id = -1 - text_dataset = None - if self.args.parallel_text_data != "" and is_train_split: - text_dataset = self.load_langpair_dataset( - self.data_cfg.prepend_tgt_lang_tag_no_change, - 1.0, - epoch=epoch, - ) - if self.args.mask_text_ratio > 0: - # add mask - noise_token_id = ( - self.src_dict.unk() - if self.args.noise_token == "" - else self.src_dict.index(self.args.noise_token) - ) - text_dataset = LangPairMaskDataset( - text_dataset, - src_bos=self.src_dict.bos(), - src_eos=self.src_dict.eos(), - noise_id=noise_token_id, - mask_ratio=self.args.mask_text_ratio, - mask_type=self.args.mask_text_type, - ) - - if text_dataset is not None: - mdsets = [ - ModalityDatasetItem( - "sup_speech", - ast_dataset, - (self.args.max_source_positions, self.args.max_target_positions), - self.args.max_tokens, - self.args.batch_size, - ), - ModalityDatasetItem( - "text", - text_dataset, - (self.args.max_positions_text, self.args.max_target_positions), - self.args.max_tokens_text - if self.args.max_tokens_text is not None - else self.args.max_tokens, - self.args.batch_size, - ), - ] - ast_dataset = MultiModalityDataset(mdsets) - self.datasets[split] = ast_dataset - - @property - def target_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self.tgt_dict - - @property - def source_dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary` (if applicable - for this task).""" - return None if self.speech_only else self.src_dict - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=0, - data_buffer_size=0, - disable_iterator_cache=False, - ): - - if not isinstance(dataset, MultiModalityDataset): - return super(SpeechTextJointToTextTask, self).get_batch_iterator( - dataset, - max_tokens, - max_sentences, - max_positions, - ignore_invalid_inputs, - required_batch_size_multiple, - seed, - num_shards, - shard_id, - num_workers, - epoch, - data_buffer_size, - disable_iterator_cache, - ) - - mult_ratio = [self.args.speech_sample_ratio, self.args.text_sample_ratio] - assert len(dataset.datasets) == 2 - - # initialize the dataset with the correct starting epoch - dataset.set_epoch(epoch) - - batch_samplers = dataset.get_batch_samplers( - mult_ratio, required_batch_size_multiple, seed - ) - - # return a reusable, sharded iterator - epoch_iter = GroupedEpochBatchIterator( - dataset=dataset, - collate_fn=dataset.collater, - batch_samplers=batch_samplers, - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - epoch=epoch, - mult_rate=1 if self.args.update_mix_data else max(self.args.update_freq), - 
buffer_size=data_buffer_size, - ) - self.dataset_to_epoch_iter[dataset] = {} # refresh it every epoch - return epoch_iter diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/evaluation/panoptic_evaluation.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/evaluation/panoptic_evaluation.py deleted file mode 100644 index 9fb3462b7f9abf6feaa499976bfed526ebd17e31..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/evaluation/panoptic_evaluation.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import contextlib -import io -import itertools -import json -import logging -import numpy as np -import os -import tempfile -from collections import OrderedDict -from typing import Optional -from PIL import Image -from tabulate import tabulate - -from detectron2.data import MetadataCatalog -from detectron2.utils import comm -from detectron2.utils.file_io import PathManager - -from .evaluator import DatasetEvaluator - -logger = logging.getLogger(__name__) - - -class COCOPanopticEvaluator(DatasetEvaluator): - """ - Evaluate Panoptic Quality metrics on COCO using PanopticAPI. - It saves panoptic segmentation prediction in `output_dir` - - It contains a synchronize call and has to be called from all workers. - """ - - def __init__(self, dataset_name: str, output_dir: Optional[str] = None): - """ - Args: - dataset_name: name of the dataset - output_dir: output directory to save results for evaluation. - """ - self._metadata = MetadataCatalog.get(dataset_name) - self._thing_contiguous_id_to_dataset_id = { - v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items() - } - self._stuff_contiguous_id_to_dataset_id = { - v: k for k, v in self._metadata.stuff_dataset_id_to_contiguous_id.items() - } - - self._output_dir = output_dir - if self._output_dir is not None: - PathManager.mkdirs(self._output_dir) - - def reset(self): - self._predictions = [] - - def _convert_category_id(self, segment_info): - isthing = segment_info.pop("isthing", None) - if isthing is None: - # the model produces panoptic category id directly. No more conversion needed - return segment_info - if isthing is True: - segment_info["category_id"] = self._thing_contiguous_id_to_dataset_id[ - segment_info["category_id"] - ] - else: - segment_info["category_id"] = self._stuff_contiguous_id_to_dataset_id[ - segment_info["category_id"] - ] - return segment_info - - def process(self, inputs, outputs): - from panopticapi.utils import id2rgb - - for input, output in zip(inputs, outputs): - panoptic_img, segments_info = output["panoptic_seg"] - panoptic_img = panoptic_img.cpu().numpy() - if segments_info is None: - # If "segments_info" is None, we assume "panoptic_img" is a - # H*W int32 image storing the panoptic_id in the format of - # category_id * label_divisor + instance_id. We reserve -1 for - # VOID label, and add 1 to panoptic_img since the official - # evaluation script uses 0 for VOID label. - label_divisor = self._metadata.label_divisor - segments_info = [] - for panoptic_label in np.unique(panoptic_img): - if panoptic_label == -1: - # VOID region. 
- continue - pred_class = panoptic_label // label_divisor - isthing = ( - pred_class in self._metadata.thing_dataset_id_to_contiguous_id.values() - ) - segments_info.append( - { - "id": int(panoptic_label) + 1, - "category_id": int(pred_class), - "isthing": bool(isthing), - } - ) - # Official evaluation script uses 0 for VOID label. - panoptic_img += 1 - - file_name = os.path.basename(input["file_name"]) - file_name_png = os.path.splitext(file_name)[0] + ".png" - with io.BytesIO() as out: - Image.fromarray(id2rgb(panoptic_img)).save(out, format="PNG") - segments_info = [self._convert_category_id(x) for x in segments_info] - self._predictions.append( - { - "image_id": input["image_id"], - "file_name": file_name_png, - "png_string": out.getvalue(), - "segments_info": segments_info, - } - ) - - def evaluate(self): - comm.synchronize() - - self._predictions = comm.gather(self._predictions) - self._predictions = list(itertools.chain(*self._predictions)) - if not comm.is_main_process(): - return - - # PanopticApi requires local files - gt_json = PathManager.get_local_path(self._metadata.panoptic_json) - gt_folder = PathManager.get_local_path(self._metadata.panoptic_root) - - with tempfile.TemporaryDirectory(prefix="panoptic_eval") as pred_dir: - logger.info("Writing all panoptic predictions to {} ...".format(pred_dir)) - for p in self._predictions: - with open(os.path.join(pred_dir, p["file_name"]), "wb") as f: - f.write(p.pop("png_string")) - - with open(gt_json, "r") as f: - json_data = json.load(f) - json_data["annotations"] = self._predictions - - output_dir = self._output_dir or pred_dir - predictions_json = os.path.join(output_dir, "predictions.json") - with PathManager.open(predictions_json, "w") as f: - f.write(json.dumps(json_data)) - - from panopticapi.evaluation import pq_compute - - with contextlib.redirect_stdout(io.StringIO()): - pq_res = pq_compute( - gt_json, - PathManager.get_local_path(predictions_json), - gt_folder=gt_folder, - pred_folder=pred_dir, - ) - - res = {} - res["PQ"] = 100 * pq_res["All"]["pq"] - res["SQ"] = 100 * pq_res["All"]["sq"] - res["RQ"] = 100 * pq_res["All"]["rq"] - res["PQ_th"] = 100 * pq_res["Things"]["pq"] - res["SQ_th"] = 100 * pq_res["Things"]["sq"] - res["RQ_th"] = 100 * pq_res["Things"]["rq"] - res["PQ_st"] = 100 * pq_res["Stuff"]["pq"] - res["SQ_st"] = 100 * pq_res["Stuff"]["sq"] - res["RQ_st"] = 100 * pq_res["Stuff"]["rq"] - - results = OrderedDict({"panoptic_seg": res}) - _print_panoptic_results(pq_res) - - return results - - -def _print_panoptic_results(pq_res): - headers = ["", "PQ", "SQ", "RQ", "#categories"] - data = [] - for name in ["All", "Things", "Stuff"]: - row = [name] + [pq_res[name][k] * 100 for k in ["pq", "sq", "rq"]] + [pq_res[name]["n"]] - data.append(row) - table = tabulate( - data, headers=headers, tablefmt="pipe", floatfmt=".3f", stralign="center", numalign="center" - ) - logger.info("Panoptic Evaluation Results:\n" + table) - - -if __name__ == "__main__": - from detectron2.utils.logger import setup_logger - - logger = setup_logger() - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("--gt-json") - parser.add_argument("--gt-dir") - parser.add_argument("--pred-json") - parser.add_argument("--pred-dir") - args = parser.parse_args() - - from panopticapi.evaluation import pq_compute - - with contextlib.redirect_stdout(io.StringIO()): - pq_res = pq_compute( - args.gt_json, args.pred_json, gt_folder=args.gt_dir, pred_folder=args.pred_dir - ) - _print_panoptic_results(pq_res) diff --git 
a/spaces/OptorAI/site/README.md b/spaces/OptorAI/site/README.md deleted file mode 100644 index b5b5ca4b556b09c2217164f809ed18c679628bf9..0000000000000000000000000000000000000000 --- a/spaces/OptorAI/site/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Site -emoji: 🌐OPTOR -colorFrom: indigo -colorTo: yellow -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/PKaushik/Human-Part-Segmentation/README.md b/spaces/PKaushik/Human-Part-Segmentation/README.md deleted file mode 100644 index 3e1d917c1952f8d7e8248754331467ae3164e20e..0000000000000000000000000000000000000000 --- a/spaces/PKaushik/Human-Part-Segmentation/README.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: Human Part Segmentation -emoji: 👤 -colorFrom: gray -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false -tags: -- computer-vision -- image-segmentation -license: cc0-1.0 -duplicated_from: keras-io/Human-Part-Segmentation ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Panel-Org/panel-demo-image-classification/Dockerfile b/spaces/Panel-Org/panel-demo-image-classification/Dockerfile deleted file mode 100644 index c33a0787f9bfc4eb7088822ae9e724bad601c068..0000000000000000000000000000000000000000 --- a/spaces/Panel-Org/panel-demo-image-classification/Dockerfile +++ /dev/null @@ -1,16 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt -RUN python3 -m pip install --no-cache-dir --upgrade pip -RUN python3 -m pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY . . 
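-# The CMD below serves the Panel app on 0.0.0.0:7860, the default port Hugging Face Spaces expect, and allows websocket connections from any origin.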
- -CMD ["panel", "serve", "/code/app.py", "--address", "0.0.0.0", "--port", "7860", "--allow-websocket-origin", "*"] - -RUN mkdir /.cache -RUN chmod 777 /.cache -RUN mkdir .chroma -RUN chmod 777 .chroma \ No newline at end of file diff --git a/spaces/PeepDaSlan9/AutoGPT/ui/app.py b/spaces/PeepDaSlan9/AutoGPT/ui/app.py deleted file mode 100644 index d7dbd31e901969d090292215935bdbc3d9d75e37..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/AutoGPT/ui/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import gradio as gr -import utils -from api import AutoAPI, get_openai_api_key -import os, shutil -import json - -FILE_DIR = os.path.dirname(os.path.abspath(__file__)) -OUTPUT_DIR = os.path.join(os.path.dirname(FILE_DIR), "auto_gpt_workspace") -if not os.path.exists(OUTPUT_DIR): - os.mkdir(OUTPUT_DIR) - -CSS = """ -#chatbot {font-family: monospace;} -#files .generating {display: none;} -#files .min {min-height: 0px;} -""" - -with gr.Blocks(css=CSS) as app: - with gr.Column() as setup_pane: - gr.Markdown(f"""# Auto-GPT - 1. Duplicate this Space: Duplicate Space This will **NOT** work without duplication! - 2. Enter your OpenAI API Key below. - """) - with gr.Row(): - open_ai_key = gr.Textbox( - value=get_openai_api_key(), - label="OpenAI API Key", - type="password", - ) - gr.Markdown( - "3. Fill the values below, then click 'Start'. There are example values you can load at the bottom of this page." - ) - with gr.Row(): - ai_name = gr.Textbox(label="AI Name", placeholder="e.g. Entrepreneur-GPT") - ai_role = gr.Textbox( - label="AI Role", - placeholder="e.g. an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.", - ) - top_5_goals = gr.Dataframe( - row_count=(5, "fixed"), - col_count=(1, "fixed"), - headers=["AI Goals - Enter up to 5"], - type="array" - ) - start_btn = gr.Button("Start", variant="primary") - with open(os.path.join(FILE_DIR, "examples.json"), "r") as f: - example_values = json.load(f) - gr.Examples( - example_values, - [ai_name, ai_role, top_5_goals], - ) - with gr.Column(visible=False) as main_pane: - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot(elem_id="chatbot") - with gr.Row(): - yes_btn = gr.Button("Yes", variant="primary", interactive=False) - consecutive_yes = gr.Slider( - 1, 10, 1, step=1, label="Consecutive Yes", interactive=False - ) - custom_response = gr.Textbox( - label="Custom Response", - placeholder="Press 'Enter' to Submit.", - interactive=False, - ) - with gr.Column(scale=1): - gr.HTML( - lambda: f""" - Generated Files -
        {utils.format_directory(OUTPUT_DIR)}
        - """, every=3, elem_id="files" - ) - download_btn = gr.Button("Download All Files") - - chat_history = gr.State([[None, None]]) - api = gr.State(None) - - def start(open_ai_key, ai_name, ai_role, top_5_goals): - auto_api = AutoAPI(open_ai_key, ai_name, ai_role, top_5_goals) - return gr.Column.update(visible=False), gr.Column.update(visible=True), auto_api - - def bot_response(chat, api): - messages = [] - for message in api.get_chatbot_response(): - messages.append(message) - chat[-1][1] = "\n".join(messages) + "..." - yield chat - chat[-1][1] = "\n".join(messages) - yield chat - - def send_message(count, chat, api, message="Y"): - if message != "Y": - count = 1 - for i in range(count): - chat.append([message, None]) - yield chat, count - i - api.send_message(message) - for updated_chat in bot_response(chat, api): - yield updated_chat, count - i - - def activate_inputs(): - return { - yes_btn: gr.Button.update(interactive=True), - consecutive_yes: gr.Slider.update(interactive=True), - custom_response: gr.Textbox.update(interactive=True), - } - - def deactivate_inputs(): - return { - yes_btn: gr.Button.update(interactive=False), - consecutive_yes: gr.Slider.update(interactive=False), - custom_response: gr.Textbox.update(interactive=False), - } - - start_btn.click( - start, - [open_ai_key, ai_name, ai_role, top_5_goals], - [setup_pane, main_pane, api], - ).then(bot_response, [chat_history, api], chatbot).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - yes_btn.click( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, [consecutive_yes, chat_history, api], [chatbot, consecutive_yes] - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - custom_response.submit( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, - [consecutive_yes, chat_history, api, custom_response], - [chatbot, consecutive_yes], - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - def download_all_files(): - shutil.make_archive("outputs", "zip", OUTPUT_DIR) - - download_btn.click(download_all_files).then(None, _js=utils.DOWNLOAD_OUTPUTS_JS) - -app.queue(concurrency_count=20).launch(file_directories=[OUTPUT_DIR]) diff --git a/spaces/Plurigrid/LifeSim/src/app/agents/pick.ts b/spaces/Plurigrid/LifeSim/src/app/agents/pick.ts deleted file mode 100644 index 48dc2995f08d8c3774a9b7b35b808064313361a7..0000000000000000000000000000000000000000 --- a/spaces/Plurigrid/LifeSim/src/app/agents/pick.ts +++ /dev/null @@ -1,2 +0,0 @@ - -export const pick = (items: string[]) => items[Math.floor(Math.random()*items.length)] diff --git a/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py deleted file mode 100644 index ee3171bcb7c4a5066560723108b56e055f18be45..0000000000000000000000000000000000000000 --- a/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py +++ /dev/null @@ -1,90 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class DioF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 
1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/adversarial/losses.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/adversarial/losses.py deleted file mode 100644 index be293e739bdc2d91273f30fb789befe7c8b49a43..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/adversarial/losses.py +++ /dev/null @@ -1,228 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility module to handle adversarial losses without requiring to mess up the main training loop. -""" - -import typing as tp - -import flashy -import torch -import torch.nn as nn -import torch.nn.functional as F - - -ADVERSARIAL_LOSSES = ['mse', 'hinge', 'hinge2'] - - -AdvLossType = tp.Union[nn.Module, tp.Callable[[torch.Tensor], torch.Tensor]] -FeatLossType = tp.Union[nn.Module, tp.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] - - -class AdversarialLoss(nn.Module): - """Adversary training wrapper. - - Args: - adversary (nn.Module): The adversary module will be used to estimate the logits given the fake and real samples. - We assume here the adversary output is ``Tuple[List[torch.Tensor], List[List[torch.Tensor]]]`` - where the first item is a list of logits and the second item is a list of feature maps. - optimizer (torch.optim.Optimizer): Optimizer used for training the given module. 
- loss (AdvLossType): Loss function for generator training. - loss_real (AdvLossType): Loss function for adversarial training on logits from real samples. - loss_fake (AdvLossType): Loss function for adversarial training on logits from fake samples. - loss_feat (FeatLossType): Feature matching loss function for generator training. - normalize (bool): Whether to normalize by number of sub-discriminators. - - Example of usage: - adv_loss = AdversarialLoss(adversaries, optimizer, loss, loss_real, loss_fake) - for real in loader: - noise = torch.randn(...) - fake = model(noise) - adv_loss.train_adv(fake, real) - loss, _ = adv_loss(fake, real) - loss.backward() - """ - def __init__(self, - adversary: nn.Module, - optimizer: torch.optim.Optimizer, - loss: AdvLossType, - loss_real: AdvLossType, - loss_fake: AdvLossType, - loss_feat: tp.Optional[FeatLossType] = None, - normalize: bool = True): - super().__init__() - self.adversary: nn.Module = adversary - flashy.distrib.broadcast_model(self.adversary) - self.optimizer = optimizer - self.loss = loss - self.loss_real = loss_real - self.loss_fake = loss_fake - self.loss_feat = loss_feat - self.normalize = normalize - - def _save_to_state_dict(self, destination, prefix, keep_vars): - # Add the optimizer state dict inside our own. - super()._save_to_state_dict(destination, prefix, keep_vars) - destination[prefix + 'optimizer'] = self.optimizer.state_dict() - return destination - - def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs): - # Load optimizer state. - self.optimizer.load_state_dict(state_dict.pop(prefix + 'optimizer')) - super()._load_from_state_dict(state_dict, prefix, *args, **kwargs) - - def get_adversary_pred(self, x): - """Run adversary model, validating expected output format.""" - logits, fmaps = self.adversary(x) - assert isinstance(logits, list) and all([isinstance(t, torch.Tensor) for t in logits]), \ - f'Expecting a list of tensors as logits but {type(logits)} found.' - assert isinstance(fmaps, list), f'Expecting a list of features maps but {type(fmaps)} found.' - for fmap in fmaps: - assert isinstance(fmap, list) and all([isinstance(f, torch.Tensor) for f in fmap]), \ - f'Expecting a list of tensors as feature maps but {type(fmap)} found.' - return logits, fmaps - - def train_adv(self, fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor: - """Train the adversary with the given fake and real example. - - We assume the adversary output is the following format: Tuple[List[torch.Tensor], List[List[torch.Tensor]]]. - The first item being the logits and second item being a list of feature maps for each sub-discriminator. - - This will automatically synchronize gradients (with `flashy.distrib.eager_sync_model`) - and call the optimizer. - """ - loss = torch.tensor(0., device=fake.device) - all_logits_fake_is_fake, _ = self.get_adversary_pred(fake.detach()) - all_logits_real_is_fake, _ = self.get_adversary_pred(real.detach()) - n_sub_adversaries = len(all_logits_fake_is_fake) - for logit_fake_is_fake, logit_real_is_fake in zip(all_logits_fake_is_fake, all_logits_real_is_fake): - loss += self.loss_fake(logit_fake_is_fake) + self.loss_real(logit_real_is_fake) - - if self.normalize: - loss /= n_sub_adversaries - - self.optimizer.zero_grad() - with flashy.distrib.eager_sync_model(self.adversary): - loss.backward() - self.optimizer.step() - - return loss - - def forward(self, fake: torch.Tensor, real: torch.Tensor) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Return the loss for the generator, i.e. 
trying to fool the adversary, - and feature matching loss if provided. - """ - adv = torch.tensor(0., device=fake.device) - feat = torch.tensor(0., device=fake.device) - with flashy.utils.readonly(self.adversary): - all_logits_fake_is_fake, all_fmap_fake = self.get_adversary_pred(fake) - all_logits_real_is_fake, all_fmap_real = self.get_adversary_pred(real) - n_sub_adversaries = len(all_logits_fake_is_fake) - for logit_fake_is_fake in all_logits_fake_is_fake: - adv += self.loss(logit_fake_is_fake) - if self.loss_feat: - for fmap_fake, fmap_real in zip(all_fmap_fake, all_fmap_real): - feat += self.loss_feat(fmap_fake, fmap_real) - - if self.normalize: - adv /= n_sub_adversaries - feat /= n_sub_adversaries - - return adv, feat - - -def get_adv_criterion(loss_type: str) -> tp.Callable: - assert loss_type in ADVERSARIAL_LOSSES - if loss_type == 'mse': - return mse_loss - elif loss_type == 'hinge': - return hinge_loss - elif loss_type == 'hinge2': - return hinge2_loss - raise ValueError('Unsupported loss') - - -def get_fake_criterion(loss_type: str) -> tp.Callable: - assert loss_type in ADVERSARIAL_LOSSES - if loss_type == 'mse': - return mse_fake_loss - elif loss_type in ['hinge', 'hinge2']: - return hinge_fake_loss - raise ValueError('Unsupported loss') - - -def get_real_criterion(loss_type: str) -> tp.Callable: - assert loss_type in ADVERSARIAL_LOSSES - if loss_type == 'mse': - return mse_real_loss - elif loss_type in ['hinge', 'hinge2']: - return hinge_real_loss - raise ValueError('Unsupported loss') - - -def mse_real_loss(x: torch.Tensor) -> torch.Tensor: - return F.mse_loss(x, torch.tensor(1., device=x.device).expand_as(x)) - - -def mse_fake_loss(x: torch.Tensor) -> torch.Tensor: - return F.mse_loss(x, torch.tensor(0., device=x.device).expand_as(x)) - - -def hinge_real_loss(x: torch.Tensor) -> torch.Tensor: - return -torch.mean(torch.min(x - 1, torch.tensor(0., device=x.device).expand_as(x))) - - -def hinge_fake_loss(x: torch.Tensor) -> torch.Tensor: - return -torch.mean(torch.min(-x - 1, torch.tensor(0., device=x.device).expand_as(x))) - - -def mse_loss(x: torch.Tensor) -> torch.Tensor: - if x.numel() == 0: - return torch.tensor([0.0], device=x.device) - return F.mse_loss(x, torch.tensor(1., device=x.device).expand_as(x)) - - -def hinge_loss(x: torch.Tensor) -> torch.Tensor: - if x.numel() == 0: - return torch.tensor([0.0], device=x.device) - return -x.mean() - - -def hinge2_loss(x: torch.Tensor) -> torch.Tensor: - if x.numel() == 0: - return torch.tensor([0.0]) - return -torch.mean(torch.min(x - 1, torch.tensor(0., device=x.device).expand_as(x))) - - -class FeatureMatchingLoss(nn.Module): - """Feature matching loss for adversarial training. - - Args: - loss (nn.Module): Loss to use for feature matching (default=torch.nn.L1). - normalize (bool): Whether to normalize the loss. - by number of feature maps. 
- """ - def __init__(self, loss: nn.Module = torch.nn.L1Loss(), normalize: bool = True): - super().__init__() - self.loss = loss - self.normalize = normalize - - def forward(self, fmap_fake: tp.List[torch.Tensor], fmap_real: tp.List[torch.Tensor]) -> torch.Tensor: - assert len(fmap_fake) == len(fmap_real) and len(fmap_fake) > 0 - feat_loss = torch.tensor(0., device=fmap_fake[0].device) - feat_scale = torch.tensor(0., device=fmap_fake[0].device) - n_fmaps = 0 - for (feat_fake, feat_real) in zip(fmap_fake, fmap_real): - assert feat_fake.shape == feat_real.shape - n_fmaps += 1 - feat_loss += self.loss(feat_fake, feat_real) - feat_scale += torch.mean(torch.abs(feat_real)) - - if self.normalize: - feat_loss /= n_fmaps - - return feat_loss diff --git a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/models/cond_transformer.py b/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/models/cond_transformer.py deleted file mode 100644 index e4c63730fa86ac1b92b37af14c14fb696595b1ab..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/models/cond_transformer.py +++ /dev/null @@ -1,352 +0,0 @@ -import os, math -import torch -import torch.nn.functional as F -import pytorch_lightning as pl - -from main import instantiate_from_config -from taming.modules.util import SOSProvider - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -class Net2NetTransformer(pl.LightningModule): - def __init__(self, - transformer_config, - first_stage_config, - cond_stage_config, - permuter_config=None, - ckpt_path=None, - ignore_keys=[], - first_stage_key="image", - cond_stage_key="depth", - downsample_cond_size=-1, - pkeep=1.0, - sos_token=0, - unconditional=False, - ): - super().__init__() - self.be_unconditional = unconditional - self.sos_token = sos_token - self.first_stage_key = first_stage_key - self.cond_stage_key = cond_stage_key - self.init_first_stage_from_ckpt(first_stage_config) - self.init_cond_stage_from_ckpt(cond_stage_config) - if permuter_config is None: - permuter_config = {"target": "taming.modules.transformer.permuter.Identity"} - self.permuter = instantiate_from_config(config=permuter_config) - self.transformer = instantiate_from_config(config=transformer_config) - - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - self.downsample_cond_size = downsample_cond_size - self.pkeep = pkeep - - def init_from_ckpt(self, path, ignore_keys=list()): - sd = torch.load(path, map_location="cpu")["state_dict"] - for k in sd.keys(): - for ik in ignore_keys: - if k.startswith(ik): - self.print("Deleting key {} from state_dict.".format(k)) - del sd[k] - self.load_state_dict(sd, strict=False) - print(f"Restored from {path}") - - def init_first_stage_from_ckpt(self, config): - model = instantiate_from_config(config) - model = model.eval() - model.train = disabled_train - self.first_stage_model = model - - def init_cond_stage_from_ckpt(self, config): - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__" or self.be_unconditional: - print(f"Using no cond stage. Assuming the training is intended to be unconditional. 
" - f"Prepending {self.sos_token} as a sos token.") - self.be_unconditional = True - self.cond_stage_key = self.first_stage_key - self.cond_stage_model = SOSProvider(self.sos_token) - else: - model = instantiate_from_config(config) - model = model.eval() - model.train = disabled_train - self.cond_stage_model = model - - def forward(self, x, c): - # one step to produce the logits - _, z_indices = self.encode_to_z(x) - _, c_indices = self.encode_to_c(c) - - if self.training and self.pkeep < 1.0: - mask = torch.bernoulli(self.pkeep*torch.ones(z_indices.shape, - device=z_indices.device)) - mask = mask.round().to(dtype=torch.int64) - r_indices = torch.randint_like(z_indices, self.transformer.config.vocab_size) - a_indices = mask*z_indices+(1-mask)*r_indices - else: - a_indices = z_indices - - cz_indices = torch.cat((c_indices, a_indices), dim=1) - - # target includes all sequence elements (no need to handle first one - # differently because we are conditioning) - target = z_indices - # make the prediction - logits, _ = self.transformer(cz_indices[:, :-1]) - # cut off conditioning outputs - output i corresponds to p(z_i | z_{ -1: - c = F.interpolate(c, size=(self.downsample_cond_size, self.downsample_cond_size)) - quant_c, _, [_,_,indices] = self.cond_stage_model.encode(c) - if len(indices.shape) > 2: - indices = indices.view(c.shape[0], -1) - return quant_c, indices - - @torch.no_grad() - def decode_to_img(self, index, zshape): - index = self.permuter(index, reverse=True) - bhwc = (zshape[0],zshape[2],zshape[3],zshape[1]) - quant_z = self.first_stage_model.quantize.get_codebook_entry( - index.reshape(-1), shape=bhwc) - x = self.first_stage_model.decode(quant_z) - return x - - @torch.no_grad() - def log_images(self, batch, temperature=None, top_k=None, callback=None, lr_interface=False, **kwargs): - log = dict() - - N = 4 - if lr_interface: - x, c = self.get_xc(batch, N, diffuse=False, upsample_factor=8) - else: - x, c = self.get_xc(batch, N) - x = x.to(device=self.device) - c = c.to(device=self.device) - - quant_z, z_indices = self.encode_to_z(x) - quant_c, c_indices = self.encode_to_c(c) - - # create a "half"" sample - z_start_indices = z_indices[:,:z_indices.shape[1]//2] - index_sample = self.sample(z_start_indices, c_indices, - steps=z_indices.shape[1]-z_start_indices.shape[1], - temperature=temperature if temperature is not None else 1.0, - sample=True, - top_k=top_k if top_k is not None else 100, - callback=callback if callback is not None else lambda k: None) - x_sample = self.decode_to_img(index_sample, quant_z.shape) - - # sample - z_start_indices = z_indices[:, :0] - index_sample = self.sample(z_start_indices, c_indices, - steps=z_indices.shape[1], - temperature=temperature if temperature is not None else 1.0, - sample=True, - top_k=top_k if top_k is not None else 100, - callback=callback if callback is not None else lambda k: None) - x_sample_nopix = self.decode_to_img(index_sample, quant_z.shape) - - # det sample - z_start_indices = z_indices[:, :0] - index_sample = self.sample(z_start_indices, c_indices, - steps=z_indices.shape[1], - sample=False, - callback=callback if callback is not None else lambda k: None) - x_sample_det = self.decode_to_img(index_sample, quant_z.shape) - - # reconstruction - x_rec = self.decode_to_img(z_indices, quant_z.shape) - - log["inputs"] = x - log["reconstructions"] = x_rec - - if self.cond_stage_key in ["objects_bbox", "objects_center_points"]: - figure_size = (x_rec.shape[2], x_rec.shape[3]) - dataset = 
kwargs["pl_module"].trainer.datamodule.datasets["validation"] - label_for_category_no = dataset.get_textual_label_for_category_no - plotter = dataset.conditional_builders[self.cond_stage_key].plot - log["conditioning"] = torch.zeros_like(log["reconstructions"]) - for i in range(quant_c.shape[0]): - log["conditioning"][i] = plotter(quant_c[i], label_for_category_no, figure_size) - log["conditioning_rec"] = log["conditioning"] - elif self.cond_stage_key != "image": - cond_rec = self.cond_stage_model.decode(quant_c) - if self.cond_stage_key == "segmentation": - # get image from segmentation mask - num_classes = cond_rec.shape[1] - - c = torch.argmax(c, dim=1, keepdim=True) - c = F.one_hot(c, num_classes=num_classes) - c = c.squeeze(1).permute(0, 3, 1, 2).float() - c = self.cond_stage_model.to_rgb(c) - - cond_rec = torch.argmax(cond_rec, dim=1, keepdim=True) - cond_rec = F.one_hot(cond_rec, num_classes=num_classes) - cond_rec = cond_rec.squeeze(1).permute(0, 3, 1, 2).float() - cond_rec = self.cond_stage_model.to_rgb(cond_rec) - log["conditioning_rec"] = cond_rec - log["conditioning"] = c - - log["samples_half"] = x_sample - log["samples_nopix"] = x_sample_nopix - log["samples_det"] = x_sample_det - return log - - def get_input(self, key, batch): - x = batch[key] - if len(x.shape) == 3: - x = x[..., None] - if len(x.shape) == 4: - x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format) - if x.dtype == torch.double: - x = x.float() - return x - - def get_xc(self, batch, N=None): - x = self.get_input(self.first_stage_key, batch) - c = self.get_input(self.cond_stage_key, batch) - if N is not None: - x = x[:N] - c = c[:N] - return x, c - - def shared_step(self, batch, batch_idx): - x, c = self.get_xc(batch) - logits, target = self(x, c) - loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), target.reshape(-1)) - return loss - - def training_step(self, batch, batch_idx): - loss = self.shared_step(batch, batch_idx) - self.log("train/loss", loss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - return loss - - def validation_step(self, batch, batch_idx): - loss = self.shared_step(batch, batch_idx) - self.log("val/loss", loss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - return loss - - def configure_optimizers(self): - """ - Following minGPT: - This long function is unfortunately doing something very simple and is being very defensive: - We are separating out all parameters of the model into two buckets: those that will experience - weight decay for regularization and those that won't (biases, and layernorm/embedding weights). - We are then returning the PyTorch optimizer object. 
- """ - # separate out all parameters to those that will and won't experience regularizing weight decay - decay = set() - no_decay = set() - whitelist_weight_modules = (torch.nn.Linear, ) - blacklist_weight_modules = (torch.nn.LayerNorm, torch.nn.Embedding) - for mn, m in self.transformer.named_modules(): - for pn, p in m.named_parameters(): - fpn = '%s.%s' % (mn, pn) if mn else pn # full param name - - if pn.endswith('bias'): - # all biases will not be decayed - no_decay.add(fpn) - elif pn.endswith('weight') and isinstance(m, whitelist_weight_modules): - # weights of whitelist modules will be weight decayed - decay.add(fpn) - elif pn.endswith('weight') and isinstance(m, blacklist_weight_modules): - # weights of blacklist modules will NOT be weight decayed - no_decay.add(fpn) - - # special case the position embedding parameter in the root GPT module as not decayed - no_decay.add('pos_emb') - - # validate that we considered every parameter - param_dict = {pn: p for pn, p in self.transformer.named_parameters()} - inter_params = decay & no_decay - union_params = decay | no_decay - assert len(inter_params) == 0, "parameters %s made it into both decay/no_decay sets!" % (str(inter_params), ) - assert len(param_dict.keys() - union_params) == 0, "parameters %s were not separated into either decay/no_decay set!" \ - % (str(param_dict.keys() - union_params), ) - - # create the pytorch optimizer object - optim_groups = [ - {"params": [param_dict[pn] for pn in sorted(list(decay))], "weight_decay": 0.01}, - {"params": [param_dict[pn] for pn in sorted(list(no_decay))], "weight_decay": 0.0}, - ] - optimizer = torch.optim.AdamW(optim_groups, lr=self.learning_rate, betas=(0.9, 0.95)) - return optimizer diff --git a/spaces/RMXK/RVC_HFF/slicer2.py b/spaces/RMXK/RVC_HFF/slicer2.py deleted file mode 100644 index 5b29ee262aa54045e807be2cffeb41687499ba58..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/slicer2.py +++ /dev/null @@ -1,260 +0,0 @@ -import numpy as np - - -# This function is obtained from librosa. 
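-# It computes frame-wise RMS energy: the signal is centre-padded, split into overlapping
-# frames of `frame_length` samples via a strided view, subsampled every `hop_length`
-# samples, and the square root of the mean squared amplitude is returned per frame.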
-def get_rms( - y, - frame_length=2048, - hop_length=512, - pad_mode="constant", -): - padding = (int(frame_length // 2), int(frame_length // 2)) - y = np.pad(y, padding, mode=pad_mode) - - axis = -1 - # put our new within-frame axis at the end for now - out_strides = y.strides + tuple([y.strides[axis]]) - # Reduce the shape on the framing axis - x_shape_trimmed = list(y.shape) - x_shape_trimmed[axis] -= frame_length - 1 - out_shape = tuple(x_shape_trimmed) + tuple([frame_length]) - xw = np.lib.stride_tricks.as_strided(y, shape=out_shape, strides=out_strides) - if axis < 0: - target_axis = axis - 1 - else: - target_axis = axis + 1 - xw = np.moveaxis(xw, -1, target_axis) - # Downsample along the target axis - slices = [slice(None)] * xw.ndim - slices[axis] = slice(0, None, hop_length) - x = xw[tuple(slices)] - - # Calculate power - power = np.mean(np.abs(x) ** 2, axis=-2, keepdims=True) - - return np.sqrt(power) - - -class Slicer: - def __init__( - self, - sr: int, - threshold: float = -40.0, - min_length: int = 5000, - min_interval: int = 300, - hop_size: int = 20, - max_sil_kept: int = 5000, - ): - if not min_length >= min_interval >= hop_size: - raise ValueError( - "The following condition must be satisfied: min_length >= min_interval >= hop_size" - ) - if not max_sil_kept >= hop_size: - raise ValueError( - "The following condition must be satisfied: max_sil_kept >= hop_size" - ) - min_interval = sr * min_interval / 1000 - self.threshold = 10 ** (threshold / 20.0) - self.hop_size = round(sr * hop_size / 1000) - self.win_size = min(round(min_interval), 4 * self.hop_size) - self.min_length = round(sr * min_length / 1000 / self.hop_size) - self.min_interval = round(min_interval / self.hop_size) - self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size) - - def _apply_slice(self, waveform, begin, end): - if len(waveform.shape) > 1: - return waveform[ - :, begin * self.hop_size : min(waveform.shape[1], end * self.hop_size) - ] - else: - return waveform[ - begin * self.hop_size : min(waveform.shape[0], end * self.hop_size) - ] - - # @timeit - def slice(self, waveform): - if len(waveform.shape) > 1: - samples = waveform.mean(axis=0) - else: - samples = waveform - if samples.shape[0] <= self.min_length: - return [waveform] - rms_list = get_rms( - y=samples, frame_length=self.win_size, hop_length=self.hop_size - ).squeeze(0) - sil_tags = [] - silence_start = None - clip_start = 0 - for i, rms in enumerate(rms_list): - # Keep looping while frame is silent. - if rms < self.threshold: - # Record start of silent frames. - if silence_start is None: - silence_start = i - continue - # Keep looping while frame is not silent and silence start has not been recorded. - if silence_start is None: - continue - # Clear recorded silence start if interval is not enough or clip is too short - is_leading_silence = silence_start == 0 and i > self.max_sil_kept - need_slice_middle = ( - i - silence_start >= self.min_interval - and i - clip_start >= self.min_length - ) - if not is_leading_silence and not need_slice_middle: - silence_start = None - continue - # Need slicing. Record the range of silent frames to be removed. 
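-            # Three cases follow: silence no longer than max_sil_kept frames is cut at its
-            # single quietest frame; silence up to twice that keeps at most max_sil_kept
-            # frames on each side and cuts at the quietest frames near both edges; anything
-            # longer keeps max_sil_kept frames on each side and drops the middle entirely.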
- if i - silence_start <= self.max_sil_kept: - pos = rms_list[silence_start : i + 1].argmin() + silence_start - if silence_start == 0: - sil_tags.append((0, pos)) - else: - sil_tags.append((pos, pos)) - clip_start = pos - elif i - silence_start <= self.max_sil_kept * 2: - pos = rms_list[ - i - self.max_sil_kept : silence_start + self.max_sil_kept + 1 - ].argmin() - pos += i - self.max_sil_kept - pos_l = ( - rms_list[ - silence_start : silence_start + self.max_sil_kept + 1 - ].argmin() - + silence_start - ) - pos_r = ( - rms_list[i - self.max_sil_kept : i + 1].argmin() - + i - - self.max_sil_kept - ) - if silence_start == 0: - sil_tags.append((0, pos_r)) - clip_start = pos_r - else: - sil_tags.append((min(pos_l, pos), max(pos_r, pos))) - clip_start = max(pos_r, pos) - else: - pos_l = ( - rms_list[ - silence_start : silence_start + self.max_sil_kept + 1 - ].argmin() - + silence_start - ) - pos_r = ( - rms_list[i - self.max_sil_kept : i + 1].argmin() - + i - - self.max_sil_kept - ) - if silence_start == 0: - sil_tags.append((0, pos_r)) - else: - sil_tags.append((pos_l, pos_r)) - clip_start = pos_r - silence_start = None - # Deal with trailing silence. - total_frames = rms_list.shape[0] - if ( - silence_start is not None - and total_frames - silence_start >= self.min_interval - ): - silence_end = min(total_frames, silence_start + self.max_sil_kept) - pos = rms_list[silence_start : silence_end + 1].argmin() + silence_start - sil_tags.append((pos, total_frames + 1)) - # Apply and return slices. - if len(sil_tags) == 0: - return [waveform] - else: - chunks = [] - if sil_tags[0][0] > 0: - chunks.append(self._apply_slice(waveform, 0, sil_tags[0][0])) - for i in range(len(sil_tags) - 1): - chunks.append( - self._apply_slice(waveform, sil_tags[i][1], sil_tags[i + 1][0]) - ) - if sil_tags[-1][1] < total_frames: - chunks.append( - self._apply_slice(waveform, sil_tags[-1][1], total_frames) - ) - return chunks - - -def main(): - import os.path - from argparse import ArgumentParser - - import librosa - import soundfile - - parser = ArgumentParser() - parser.add_argument("audio", type=str, help="The audio to be sliced") - parser.add_argument( - "--out", type=str, help="Output directory of the sliced audio clips" - ) - parser.add_argument( - "--db_thresh", - type=float, - required=False, - default=-40, - help="The dB threshold for silence detection", - ) - parser.add_argument( - "--min_length", - type=int, - required=False, - default=5000, - help="The minimum milliseconds required for each sliced audio clip", - ) - parser.add_argument( - "--min_interval", - type=int, - required=False, - default=300, - help="The minimum milliseconds for a silence part to be sliced", - ) - parser.add_argument( - "--hop_size", - type=int, - required=False, - default=10, - help="Frame length in milliseconds", - ) - parser.add_argument( - "--max_sil_kept", - type=int, - required=False, - default=500, - help="The maximum silence length kept around the sliced clip, presented in milliseconds", - ) - args = parser.parse_args() - out = args.out - if out is None: - out = os.path.dirname(os.path.abspath(args.audio)) - audio, sr = librosa.load(args.audio, sr=None, mono=False) - slicer = Slicer( - sr=sr, - threshold=args.db_thresh, - min_length=args.min_length, - min_interval=args.min_interval, - hop_size=args.hop_size, - max_sil_kept=args.max_sil_kept, - ) - chunks = slicer.slice(audio) - if not os.path.exists(out): - os.makedirs(out) - for i, chunk in enumerate(chunks): - if len(chunk.shape) > 1: - chunk = chunk.T - soundfile.write( - 
os.path.join( - out, - f"%s_%d.wav" - % (os.path.basename(args.audio).rsplit(".", maxsplit=1)[0], i), - ), - chunk, - sr, - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/idna/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/idna/__init__.py deleted file mode 100644 index a40eeafcc914108ca79c5d83d6e81da1b29c6e80..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/idna/__init__.py +++ /dev/null @@ -1,44 +0,0 @@ -from .package_data import __version__ -from .core import ( - IDNABidiError, - IDNAError, - InvalidCodepoint, - InvalidCodepointContext, - alabel, - check_bidi, - check_hyphen_ok, - check_initial_combiner, - check_label, - check_nfc, - decode, - encode, - ulabel, - uts46_remap, - valid_contextj, - valid_contexto, - valid_label_length, - valid_string_length, -) -from .intranges import intranges_contain - -__all__ = [ - "IDNABidiError", - "IDNAError", - "InvalidCodepoint", - "InvalidCodepointContext", - "alabel", - "check_bidi", - "check_hyphen_ok", - "check_initial_combiner", - "check_label", - "check_nfc", - "decode", - "encode", - "intranges_contain", - "ulabel", - "uts46_remap", - "valid_contextj", - "valid_contexto", - "valid_label_length", - "valid_string_length", -] diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/requirements.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/requirements.py deleted file mode 100644 index 6af14ec4ce49e633d030611c26f0bd9beaf13e6a..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/requirements.py +++ /dev/null @@ -1,146 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import re -import string -import urllib.parse -from typing import List, Optional as TOptional, Set - -from pkg_resources.extern.pyparsing import ( # noqa - Combine, - Literal as L, - Optional, - ParseException, - Regex, - Word, - ZeroOrMore, - originalTextFor, - stringEnd, - stringStart, -) - -from .markers import MARKER_EXPR, Marker -from .specifiers import LegacySpecifier, Specifier, SpecifierSet - - -class InvalidRequirement(ValueError): - """ - An invalid requirement was found, users should refer to PEP 508. 
- """ - - -ALPHANUM = Word(string.ascii_letters + string.digits) - -LBRACKET = L("[").suppress() -RBRACKET = L("]").suppress() -LPAREN = L("(").suppress() -RPAREN = L(")").suppress() -COMMA = L(",").suppress() -SEMICOLON = L(";").suppress() -AT = L("@").suppress() - -PUNCTUATION = Word("-_.") -IDENTIFIER_END = ALPHANUM | (ZeroOrMore(PUNCTUATION) + ALPHANUM) -IDENTIFIER = Combine(ALPHANUM + ZeroOrMore(IDENTIFIER_END)) - -NAME = IDENTIFIER("name") -EXTRA = IDENTIFIER - -URI = Regex(r"[^ ]+")("url") -URL = AT + URI - -EXTRAS_LIST = EXTRA + ZeroOrMore(COMMA + EXTRA) -EXTRAS = (LBRACKET + Optional(EXTRAS_LIST) + RBRACKET)("extras") - -VERSION_PEP440 = Regex(Specifier._regex_str, re.VERBOSE | re.IGNORECASE) -VERSION_LEGACY = Regex(LegacySpecifier._regex_str, re.VERBOSE | re.IGNORECASE) - -VERSION_ONE = VERSION_PEP440 ^ VERSION_LEGACY -VERSION_MANY = Combine( - VERSION_ONE + ZeroOrMore(COMMA + VERSION_ONE), joinString=",", adjacent=False -)("_raw_spec") -_VERSION_SPEC = Optional((LPAREN + VERSION_MANY + RPAREN) | VERSION_MANY) -_VERSION_SPEC.setParseAction(lambda s, l, t: t._raw_spec or "") - -VERSION_SPEC = originalTextFor(_VERSION_SPEC)("specifier") -VERSION_SPEC.setParseAction(lambda s, l, t: t[1]) - -MARKER_EXPR = originalTextFor(MARKER_EXPR())("marker") -MARKER_EXPR.setParseAction( - lambda s, l, t: Marker(s[t._original_start : t._original_end]) -) -MARKER_SEPARATOR = SEMICOLON -MARKER = MARKER_SEPARATOR + MARKER_EXPR - -VERSION_AND_MARKER = VERSION_SPEC + Optional(MARKER) -URL_AND_MARKER = URL + Optional(MARKER) - -NAMED_REQUIREMENT = NAME + Optional(EXTRAS) + (URL_AND_MARKER | VERSION_AND_MARKER) - -REQUIREMENT = stringStart + NAMED_REQUIREMENT + stringEnd -# pkg_resources.extern.pyparsing isn't thread safe during initialization, so we do it eagerly, see -# issue #104 -REQUIREMENT.parseString("x[]") - - -class Requirement: - """Parse a requirement. - - Parse a given requirement string into its parts, such as name, specifier, - URL, and extras. Raises InvalidRequirement on a badly-formed requirement - string. - """ - - # TODO: Can we test whether something is contained within a requirement? - # If so how do we do that? Do we need to test against the _name_ of - # the thing as well as the version? What about the markers? - # TODO: Can we normalize the name and extra name? 
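-    # Illustrative behaviour (roughly): Requirement('requests[security]>=2.8.1; python_version < "3.11"')
-    # exposes name 'requests', extras {'security'}, a SpecifierSet for '>=2.8.1', a Marker for the
-    # environment condition, and url=None.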
- - def __init__(self, requirement_string: str) -> None: - try: - req = REQUIREMENT.parseString(requirement_string) - except ParseException as e: - raise InvalidRequirement( - f'Parse error at "{ requirement_string[e.loc : e.loc + 8]!r}": {e.msg}' - ) - - self.name: str = req.name - if req.url: - parsed_url = urllib.parse.urlparse(req.url) - if parsed_url.scheme == "file": - if urllib.parse.urlunparse(parsed_url) != req.url: - raise InvalidRequirement("Invalid URL given") - elif not (parsed_url.scheme and parsed_url.netloc) or ( - not parsed_url.scheme and not parsed_url.netloc - ): - raise InvalidRequirement(f"Invalid URL: {req.url}") - self.url: TOptional[str] = req.url - else: - self.url = None - self.extras: Set[str] = set(req.extras.asList() if req.extras else []) - self.specifier: SpecifierSet = SpecifierSet(req.specifier) - self.marker: TOptional[Marker] = req.marker if req.marker else None - - def __str__(self) -> str: - parts: List[str] = [self.name] - - if self.extras: - formatted_extras = ",".join(sorted(self.extras)) - parts.append(f"[{formatted_extras}]") - - if self.specifier: - parts.append(str(self.specifier)) - - if self.url: - parts.append(f"@ {self.url}") - if self.marker: - parts.append(" ") - - if self.marker: - parts.append(f"; {self.marker}") - - return "".join(parts) - - def __repr__(self) -> str: - return f"" diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/_meta.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/_meta.py deleted file mode 100644 index 37ee43e6ef447dfb4ae68f5f6c35597d12fdc5a1..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/importlib_metadata/_meta.py +++ /dev/null @@ -1,48 +0,0 @@ -from ._compat import Protocol -from typing import Any, Dict, Iterator, List, TypeVar, Union - - -_T = TypeVar("_T") - - -class PackageMetadata(Protocol): - def __len__(self) -> int: - ... # pragma: no cover - - def __contains__(self, item: str) -> bool: - ... # pragma: no cover - - def __getitem__(self, key: str) -> str: - ... # pragma: no cover - - def __iter__(self) -> Iterator[str]: - ... # pragma: no cover - - def get_all(self, name: str, failobj: _T = ...) -> Union[List[Any], _T]: - """ - Return all values associated with a possibly multi-valued key. - """ - - @property - def json(self) -> Dict[str, Union[str, List[str]]]: - """ - A JSON-compatible form of the metadata. - """ - - -class SimplePath(Protocol): - """ - A minimal subset of pathlib.Path required by PathDistribution. - """ - - def joinpath(self) -> 'SimplePath': - ... # pragma: no cover - - def __truediv__(self) -> 'SimplePath': - ... # pragma: no cover - - def parent(self) -> 'SimplePath': - ... # pragma: no cover - - def read_text(self) -> str: - ... # pragma: no cover diff --git a/spaces/Realcat/image-matching-webui/third_party/r2d2/nets/patchnet.py b/spaces/Realcat/image-matching-webui/third_party/r2d2/nets/patchnet.py deleted file mode 100644 index 8ed3fdbd55ccbbd58f0cea3dad9384a402ec5e9d..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/r2d2/nets/patchnet.py +++ /dev/null @@ -1,208 +0,0 @@ -# Copyright 2019-present NAVER Corp. 
-# CC BY-NC-SA 3.0 -# Available only for non-commercial use - -import pdb -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class BaseNet(nn.Module): - """Takes a list of images as input, and returns for each image: - - a pixelwise descriptor - - a pixelwise confidence - """ - - def softmax(self, ux): - if ux.shape[1] == 1: - x = F.softplus(ux) - return x / (1 + x) # for sure in [0,1], much less plateaus than softmax - elif ux.shape[1] == 2: - return F.softmax(ux, dim=1)[:, 1:2] - - def normalize(self, x, ureliability, urepeatability): - return dict( - descriptors=F.normalize(x, p=2, dim=1), - repeatability=self.softmax(urepeatability), - reliability=self.softmax(ureliability), - ) - - def forward_one(self, x): - raise NotImplementedError() - - def forward(self, imgs, **kw): - res = [self.forward_one(img) for img in imgs] - # merge all dictionaries into one - res = {k: [r[k] for r in res if k in r] for k in {k for r in res for k in r}} - return dict(res, imgs=imgs, **kw) - - -class PatchNet(BaseNet): - """Helper class to construct a fully-convolutional network that - extract a l2-normalized patch descriptor. - """ - - def __init__(self, inchan=3, dilated=True, dilation=1, bn=True, bn_affine=False): - BaseNet.__init__(self) - self.inchan = inchan - self.curchan = inchan - self.dilated = dilated - self.dilation = dilation - self.bn = bn - self.bn_affine = bn_affine - self.ops = nn.ModuleList([]) - - def _make_bn(self, outd): - return nn.BatchNorm2d(outd, affine=self.bn_affine) - - def _add_conv( - self, - outd, - k=3, - stride=1, - dilation=1, - bn=True, - relu=True, - k_pool=1, - pool_type="max", - ): - # as in the original implementation, dilation is applied at the end of layer, so it will have impact only from next layer - d = self.dilation * dilation - if self.dilated: - conv_params = dict(padding=((k - 1) * d) // 2, dilation=d, stride=1) - self.dilation *= stride - else: - conv_params = dict(padding=((k - 1) * d) // 2, dilation=d, stride=stride) - self.ops.append(nn.Conv2d(self.curchan, outd, kernel_size=k, **conv_params)) - if bn and self.bn: - self.ops.append(self._make_bn(outd)) - if relu: - self.ops.append(nn.ReLU(inplace=True)) - self.curchan = outd - - if k_pool > 1: - if pool_type == "avg": - self.ops.append(torch.nn.AvgPool2d(kernel_size=k_pool)) - elif pool_type == "max": - self.ops.append(torch.nn.MaxPool2d(kernel_size=k_pool)) - else: - print(f"Error, unknown pooling type {pool_type}...") - - def forward_one(self, x): - assert self.ops, "You need to add convolutions first" - for n, op in enumerate(self.ops): - x = op(x) - return self.normalize(x) - - -class L2_Net(PatchNet): - """Compute a 128D descriptor for all overlapping 32x32 patches. - From the L2Net paper (CVPR'17). 
- """ - - def __init__(self, dim=128, **kw): - PatchNet.__init__(self, **kw) - add_conv = lambda n, **kw: self._add_conv((n * dim) // 128, **kw) - add_conv(32) - add_conv(32) - add_conv(64, stride=2) - add_conv(64) - add_conv(128, stride=2) - add_conv(128) - add_conv(128, k=7, stride=8, bn=False, relu=False) - self.out_dim = dim - - -class Quad_L2Net(PatchNet): - """Same than L2_Net, but replace the final 8x8 conv by 3 successive 2x2 convs.""" - - def __init__(self, dim=128, mchan=4, relu22=False, **kw): - PatchNet.__init__(self, **kw) - self._add_conv(8 * mchan) - self._add_conv(8 * mchan) - self._add_conv(16 * mchan, stride=2) - self._add_conv(16 * mchan) - self._add_conv(32 * mchan, stride=2) - self._add_conv(32 * mchan) - # replace last 8x8 convolution with 3 2x2 convolutions - self._add_conv(32 * mchan, k=2, stride=2, relu=relu22) - self._add_conv(32 * mchan, k=2, stride=2, relu=relu22) - self._add_conv(dim, k=2, stride=2, bn=False, relu=False) - self.out_dim = dim - - -class Quad_L2Net_ConfCFS(Quad_L2Net): - """Same than Quad_L2Net, with 2 confidence maps for repeatability and reliability.""" - - def __init__(self, **kw): - Quad_L2Net.__init__(self, **kw) - # reliability classifier - self.clf = nn.Conv2d(self.out_dim, 2, kernel_size=1) - # repeatability classifier: for some reasons it's a softplus, not a softmax! - # Why? I guess it's a mistake that was left unnoticed in the code for a long time... - self.sal = nn.Conv2d(self.out_dim, 1, kernel_size=1) - - def forward_one(self, x): - assert self.ops, "You need to add convolutions first" - for op in self.ops: - x = op(x) - # compute the confidence maps - ureliability = self.clf(x**2) - urepeatability = self.sal(x**2) - return self.normalize(x, ureliability, urepeatability) - - -class Fast_Quad_L2Net(PatchNet): - """Faster version of Quad l2 net, replacing one dilated conv with one pooling to diminish image resolution thus increase inference time - Dilation factors and pooling: - 1,1,1, pool2, 1,1, 2,2, 4, 8, upsample2 - """ - - def __init__(self, dim=128, mchan=4, relu22=False, downsample_factor=2, **kw): - - PatchNet.__init__(self, **kw) - self._add_conv(8 * mchan) - self._add_conv(8 * mchan) - self._add_conv( - 16 * mchan, k_pool=downsample_factor - ) # added avg pooling to decrease img resolution - self._add_conv(16 * mchan) - self._add_conv(32 * mchan, stride=2) - self._add_conv(32 * mchan) - - # replace last 8x8 convolution with 3 2x2 convolutions - self._add_conv(32 * mchan, k=2, stride=2, relu=relu22) - self._add_conv(32 * mchan, k=2, stride=2, relu=relu22) - self._add_conv(dim, k=2, stride=2, bn=False, relu=False) - - # Go back to initial image resolution with upsampling - self.ops.append( - torch.nn.Upsample( - scale_factor=downsample_factor, mode="bilinear", align_corners=False - ) - ) - - self.out_dim = dim - - -class Fast_Quad_L2Net_ConfCFS(Fast_Quad_L2Net): - """Fast r2d2 architecture""" - - def __init__(self, **kw): - Fast_Quad_L2Net.__init__(self, **kw) - # reliability classifier - self.clf = nn.Conv2d(self.out_dim, 2, kernel_size=1) - - # repeatability classifier: for some reasons it's a softplus, not a softmax! - # Why? I guess it's a mistake that was left unnoticed in the code for a long time... 
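-        # (BaseNet.softmax sends this single-channel head through softplus/(1 + softplus),
-        # while the two-channel reliability head above goes through an actual softmax.)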
- self.sal = nn.Conv2d(self.out_dim, 1, kernel_size=1) - - def forward_one(self, x): - assert self.ops, "You need to add convolutions first" - for op in self.ops: - x = op(x) - # compute the confidence maps - ureliability = self.clf(x**2) - urepeatability = self.sal(x**2) - return self.normalize(x, ureliability, urepeatability) diff --git a/spaces/Reeve/Ohayou_Face/training/dataset.py b/spaces/Reeve/Ohayou_Face/training/dataset.py deleted file mode 100644 index 18540c3c100004d637ca51740a179e690ce5f352..0000000000000000000000000000000000000000 --- a/spaces/Reeve/Ohayou_Face/training/dataset.py +++ /dev/null @@ -1,248 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import numpy as np -import zipfile -import PIL.Image -import json -import torch -import dnnlib - -try: - import pyspng -except ImportError: - pyspng = None - -#---------------------------------------------------------------------------- - -class Dataset(torch.utils.data.Dataset): - def __init__(self, - name, # Name of the dataset. - raw_shape, # Shape of the raw image data (NCHW). - max_size = None, # Artificially limit the size of the dataset. None = no limit. Applied before xflip. - use_labels = False, # Enable conditioning labels? False = label dimension is zero. - xflip = False, # Artificially double the size of the dataset via x-flips. Applied after max_size. - yflip = False, # Apply mirror augment vertically? - random_seed = 0, # Random seed to use when applying max_size. - ): - self._name = name - self._raw_shape = list(raw_shape) - self._use_labels = use_labels - self._raw_labels = None - self._label_shape = None - - # Apply max_size. - self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64) - if (max_size is not None) and (self._raw_idx.size > max_size): - np.random.RandomState(random_seed).shuffle(self._raw_idx) - self._raw_idx = np.sort(self._raw_idx[:max_size]) - - # Apply xflip. - self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8) - if xflip: - self._raw_idx = np.tile(self._raw_idx, 2) - self._xflip = np.concatenate([self._xflip, np.ones_like(self._xflip)]) - - # Apply yflip. 
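-        # Mirrors the xflip block above: the surviving indices are doubled once more and
-        # _xflip is tiled alongside so the per-item bookkeeping arrays keep the same length.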
- self._yflip = np.zeros(self._raw_idx.size, dtype=np.uint8) - if yflip: - self._raw_idx = np.tile(self._raw_idx, 2) - self._yflip = np.concatenate([self._yflip, np.ones_like(self._yflip)]) - self._xflip = np.tile(self._xflip, 2) # double the indices for xflip, otherwise we get out of bounds - - def _get_raw_labels(self): - if self._raw_labels is None: - self._raw_labels = self._load_raw_labels() if self._use_labels else None - if self._raw_labels is None: - self._raw_labels = np.zeros([self._raw_shape[0], 0], dtype=np.float32) - assert isinstance(self._raw_labels, np.ndarray) - assert self._raw_labels.shape[0] == self._raw_shape[0] - assert self._raw_labels.dtype in [np.float32, np.int64] - if self._raw_labels.dtype == np.int64: - assert self._raw_labels.ndim == 1 - assert np.all(self._raw_labels >= 0) - return self._raw_labels - - def close(self): # to be overridden by subclass - pass - - def _load_raw_image(self, raw_idx): # to be overridden by subclass - raise NotImplementedError - - def _load_raw_labels(self): # to be overridden by subclass - raise NotImplementedError - - def __getstate__(self): - return dict(self.__dict__, _raw_labels=None) - - def __del__(self): - try: - self.close() - except: - pass - - def __len__(self): - return self._raw_idx.size - - def __getitem__(self, idx): - image = self._load_raw_image(self._raw_idx[idx]) - assert isinstance(image, np.ndarray) - assert list(image.shape) == self.image_shape - assert image.dtype == np.uint8 - if self._xflip[idx]: - assert image.ndim == 3 # CHW - image = image[:, :, ::-1] - if self._yflip[idx]: - assert image.ndim == 3 # CHW - image = image[:, ::-1, :] - return image.copy(), self.get_label(idx) - - def get_label(self, idx): - label = self._get_raw_labels()[self._raw_idx[idx]] - if label.dtype == np.int64: - onehot = np.zeros(self.label_shape, dtype=np.float32) - onehot[label] = 1 - label = onehot - return label.copy() - - def get_details(self, idx): - d = dnnlib.EasyDict() - d.raw_idx = int(self._raw_idx[idx]) - d.xflip = (int(self._xflip[idx]) != 0) - d.yflip = (int(self._yflip[idx]) != 0) - d.raw_label = self._get_raw_labels()[d.raw_idx].copy() - return d - - @property - def name(self): - return self._name - - @property - def image_shape(self): - return list(self._raw_shape[1:]) - - @property - def num_channels(self): - assert len(self.image_shape) == 3 # CHW - return self.image_shape[0] - - @property - def resolution(self): - assert len(self.image_shape) == 3 # CHW - assert self.image_shape[1] == self.image_shape[2] - return self.image_shape[1] - - @property - def label_shape(self): - if self._label_shape is None: - raw_labels = self._get_raw_labels() - if raw_labels.dtype == np.int64: - self._label_shape = [int(np.max(raw_labels)) + 1] - else: - self._label_shape = raw_labels.shape[1:] - return list(self._label_shape) - - @property - def label_dim(self): - assert len(self.label_shape) == 1 - return self.label_shape[0] - - @property - def has_labels(self): - return any(x != 0 for x in self.label_shape) - - @property - def has_onehot_labels(self): - return self._get_raw_labels().dtype == np.int64 - -#---------------------------------------------------------------------------- - -class ImageFolderDataset(Dataset): - def __init__(self, - path, # Path to directory or zip. - resolution = None, # Ensure specific resolution, None = highest available. - **super_kwargs, # Additional arguments for the Dataset base class. 
- ): - self._path = path - self._zipfile = None - - if os.path.isdir(self._path): - self._type = 'dir' - self._all_fnames = {os.path.relpath(os.path.join(root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files} - elif self._file_ext(self._path) == '.zip': - self._type = 'zip' - self._all_fnames = set(self._get_zipfile().namelist()) - else: - raise IOError('Path must point to a directory or zip') - - PIL.Image.init() - self._image_fnames = sorted(fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION) - if len(self._image_fnames) == 0: - raise IOError('No image files found in the specified path') - - name = os.path.splitext(os.path.basename(self._path))[0] - raw_shape = [len(self._image_fnames)] + list(self._load_raw_image(0).shape) - if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution): - raise IOError('Image files do not match the specified resolution') - super().__init__(name=name, raw_shape=raw_shape, **super_kwargs) - - @staticmethod - def _file_ext(fname): - return os.path.splitext(fname)[1].lower() - - def _get_zipfile(self): - assert self._type == 'zip' - if self._zipfile is None: - self._zipfile = zipfile.ZipFile(self._path) - return self._zipfile - - def _open_file(self, fname): - if self._type == 'dir': - return open(os.path.join(self._path, fname), 'rb') - if self._type == 'zip': - return self._get_zipfile().open(fname, 'r') - return None - - def close(self): - try: - if self._zipfile is not None: - self._zipfile.close() - finally: - self._zipfile = None - - def __getstate__(self): - return dict(super().__getstate__(), _zipfile=None) - - def _load_raw_image(self, raw_idx): - fname = self._image_fnames[raw_idx] - with self._open_file(fname) as f: - if pyspng is not None and self._file_ext(fname) == '.png': - image = pyspng.load(f.read()) - else: - image = np.array(PIL.Image.open(f)) - if image.ndim == 2: - image = image[:, :, np.newaxis] # HW => HWC - image = image.transpose(2, 0, 1) # HWC => CHW - return image - - def _load_raw_labels(self): - fname = 'dataset.json' - if fname not in self._all_fnames: - return None - with self._open_file(fname) as f: - labels = json.load(f)['labels'] - if labels is None: - return None - labels = dict(labels) - labels = [labels[fname.replace('\\', '/')] for fname in self._image_fnames] - labels = np.array(labels) - labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim]) - return labels - -#---------------------------------------------------------------------------- diff --git a/spaces/RenXXV/Test02/Dockerfile b/spaces/RenXXV/Test02/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/RenXXV/Test02/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/Reself/StableVideo/README.md b/spaces/Reself/StableVideo/README.md deleted file mode 100644 index ca34db94fe4eb39504a0cb527edf754b8f5cfda6..0000000000000000000000000000000000000000 --- a/spaces/Reself/StableVideo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: StableVideo -emoji: 😻 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.41.2 
-app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ritori/TTS_Yui/waveglow/distributed.py b/spaces/Ritori/TTS_Yui/waveglow/distributed.py deleted file mode 100644 index 19cbfd2cca72c065fb057a7de20d7ae4be9dce04..0000000000000000000000000000000000000000 --- a/spaces/Ritori/TTS_Yui/waveglow/distributed.py +++ /dev/null @@ -1,184 +0,0 @@ -# ***************************************************************************** -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are met: -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in the -# documentation and/or other materials provided with the distribution. -# * Neither the name of the NVIDIA CORPORATION nor the -# names of its contributors may be used to endorse or promote products -# derived from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY -# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND -# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -# -# ***************************************************************************** -import os -import sys -import time -import subprocess -import argparse - -import torch -import torch.distributed as dist -from torch.autograd import Variable - -def reduce_tensor(tensor, num_gpus): - rt = tensor.clone() - dist.all_reduce(rt, op=dist.reduce_op.SUM) - rt /= num_gpus - return rt - -def init_distributed(rank, num_gpus, group_name, dist_backend, dist_url): - assert torch.cuda.is_available(), "Distributed mode requires CUDA." - print("Initializing Distributed") - - # Set cuda device so everything is done on the right GPU. - torch.cuda.set_device(rank % torch.cuda.device_count()) - - # Initialize distributed communication - dist.init_process_group(dist_backend, init_method=dist_url, - world_size=num_gpus, rank=rank, - group_name=group_name) - -def _flatten_dense_tensors(tensors): - """Flatten dense tensors into a contiguous 1D buffer. Assume tensors are of - same dense type. - Since inputs are dense, the resulting tensor will be a concatenated 1D - buffer. Element-wise operation on this buffer will be equivalent to - operating individually. - Arguments: - tensors (Iterable[Tensor]): dense tensors to flatten. - Returns: - A contiguous 1D buffer containing input tensors. 
- """ - if len(tensors) == 1: - return tensors[0].contiguous().view(-1) - flat = torch.cat([t.contiguous().view(-1) for t in tensors], dim=0) - return flat - -def _unflatten_dense_tensors(flat, tensors): - """View a flat buffer using the sizes of tensors. Assume that tensors are of - same dense type, and that flat is given by _flatten_dense_tensors. - Arguments: - flat (Tensor): flattened dense tensors to unflatten. - tensors (Iterable[Tensor]): dense tensors whose sizes will be used to - unflatten flat. - Returns: - Unflattened dense tensors with sizes same as tensors and values from - flat. - """ - outputs = [] - offset = 0 - for tensor in tensors: - numel = tensor.numel() - outputs.append(flat.narrow(0, offset, numel).view_as(tensor)) - offset += numel - return tuple(outputs) - -def apply_gradient_allreduce(module): - """ - Modifies existing model to do gradient allreduce, but doesn't change class - so you don't need "module" - """ - if not hasattr(dist, '_backend'): - module.warn_on_half = True - else: - module.warn_on_half = True if dist._backend == dist.dist_backend.GLOO else False - - for p in module.state_dict().values(): - if not torch.is_tensor(p): - continue - dist.broadcast(p, 0) - - def allreduce_params(): - if(module.needs_reduction): - module.needs_reduction = False - buckets = {} - for param in module.parameters(): - if param.requires_grad and param.grad is not None: - tp = type(param.data) - if tp not in buckets: - buckets[tp] = [] - buckets[tp].append(param) - if module.warn_on_half: - if torch.cuda.HalfTensor in buckets: - print("WARNING: gloo dist backend for half parameters may be extremely slow." + - " It is recommended to use the NCCL backend in this case. This currently requires" + - "PyTorch built from top of tree master.") - module.warn_on_half = False - - for tp in buckets: - bucket = buckets[tp] - grads = [param.grad.data for param in bucket] - coalesced = _flatten_dense_tensors(grads) - dist.all_reduce(coalesced) - coalesced /= dist.get_world_size() - for buf, synced in zip(grads, _unflatten_dense_tensors(coalesced, grads)): - buf.copy_(synced) - - for param in list(module.parameters()): - def allreduce_hook(*unused): - Variable._execution_engine.queue_callback(allreduce_params) - if param.requires_grad: - param.register_hook(allreduce_hook) - dir(param) - - def set_needs_reduction(self, input, output): - self.needs_reduction = True - - module.register_forward_hook(set_needs_reduction) - return module - - -def main(config, stdout_dir, args_str): - args_list = ['train.py'] - args_list += args_str.split(' ') if len(args_str) > 0 else [] - - args_list.append('--config={}'.format(config)) - - num_gpus = torch.cuda.device_count() - args_list.append('--num_gpus={}'.format(num_gpus)) - args_list.append("--group_name=group_{}".format(time.strftime("%Y_%m_%d-%H%M%S"))) - - if not os.path.isdir(stdout_dir): - os.makedirs(stdout_dir) - os.chmod(stdout_dir, 0o775) - - workers = [] - - for i in range(num_gpus): - args_list[-2] = '--rank={}'.format(i) - stdout = None if i == 0 else open( - os.path.join(stdout_dir, "GPU_{}.log".format(i)), "w") - print(args_list) - p = subprocess.Popen([str(sys.executable)]+args_list, stdout=stdout) - workers.append(p) - - for p in workers: - p.wait() - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, required=True, - help='JSON file for configuration') - parser.add_argument('-s', '--stdout_dir', type=str, default=".", - help='directory to save stoud logs') - 
parser.add_argument( - '-a', '--args_str', type=str, default='', - help='double quoted string with space separated key value pairs') - - args = parser.parse_args() - main(args.config, args.stdout_dir, args.args_str) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/ga_retina_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/ga_retina_head.py deleted file mode 100644 index 8822d1ca78ee2fa2f304a0649e81274830383533..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/ga_retina_head.py +++ /dev/null @@ -1,109 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init -from mmcv.ops import MaskedConv2d - -from ..builder import HEADS -from .guided_anchor_head import FeatureAdaption, GuidedAnchorHead - - -@HEADS.register_module() -class GARetinaHead(GuidedAnchorHead): - """Guided-Anchor-based RetinaNet head.""" - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=None, - **kwargs): - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(GARetinaHead, self).__init__(num_classes, in_channels, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - - self.conv_loc = nn.Conv2d(self.feat_channels, 1, 1) - self.conv_shape = nn.Conv2d(self.feat_channels, self.num_anchors * 2, - 1) - self.feature_adaption_cls = FeatureAdaption( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.feature_adaption_reg = FeatureAdaption( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.retina_cls = MaskedConv2d( - self.feat_channels, - self.num_anchors * self.cls_out_channels, - 3, - padding=1) - self.retina_reg = MaskedConv2d( - self.feat_channels, self.num_anchors * 4, 3, padding=1) - - def init_weights(self): - """Initialize weights of the layer.""" - for m in self.cls_convs: - normal_init(m.conv, std=0.01) - for m in self.reg_convs: - normal_init(m.conv, std=0.01) - - self.feature_adaption_cls.init_weights() - self.feature_adaption_reg.init_weights() - - bias_cls = bias_init_with_prob(0.01) - normal_init(self.conv_loc, std=0.01, bias=bias_cls) - normal_init(self.conv_shape, std=0.01) - normal_init(self.retina_cls, std=0.01, bias=bias_cls) - normal_init(self.retina_reg, std=0.01) - - def forward_single(self, x): - """Forward feature map of a single scale level.""" - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - - loc_pred = self.conv_loc(cls_feat) - shape_pred = self.conv_shape(reg_feat) - - cls_feat = self.feature_adaption_cls(cls_feat, shape_pred) - reg_feat = self.feature_adaption_reg(reg_feat, shape_pred) - - if not self.training: - mask = loc_pred.sigmoid()[0] >= self.loc_filter_thr - 
else: - mask = None - cls_score = self.retina_cls(cls_feat, mask) - bbox_pred = self.retina_reg(reg_feat, mask) - return cls_score, bbox_pred, shape_pred, loc_pred diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/ema_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/ema_head.py deleted file mode 100644 index 12267cb40569d2b5a4a2955a6dc2671377ff5e0a..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/ema_head.py +++ /dev/null @@ -1,168 +0,0 @@ -import math - -import torch -import torch.distributed as dist -import torch.nn as nn -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule - -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -def reduce_mean(tensor): - """Reduce mean when distributed training.""" - if not (dist.is_available() and dist.is_initialized()): - return tensor - tensor = tensor.clone() - dist.all_reduce(tensor.div_(dist.get_world_size()), op=dist.ReduceOp.SUM) - return tensor - - -class EMAModule(nn.Module): - """Expectation Maximization Attention Module used in EMANet. - - Args: - channels (int): Channels of the whole module. - num_bases (int): Number of bases. - num_stages (int): Number of the EM iterations. - """ - - def __init__(self, channels, num_bases, num_stages, momentum): - super(EMAModule, self).__init__() - assert num_stages >= 1, 'num_stages must be at least 1!' - self.num_bases = num_bases - self.num_stages = num_stages - self.momentum = momentum - - bases = torch.zeros(1, channels, self.num_bases) - bases.normal_(0, math.sqrt(2. / self.num_bases)) - # [1, channels, num_bases] - bases = F.normalize(bases, dim=1, p=2) - self.register_buffer('bases', bases) - - def forward(self, feats): - """Forward function.""" - batch_size, channels, height, width = feats.size() - # [batch_size, channels, height*width] - feats = feats.view(batch_size, channels, height * width) - # [batch_size, channels, num_bases] - bases = self.bases.repeat(batch_size, 1, 1) - - with torch.no_grad(): - for i in range(self.num_stages): - # [batch_size, height*width, num_bases] - attention = torch.einsum('bcn,bck->bnk', feats, bases) - attention = F.softmax(attention, dim=2) - # l1 norm - attention_normed = F.normalize(attention, dim=1, p=1) - # [batch_size, channels, num_bases] - bases = torch.einsum('bcn,bnk->bck', feats, attention_normed) - # l2 norm - bases = F.normalize(bases, dim=1, p=2) - - feats_recon = torch.einsum('bck,bnk->bcn', bases, attention) - feats_recon = feats_recon.view(batch_size, channels, height, width) - - if self.training: - bases = bases.mean(dim=0, keepdim=True) - bases = reduce_mean(bases) - # l2 norm - bases = F.normalize(bases, dim=1, p=2) - self.bases = (1 - - self.momentum) * self.bases + self.momentum * bases - - return feats_recon - - -@HEADS.register_module() -class EMAHead(BaseDecodeHead): - """Expectation Maximization Attention Networks for Semantic Segmentation. - - This head is the implementation of `EMANet - `_. - - Args: - ema_channels (int): EMA module channels - num_bases (int): Number of bases. - num_stages (int): Number of the EM iterations. - concat_input (bool): Whether concat the input and output of convs - before classification layer. Default: True - momentum (float): Momentum to update the base. Default: 0.1. 
- """ - - def __init__(self, - ema_channels, - num_bases, - num_stages, - concat_input=True, - momentum=0.1, - **kwargs): - super(EMAHead, self).__init__(**kwargs) - self.ema_channels = ema_channels - self.num_bases = num_bases - self.num_stages = num_stages - self.concat_input = concat_input - self.momentum = momentum - self.ema_module = EMAModule(self.ema_channels, self.num_bases, - self.num_stages, self.momentum) - - self.ema_in_conv = ConvModule( - self.in_channels, - self.ema_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - # project (0, inf) -> (-inf, inf) - self.ema_mid_conv = ConvModule( - self.ema_channels, - self.ema_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=None, - act_cfg=None) - for param in self.ema_mid_conv.parameters(): - param.requires_grad = False - - self.ema_out_conv = ConvModule( - self.ema_channels, - self.ema_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=None) - self.bottleneck = ConvModule( - self.ema_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if self.concat_input: - self.conv_cat = ConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - feats = self.ema_in_conv(x) - identity = feats - feats = self.ema_mid_conv(feats) - recon = self.ema_module(feats) - recon = F.relu(recon, inplace=True) - recon = self.ema_out_conv(recon) - output = F.relu(identity + recon, inplace=True) - output = self.bottleneck(output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/psp_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/psp_head.py deleted file mode 100644 index b5f1e71c70c3a20f4007c263ec471a87bb214a48..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/decode_heads/psp_head.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class PPM(nn.ModuleList): - """Pooling Pyramid Module used in PSPNet. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. - align_corners (bool): align_corners argument of F.interpolate. 
- """ - - def __init__(self, pool_scales, in_channels, channels, conv_cfg, norm_cfg, - act_cfg, align_corners): - super(PPM, self).__init__() - self.pool_scales = pool_scales - self.align_corners = align_corners - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - for pool_scale in pool_scales: - self.append( - nn.Sequential( - nn.AdaptiveAvgPool2d(pool_scale), - ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg))) - - def forward(self, x): - """Forward function.""" - ppm_outs = [] - for ppm in self: - ppm_out = ppm(x) - upsampled_ppm_out = resize( - ppm_out, - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ppm_outs.append(upsampled_ppm_out) - return ppm_outs - - -@HEADS.register_module() -class PSPHead(BaseDecodeHead): - """Pyramid Scene Parsing Network. - - This head is the implementation of - `PSPNet `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. Default: (1, 2, 3, 6). - """ - - def __init__(self, pool_scales=(1, 2, 3, 6), **kwargs): - super(PSPHead, self).__init__(**kwargs) - assert isinstance(pool_scales, (list, tuple)) - self.pool_scales = pool_scales - self.psp_modules = PPM( - self.pool_scales, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - self.bottleneck = ConvModule( - self.in_channels + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - psp_outs = [x] - psp_outs.extend(self.psp_modules(x)) - psp_outs = torch.cat(psp_outs, dim=1) - output = self.bottleneck(psp_outs) - output = self.cls_seg(output) - return output diff --git a/spaces/SIGGRAPH2022/Text2Human/Text2Human/train_parsing_gen.py b/spaces/SIGGRAPH2022/Text2Human/Text2Human/train_parsing_gen.py deleted file mode 100644 index e032b002b1252e289266d048f6326e02b36a023c..0000000000000000000000000000000000000000 --- a/spaces/SIGGRAPH2022/Text2Human/Text2Human/train_parsing_gen.py +++ /dev/null @@ -1,136 +0,0 @@ -import argparse -import logging -import os -import os.path as osp -import random -import time - -import torch - -from data.parsing_generation_segm_attr_dataset import \ - ParsingGenerationDeepFashionAttrSegmDataset -from models import create_model -from utils.logger import MessageLogger, get_root_logger, init_tb_logger -from utils.options import dict2str, dict_to_nonedict, parse -from utils.util import make_exp_dirs - - -def main(): - # options - parser = argparse.ArgumentParser() - parser.add_argument('-opt', type=str, help='Path to option YAML file.') - args = parser.parse_args() - opt = parse(args.opt, is_train=True) - - # mkdir and loggers - make_exp_dirs(opt) - log_file = osp.join(opt['path']['log'], f"train_{opt['name']}.log") - logger = get_root_logger( - logger_name='base', log_level=logging.INFO, log_file=log_file) - logger.info(dict2str(opt)) - # initialize tensorboard logger - tb_logger = None - if opt['use_tb_logger'] and 'debug' not in opt['name']: - tb_logger = init_tb_logger(log_dir='./tb_logger/' + opt['name']) - - # convert to NoneDict, which returns None for missing keys - opt = dict_to_nonedict(opt) - - # set up data loader - train_dataset = 
ParsingGenerationDeepFashionAttrSegmDataset( - segm_dir=opt['segm_dir'], - pose_dir=opt['pose_dir'], - ann_file=opt['train_ann_file']) - train_loader = torch.utils.data.DataLoader( - dataset=train_dataset, - batch_size=opt['batch_size'], - shuffle=True, - num_workers=opt['num_workers'], - drop_last=True) - logger.info(f'Number of train set: {len(train_dataset)}.') - opt['max_iters'] = opt['num_epochs'] * len( - train_dataset) // opt['batch_size'] - - val_dataset = ParsingGenerationDeepFashionAttrSegmDataset( - segm_dir=opt['segm_dir'], - pose_dir=opt['pose_dir'], - ann_file=opt['val_ann_file']) - val_loader = torch.utils.data.DataLoader( - dataset=val_dataset, - batch_size=1, - shuffle=False, - num_workers=opt['num_workers']) - logger.info(f'Number of val set: {len(val_dataset)}.') - - test_dataset = ParsingGenerationDeepFashionAttrSegmDataset( - segm_dir=opt['segm_dir'], - pose_dir=opt['pose_dir'], - ann_file=opt['test_ann_file']) - test_loader = torch.utils.data.DataLoader( - dataset=test_dataset, - batch_size=1, - shuffle=False, - num_workers=opt['num_workers']) - logger.info(f'Number of test set: {len(test_dataset)}.') - - current_iter = 0 - best_epoch = None - best_acc = 0 - - model = create_model(opt) - - data_time, iter_time = 0, 0 - current_iter = 0 - - # create message logger (formatted outputs) - msg_logger = MessageLogger(opt, current_iter, tb_logger) - - for epoch in range(opt['num_epochs']): - lr = model.update_learning_rate(epoch) - - for _, batch_data in enumerate(train_loader): - data_time = time.time() - data_time - - current_iter += 1 - - model.feed_data(batch_data) - model.optimize_parameters() - - iter_time = time.time() - iter_time - if current_iter % opt['print_freq'] == 0: - log_vars = {'epoch': epoch, 'iter': current_iter} - log_vars.update({'lrs': [lr]}) - log_vars.update({'time': iter_time, 'data_time': data_time}) - log_vars.update(model.get_current_log()) - msg_logger(log_vars) - - data_time = time.time() - iter_time = time.time() - - if epoch % opt['val_freq'] == 0: - save_dir = f'{opt["path"]["visualization"]}/valset/epoch_{epoch:03d}' - os.makedirs(save_dir, exist_ok=opt['debug']) - val_acc = model.inference(val_loader, save_dir) - - save_dir = f'{opt["path"]["visualization"]}/testset/epoch_{epoch:03d}' - os.makedirs(save_dir, exist_ok=opt['debug']) - test_acc = model.inference(test_loader, save_dir) - - logger.info(f'Epoch: {epoch}, ' - f'val_acc: {val_acc: .4f}, ' - f'test_acc: {test_acc: .4f}.') - - if test_acc > best_acc: - best_epoch = epoch - best_acc = test_acc - - logger.info(f'Best epoch: {best_epoch}, ' - f'Best test acc: {best_acc: .4f}.') - - # save model - model.save_network( - f'{opt["path"]["models"]}/parsing_generation_epoch{epoch}.pth') - - -if __name__ == '__main__': - main() diff --git a/spaces/SRankChatGpt/Presentation-Assistant/presentation_assistant/presentation_assistant.py b/spaces/SRankChatGpt/Presentation-Assistant/presentation_assistant/presentation_assistant.py deleted file mode 100644 index bc7f643db022b4cf4c0f575d2f57786f220f9de1..0000000000000000000000000000000000000000 --- a/spaces/SRankChatGpt/Presentation-Assistant/presentation_assistant/presentation_assistant.py +++ /dev/null @@ -1,159 +0,0 @@ -import os -import PyPDF2 -from pptx import Presentation -import openai -import subprocess -from io import BytesIO -import sys - -sys.path.append("/home/user/app") - -# Function to generate text2ppt input prompt -def generate_text2ppt_input_prompt(input_type, input_value, input_pages): - header = """ - Assume you are a designer 
creating a PPT using markdown syntax, and write a PPT of %s pages. - +++ Summarize the content or link below in markdown language, adhering to the rules in ===, and refer to the slide examples in ~~~. - +++ - """ % input_pages - - summary_value = "" - - if input_type == "Link": - summary_value += input_value - summary_value += "\n" - elif input_type == "Text": - summary_value += input_value - summary_value += "\n" - elif input_type == "PDF": - with open(input_value, 'rb') as pdf_file: - pdf_reader = PyPDF2.PdfReader(pdf_file) - num_pages = len(pdf_reader.pages) - - # Convert the content of each page to a string. - text = "" - for page_num in range(num_pages): - page = pdf_reader.pages[page_num] - text += page.extract_text() - summary_value += text - summary_value += "\n" - else: - print("ERROR: Invalid input") - - rule_value = """ - === - - Write factually only about the content or link provided. - - Always use --- as a slide divider. - - Design and arrange the slides diversely with appropriate shapes, images(![Image](Image link), https://unsplash.com/ko/images/stock/non-copyrighted for actual use), tables(|-|), quotes(>), emphasis(bold, ``), emojis(https://kr.piliapp.com/twitter-symbols/), icons (https://kr.piliapp.com/symbol/#popular). - - Use emojis only once in every two pages, and use various other designs. - - When using images and tables, specify the size considering the page size so that all the text content appears. - - Make Slide 1 the title, for a total of %s pages. - - Write the content of the PPT richly in markdown. - - Don't explain slide by slide, just write the code. - - Don't write using the content of the example, just refer to the format. - ~~~ - - # Slide Title - ![Image link](https://huggingface.co/datasets/huggingface/brand-assets/resolve/main/hf-logo-with-title.png) - - This is 🤗**TEXT2PPT service PA!** using ChatGPT. - - Converts `link`,`text`, `PDF` input or upload into PPT. - """ % input_pages - - return header + summary_value + rule_value - - -# Function to execute text2ppt -def text2ppt(token_key, input_prompt, input_theme): - openai.api_key = token_key - - messages = [ - {"role": "system", "content": "You are a kind helpful PPT designer."}, - ] - - message = input_prompt - - if message: - messages.append( - {"role": "user", "content": message}, - ) - chat = openai.ChatCompletion.create( - model="gpt-3.5-turbo-0301", messages=messages - ) - - reply = chat.choices[0].message.content - messages.append({"role": "assistant", "content": reply}) - - md_text = reply[4:] if reply[:3] == "---" else reply - md_text_list = md_text.split('\n') - - f = open("text2ppt_input.md", 'w') - for i in range(0, len(md_text_list)): - data = md_text_list[i] + "\n" - f.write(data) - f.close() - - if input_theme == 'default': - subprocess.run(["/home/user/app/pandoc-2.14.2/bin/pandoc", "/home/user/app/text2ppt_input.md", "-t", "pptx", "-o", "/home/user/app/text2ppt_output.pptx"], capture_output=True) - else: - ppt_theme = "--reference-doc=/home/user/app/template/"+input_theme+".pptx" - subprocess.run(["/home/user/app/pandoc-2.14.2/bin/pandoc", "/home/user/app/text2ppt_input.md", "-t", "pptx", ppt_theme, "-o", "/home/user/app/text2ppt_output.pptx"], capture_output=True) - - -def ppt2script(token_key, input_file, input_type): - openai.api_key = token_key - - if input_type=="PDF": - with open(input_file, 'rb') as pdf_file: - pdf_reader = PyPDF2.PdfReader(pdf_file) - num_pages = len(pdf_reader.pages) - - # Convert the content of each page to a string. 
- text = "" - for page_num in range(num_pages): - page = pdf_reader.pages[page_num] - text += "[PAGE_NUM " + str(page_num + 1) + "]" - text += page.extract_text() - else: - prs = Presentation(input_file) - - text = "" - page_num = 0 - for slide in prs.slides: - text += "[PAGE_NUM " + str(page_num + 1) + "]" - page_num += 1 - for shape in slide.shapes: - if not shape.has_text_frame: - continue - for paragraph in shape.text_frame.paragraphs: - for run in paragraph.runs: - text += run.text - - header = """ - You are an assistant helping with PPT presentations. - ~~~Follow the rules below and write a presentation script for the PPT content below. - ~~~ - - When [PAGE_NUM 1], where 1 is the page number, write a presentation script for each page number. - - Write only in text without using markdown language. - - Add additional explanations or examples to the PPT content. - --- - """ - - input_prompt = header + text - - messages = [ - {"role": "system", "content": "You are a kind helpful PPT Assistant."}, - ] - - message = input_prompt - - if message: - messages.append( - {"role": "user", "content": message}, - ) - chat = openai.ChatCompletion.create( - model="gpt-3.5-turbo-0301", messages=messages - ) - - reply = chat.choices[0].message.content - messages.append({"role": "assistant", "content": reply}) - - return reply diff --git a/spaces/SWHL/PaperEdgeDemo/networks/tps_warp.py b/spaces/SWHL/PaperEdgeDemo/networks/tps_warp.py deleted file mode 100644 index 3a0276bdbbf5049b37ad2bb7a06da9027f0d2403..0000000000000000000000000000000000000000 --- a/spaces/SWHL/PaperEdgeDemo/networks/tps_warp.py +++ /dev/null @@ -1,204 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class TpsWarp(nn.Module): - def __init__(self, s): - super(TpsWarp, self).__init__() - iy, ix = torch.meshgrid(torch.linspace(-1, 1, s), - torch.linspace(-1, 1, s)) - self.gs = torch.stack((ix, iy), dim=2).reshape((1, -1, 2)) - self.sz = s - - def forward(self, src, dst): - # src and dst are B.n.2 - B, n, _ = src.size() - # B.n.1.2 - delta = src.unsqueeze(2) - delta = delta - delta.permute(0, 2, 1, 3) - # B.n.n - K = delta.norm(dim=3) - # Rsq = torch.sum(delta**2, dim=3) - # Rsq += torch.eye(n) - # Rsq[Rsq == 0] = 1. - # K = 0.5 * Rsq * torch.log(Rsq) - # c = -150 - # K = torch.exp(c * Rsq) - # K = torch.abs(Rsq - 0.5) - 0.5 - # WARNING: TORCH.SQRT HAS NAN GRAD AT 0 - # K = torch.sqrt(Rsq) - # print(K) - # K[torch.isnan(K)] = 0. - P = torch.cat((torch.ones((B, n, 1)), src), 2) - L = torch.cat((K, P), 2) - t = torch.cat( - (P.permute(0, 2, 1), torch.zeros((B, 3, 3))), 2) - L = torch.cat((L, t), 1) - # LInv = L.inverse() - # # wv is B.n+3.2 - # wv = torch.matmul(LInv, torch.cat((dst, torch.zeros((B, 3, 2))), 1)) - # the above implementation has stability problem near the boundaries - wv = torch.solve( - torch.cat((dst, torch.zeros((B, 3, 2))), 1), L)[0] - - # get the grid sampler - s = self.gs.size(1) - gs = self.gs - delta = gs.unsqueeze(2) - delta = delta - src.unsqueeze(1) - K = delta.norm(dim=3) - # Rsq = torch.sum(delta**2, dim=3) - # K = torch.exp(c * Rsq) - # Rsq[Rsq == 0] = 1. - # K = 0.5 * Rsq * torch.log(Rsq) - # K = torch.abs(Rsq - 0.5) - 0.5 - # K = torch.sqrt(Rsq) - # K[torch.isnan(K)] = 0. 
- gs = gs.expand(B, -1, -1) - P = torch.cat((torch.ones((B, s, 1)), gs), 2) - L = torch.cat((K, P), 2) - gs = torch.matmul(L, wv) - return gs.reshape(B, self.sz, self.sz, 2).permute(0, 3, 1, 2) - - -class PspWarp(nn.Module): - def __init__(self): - super().__init__() - - def pspmat(self, src, dst): - # B, 4, 2 - B, _, _ = src.size() - s = torch.cat([ - torch.cat([src, - torch.ones((B, 4, 1)), - torch.zeros((B, 4, 3)), - -dst[..., 0: 1] * src[..., 0: 1], -dst[..., 0: 1] * src[..., 1: 2]], dim=2), - torch.cat([torch.zeros((B, 4, 3)), src, torch.ones((B, 4, 1)), - -dst[..., 1: 2] * src[..., 0: 1], -dst[..., 1: 2] * src[..., 1: 2]], dim=2) - ], dim=1) - t = torch.cat([dst[..., 0: 1], dst[..., 1: 2]], dim=1) - # M = s.inverse() @ t - M = torch.solve(t, s)[0] - # M is B 8 1 - return M - - def forward(self, xy, M): - # permute M to B 1 8 - M = M.permute(0, 2, 1) - t = M[..., 6] * xy[..., 0] + M[..., 7] * xy[..., 1] + 1 - u = (M[..., 0] * xy[..., 0] + M[..., 1] * xy[..., 1] + M[..., 2]) / t - v = (M[..., 3] * xy[..., 0] + M[..., 4] * xy[..., 1] + M[..., 5]) / t - return torch.stack((u, v), dim=2) - # for ii in range(4): - # xy = src[:, ii : ii + 1, :] - # uv = dst[:, ii : ii + 1, :] - # t0 = [xy, torch.ones((B, 1, 1)), torch.zeros((B, 1, 3)), -uv[..., 0] * xy[..., 0], -uv[..., 0] * xy[..., 1]] - # t0 = torch.cat(t0, dim=2) - # t1 = [torch.zeros((B, 1, 3)), xy, torch.ones((B, 1, 1)), -uv[..., 1] * xy[..., 0], -uv[..., 1] * xy[..., 1]] - # t1 = torch.cat(t1, dim=2) - - -class IdwWarp(nn.Module): - # inverse distance weighting - def __init__(self, s): - super().__init__() - iy, ix = torch.meshgrid(torch.linspace(-1, 1, s), - torch.linspace(-1, 1, s)) - self.gs = torch.stack((ix, iy), dim=2).reshape((1, -1, 2)).to('cuda') - self.s = s - - def forward(self, src, dst): - # B n 2 - B, n, _ = src.size() - # B.n.1.2 - delta = src.unsqueeze(2) - delta = delta - self.gs.unsqueeze(0) - # B.n.K - p = 1 - Rsq = torch.sum(delta**2, dim=3)**p - w = 1 / Rsq - # turn inf to [0...1...0] - t = torch.isinf(w) - idx = t.any(dim=1).nonzero() - w[idx[:, 0], :, idx[:, 1]] = t[idx[:, 0], :, idx[:, 1]].float() - wwx = w * dst[..., 0: 1] - wwx = wwx.sum(dim=1) / w.sum(dim=1) - wwy = w * dst[..., 1: 2] - wwy = wwy.sum(dim=1) / w.sum(dim=1) - # print(wwy.size()) - gs = torch.stack((wwx, wwy), dim=2).reshape( - B, self.s, self.s, 2).permute(0, 3, 1, 2) - return gs - - -if __name__ == "__main__": - import cv2 - import numpy as np - from hdf5storage import loadmat - from visdom import Visdom - vis = Visdom(port=10086) - - # bm_path = '/nfs/bigdisk/sagnik/swat3d/bm/7/2_471_7-ec_Page_375-5LI0001.mat' - # img_path = '/nfs/bigdisk/sagnik/swat3d/img/7/2_471_7-ec_Page_375-5LI0001.png' - - # bm = loadmat(bm_path)['bm'] - # bm = (bm - 224) / 224. - # bm = cv2.resize(bm, (64, 64), cv2.INTER_LINEAR).astype(np.float32) - - # im = cv2.imread(img_path) / 255. - # im = im[..., ::-1].copy() - # im = cv2.resize(im, (256, 256), cv2.INTER_AREA).astype(np.float32) - # im = torch.from_numpy(im.transpose(2, 0, 1)).unsqueeze(0).to('cuda') - - # x = np.random.choice(np.arange(64), 50, False) - # y = np.random.choice(np.arange(64), 50, False) - - # src = torch.tensor([[x, y]], dtype=torch.float32).permute(0, 2, 1) - # src = (src - 32) / 32. 
- # dst = torch.from_numpy(bm[y, x, :]).unsqueeze(0).to('cuda') - - # # print(src.size()) - # # print(dst.size()) - - # tpswarp = TpsWarp(64) - # import time - # t = time.time() - # for _ in range(100): - # gs = tpswarp(src, dst) - # print(f'time:{time.time() - t}') - # gs = gs.view(-1, 64, 64, 2) - - # print(gs.size()) - # bm2x2 = F.interpolate(gs.permute(0, 3, 1, 2), size=256, mode='bilinear', align_corners=True).permute(0, 2, 3, 1) - - # rim = F.grid_sample(im, bm2x2, align_corners=True) - # vis.images(rim, win='sk3') - tpswarp = TpsWarp(16) - import matplotlib.pyplot as plt - cn = torch.tensor([[-1, -1], [1, -1], [1, 1], [-1, 1], [-0.5, -1], - [0, -1], [0.5, -1]], dtype=torch.float).unsqueeze(0) - pn = torch.tensor([[-1, -0.5], [1, -1], [1, 1], [-1, 0.5], - [-0.5, -1], [0, -0.5], [0.5, -1]]).unsqueeze(0) - pspwarp = PspWarp() - # # print(cn.dtype) - M = pspwarp.pspmat(cn[..., 0: 4, :], pn[..., 0: 4, :]) - invM = pspwarp.pspmat(pn[..., 0: 4, :], cn[..., 0: 4, :]) - # iy, ix = torch.meshgrid(torch.linspace(-1, 1, 8), torch.linspace(-1, 1, 8)) - # gs = torch.stack((ix, iy), dim=2).reshape((1, -1, 2)).to('cuda') - # t = pspwarp(gs, M).reshape(8, 8, 2).detach().cpu().numpy() - # print(M) - - t = tpswarp(cn, pn) - from tsdeform import WarperUtil - wu = WarperUtil(16) - tgs = wu.global_post_warp(t, 16, invM, M) - - t = tgs.permute(0, 2, 3, 1)[0].detach().cpu().numpy() - - plt.clf() - plt.pcolormesh(t[..., 0], t[..., 1], - np.zeros_like(t[..., 0]), edgecolors='r') - plt.gca().invert_yaxis() - plt.gca().axis('equal') - vis.matplot(plt, env='grid', win='mpl') diff --git a/spaces/Salesforce/EDICT/my_diffusers/commands/__init__.py b/spaces/Salesforce/EDICT/my_diffusers/commands/__init__.py deleted file mode 100644 index 902bd46cedc6f2df785c1dc5d2e6bd8ef7c69ca6..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/commands/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from abc import ABC, abstractmethod -from argparse import ArgumentParser - - -class BaseDiffusersCLICommand(ABC): - @staticmethod - @abstractmethod - def register_subcommand(parser: ArgumentParser): - raise NotImplementedError() - - @abstractmethod - def run(self): - raise NotImplementedError() diff --git a/spaces/SeViLA/SeViLA/lavis/models/timesformer/features.py b/spaces/SeViLA/SeViLA/lavis/models/timesformer/features.py deleted file mode 100644 index a1ef6bb31fae6253a1e3f23a2570c290d5cdf432..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/timesformer/features.py +++ /dev/null @@ -1,308 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause - - Based on https://github.com/facebookresearch/TimeSformer -""" - -# Copyright 2020 Ross Wightman - -from collections import OrderedDict, defaultdict -from copy import deepcopy -from functools import partial -from typing import Dict, List, Tuple - -import torch -import torch.nn as nn - - -class FeatureInfo: - def __init__(self, feature_info: List[Dict], out_indices: Tuple[int]): - prev_reduction = 1 - for fi in feature_info: - # sanity check the mandatory fields, there may be additional fields depending on the model - assert "num_chs" in fi and fi["num_chs"] > 0 - assert "reduction" in fi and fi["reduction"] >= prev_reduction - prev_reduction = fi["reduction"] - assert "module" in fi - self.out_indices = out_indices - self.info = feature_info - - def from_other(self, out_indices: Tuple[int]): - return FeatureInfo(deepcopy(self.info), out_indices) - - def get(self, key, idx=None): - """Get value by key at specified index (indices) - if idx == None, returns value for key at each output index - if idx is an integer, return value for that feature module index (ignoring output indices) - if idx is a list/tupple, return value for each module index (ignoring output indices) - """ - if idx is None: - return [self.info[i][key] for i in self.out_indices] - if isinstance(idx, (tuple, list)): - return [self.info[i][key] for i in idx] - else: - return self.info[idx][key] - - def get_dicts(self, keys=None, idx=None): - """return info dicts for specified keys (or all if None) at specified indices (or out_indices if None)""" - if idx is None: - if keys is None: - return [self.info[i] for i in self.out_indices] - else: - return [{k: self.info[i][k] for k in keys} for i in self.out_indices] - if isinstance(idx, (tuple, list)): - return [ - self.info[i] if keys is None else {k: self.info[i][k] for k in keys} - for i in idx - ] - else: - return ( - self.info[idx] if keys is None else {k: self.info[idx][k] for k in keys} - ) - - def channels(self, idx=None): - """feature channels accessor""" - return self.get("num_chs", idx) - - def reduction(self, idx=None): - """feature reduction (output stride) accessor""" - return self.get("reduction", idx) - - def module_name(self, idx=None): - """feature module name accessor""" - return self.get("module", idx) - - def __getitem__(self, item): - return self.info[item] - - def __len__(self): - return len(self.info) - - -class FeatureHooks: - """Feature Hook Helper - This module helps with the setup and extraction of hooks for extracting features from - internal nodes in a model by node name. This works quite well in eager Python but needs - redesign for torcscript. 
- """ - - def __init__(self, hooks, named_modules, out_map=None, default_hook_type="forward"): - # setup feature hooks - modules = {k: v for k, v in named_modules} - for i, h in enumerate(hooks): - hook_name = h["module"] - m = modules[hook_name] - hook_id = out_map[i] if out_map else hook_name - hook_fn = partial(self._collect_output_hook, hook_id) - hook_type = h["hook_type"] if "hook_type" in h else default_hook_type - if hook_type == "forward_pre": - m.register_forward_pre_hook(hook_fn) - elif hook_type == "forward": - m.register_forward_hook(hook_fn) - else: - assert False, "Unsupported hook type" - self._feature_outputs = defaultdict(OrderedDict) - - def _collect_output_hook(self, hook_id, *args): - x = args[ - -1 - ] # tensor we want is last argument, output for fwd, input for fwd_pre - if isinstance(x, tuple): - x = x[0] # unwrap input tuple - self._feature_outputs[x.device][hook_id] = x - - def get_output(self, device) -> Dict[str, torch.tensor]: - output = self._feature_outputs[device] - self._feature_outputs[device] = OrderedDict() # clear after reading - return output - - -def _module_list(module, flatten_sequential=False): - # a yield/iter would be better for this but wouldn't be compatible with torchscript - ml = [] - for name, module in module.named_children(): - if flatten_sequential and isinstance(module, nn.Sequential): - # first level of Sequential containers is flattened into containing model - for child_name, child_module in module.named_children(): - combined = [name, child_name] - ml.append(("_".join(combined), ".".join(combined), child_module)) - else: - ml.append((name, name, module)) - return ml - - -def _get_feature_info(net, out_indices): - feature_info = getattr(net, "feature_info") - if isinstance(feature_info, FeatureInfo): - return feature_info.from_other(out_indices) - elif isinstance(feature_info, (list, tuple)): - return FeatureInfo(net.feature_info, out_indices) - else: - assert False, "Provided feature_info is not valid" - - -def _get_return_layers(feature_info, out_map): - module_names = feature_info.module_name() - return_layers = {} - for i, name in enumerate(module_names): - return_layers[name] = ( - out_map[i] if out_map is not None else feature_info.out_indices[i] - ) - return return_layers - - -class FeatureDictNet(nn.ModuleDict): - """Feature extractor with OrderedDict return - Wrap a model and extract features as specified by the out indices, the network is - partially re-built from contained modules. - There is a strong assumption that the modules have been registered into the model in the same - order as they are used. There should be no reuse of the same nn.Module more than once, including - trivial modules like `self.relu = nn.ReLU`. - Only submodules that are directly assigned to the model class (`model.feature1`) or at most - one Sequential container deep (`model.features.1`, with flatten_sequent=True) can be captured. 
- All Sequential containers that are directly assigned to the original model will have their - modules assigned to this module with the name `model.features.1` being changed to `model.features_1` - Arguments: - model (nn.Module): model from which we will extract the features - out_indices (tuple[int]): model output indices to extract features for - out_map (sequence): list or tuple specifying desired return id for each out index, - otherwise str(index) is used - feature_concat (bool): whether to concatenate intermediate features that are lists or tuples - vs select element [0] - flatten_sequential (bool): whether to flatten sequential modules assigned to model - """ - - def __init__( - self, - model, - out_indices=(0, 1, 2, 3, 4), - out_map=None, - feature_concat=False, - flatten_sequential=False, - ): - super(FeatureDictNet, self).__init__() - self.feature_info = _get_feature_info(model, out_indices) - self.concat = feature_concat - self.return_layers = {} - return_layers = _get_return_layers(self.feature_info, out_map) - modules = _module_list(model, flatten_sequential=flatten_sequential) - remaining = set(return_layers.keys()) - layers = OrderedDict() - for new_name, old_name, module in modules: - layers[new_name] = module - if old_name in remaining: - # return id has to be consistently str type for torchscript - self.return_layers[new_name] = str(return_layers[old_name]) - remaining.remove(old_name) - if not remaining: - break - assert not remaining and len(self.return_layers) == len( - return_layers - ), f"Return layers ({remaining}) are not present in model" - self.update(layers) - - def _collect(self, x) -> (Dict[str, torch.Tensor]): - out = OrderedDict() - for name, module in self.items(): - x = module(x) - if name in self.return_layers: - out_id = self.return_layers[name] - if isinstance(x, (tuple, list)): - # If model tap is a tuple or list, concat or select first element - # FIXME this may need to be more generic / flexible for some nets - out[out_id] = torch.cat(x, 1) if self.concat else x[0] - else: - out[out_id] = x - return out - - def forward(self, x) -> Dict[str, torch.Tensor]: - return self._collect(x) - - -class FeatureListNet(FeatureDictNet): - """Feature extractor with list return - See docstring for FeatureDictNet above, this class exists only to appease Torchscript typing constraints. - In eager Python we could have returned List[Tensor] vs Dict[id, Tensor] based on a member bool. - """ - - def __init__( - self, - model, - out_indices=(0, 1, 2, 3, 4), - out_map=None, - feature_concat=False, - flatten_sequential=False, - ): - super(FeatureListNet, self).__init__( - model, - out_indices=out_indices, - out_map=out_map, - feature_concat=feature_concat, - flatten_sequential=flatten_sequential, - ) - - def forward(self, x) -> (List[torch.Tensor]): - return list(self._collect(x).values()) - - -class FeatureHookNet(nn.ModuleDict): - """FeatureHookNet - Wrap a model and extract features specified by the out indices using forward/forward-pre hooks. - If `no_rewrite` is True, features are extracted via hooks without modifying the underlying - network in any way. - If `no_rewrite` is False, the model will be re-written as in the - FeatureList/FeatureDict case by folding first to second (Sequential only) level modules into this one. 
- FIXME this does not currently work with Torchscript, see FeatureHooks class - """ - - def __init__( - self, - model, - out_indices=(0, 1, 2, 3, 4), - out_map=None, - out_as_dict=False, - no_rewrite=False, - feature_concat=False, - flatten_sequential=False, - default_hook_type="forward", - ): - super(FeatureHookNet, self).__init__() - assert not torch.jit.is_scripting() - self.feature_info = _get_feature_info(model, out_indices) - self.out_as_dict = out_as_dict - layers = OrderedDict() - hooks = [] - if no_rewrite: - assert not flatten_sequential - if hasattr(model, "reset_classifier"): # make sure classifier is removed? - model.reset_classifier(0) - layers["body"] = model - hooks.extend(self.feature_info.get_dicts()) - else: - modules = _module_list(model, flatten_sequential=flatten_sequential) - remaining = { - f["module"]: f["hook_type"] if "hook_type" in f else default_hook_type - for f in self.feature_info.get_dicts() - } - for new_name, old_name, module in modules: - layers[new_name] = module - for fn, fm in module.named_modules(prefix=old_name): - if fn in remaining: - hooks.append(dict(module=fn, hook_type=remaining[fn])) - del remaining[fn] - if not remaining: - break - assert ( - not remaining - ), f"Return layers ({remaining}) are not present in model" - self.update(layers) - self.hooks = FeatureHooks(hooks, model.named_modules(), out_map=out_map) - - def forward(self, x): - for name, module in self.items(): - x = module(x) - out = self.hooks.get_output(x.device) - return out if self.out_as_dict else list(out.values()) diff --git a/spaces/SilenWang/ReviewGPT/utils/review.py b/spaces/SilenWang/ReviewGPT/utils/review.py deleted file mode 100644 index 65c62b2f00196dedd8972aae9e4d437995a942ff..0000000000000000000000000000000000000000 --- a/spaces/SilenWang/ReviewGPT/utils/review.py +++ /dev/null @@ -1,164 +0,0 @@ -# -*- coding: utf-8 -*- - -# import tiktoken -import openai -from openai.embeddings_utils import get_embedding, cosine_similarity -from json import dump, loads -import pandas as pd - -try: - from utils.config import Prompts, OPENAI_KEY, REVIEW_MODEL -except ImportError: - from utils.config_sample import Prompts, OPENAI_KEY, REVIEW_MODEL - -# def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301"): -# ''' -# Returns the number of tokens used by a list of messages. -# 官方给的token计算示例, -# ''' -# try: -# encoding = tiktoken.encoding_for_model(model) -# except KeyError: -# encoding = tiktoken.get_encoding("cl100k_base") -# if model == "gpt-3.5-turbo-0301": # note: future models may deviate from this -# num_tokens = 0 -# for message in messages: -# num_tokens += 4 # every message follows {role/name}\n{content}\n -# for key, value in message.items(): -# num_tokens += len(encoding.encode(value)) -# if key == "name": # if there's a name, the role is omitted -# num_tokens += -1 # role is always required and always 1 token -# num_tokens += 2 # every reply is primed with assistant -# return num_tokens -# else: -# raise NotImplementedError(f"""num_tokens_from_messages() is not presently implemented for model {model}.""") - - -class Reviewer: - ''' - 调用chatGPT进行文献内容查看: - 1. 文献总结: 将不超过10篇的文献摘要总结为一段话 - 2. 
文献内容判断, 根据自定义的Promot判断, 文献是否符合准入标准 - ''' - def __init__(self, api_key=None, model=None): - self.api_key = api_key if api_key else OPENAI_KEY - self.model = model if model else REVIEW_MODEL - self.messages = None - - - def query(self, msg=None): - ''' - 发送请求并获取chatGPT给的结果 - ''' - # 设置email和搜索关键词 - openai.api_key = self.api_key - if msg: - response = openai.ChatCompletion.create( - model = self.model, - messages = [{"role": "user", "content": msg}] - ) - else: - response = openai.ChatCompletion.create( - model = self.model, - messages = self.messages - ) - - return response - - - def screen(self, criterias, abstract): - ''' - meta分析时用的方法, 读取标准内容和文献摘要后, - 判断文章是否符合准入标准 - ''' - message = Prompts['Screen'].format(criterias=criterias, abstract=abstract) - self.messages = [ - {"role": "user", "content": message}, - ] - - return self.query() - - - def summarise(self, papers, prompts): - ''' - meta分析时用的方法, 读取标准内容和文献摘要后, - 判断文章是否符合准入标准 - ''' - units = [] - for idx, (paper_id, abstract) in enumerate(papers): - units.append(Prompts['Summarize_Unit'].format( - idx=idx, - paper_id=paper_id, - abstract=abstract - )) - paper_text = '\n'.join(units) - message = Prompts['Summarize'].format(idx=len(papers), papers=paper_text, questions=prompts) - self.messages = [ - {"role": "user", "content": message}, - ] - - return self.query() - - - def study(self, question: str, paper_data: pd.DataFrame, top: int = 2): - ''' - 阅读文献的部分内容, 给出问题的回答 - step1, 根据问题, 计算embedding - step2, 找到top N 最接近的内容 - step3, 形成prompts, 请求回答 - ''' - question_embedding = get_embedding(question, engine='text-embedding-ada-002') - paper_data['similarity'] = paper_data['embedding'].apply(lambda x: cosine_similarity(x, question_embedding)) - # 找出最符合的n段文本, 默认2, 主要是为了节省token... - top_texts = paper_data.sort_values(by='similarity', ascending=False).head(2)[['page', 'text']].values.tolist() - - units = [] - for page, text in top_texts: - units.append(Prompts['Review_Unit'].format(page=page, content=text)) - - message = Prompts['Review'].format(pages='\n'.join(units), question=question) - - self.messages = [ - {"role": "user", "content": message}, - ] - - return self.query() - - -def screen_demo(): - reviewer = Reviewer() - criterias = ''' - 1. 文献是研究成果文献, 不可以是Meta分析或者文献综述 - 2. 文献的研究内容不是减肥手术、体重干预等对疾病影响 - 3. 文献的研究对象不可以包含男性, 必须全部是女性 - ''' - abstract = ''' - Objective: To assess the separate and combined associations of maternal pre-pregnancy body mass index (BMI) and gestational weight gain with the risks of pregnancy complications and their population impact. - Design: Individual participant data meta-analysis of 39 cohorts. - Setting: Europe, North America, and Oceania. - Population: 265 270 births. - Methods: Information on maternal pre-pregnancy BMI, gestational weight gain, and pregnancy complications was obtained. Multilevel binary logistic regression models were used. - Main outcome measures: Gestational hypertension, pre-eclampsia, gestational diabetes, preterm birth, small and large for gestational age at birth. - Results: Higher maternal pre-pregnancy BMI and gestational weight gain were, across their full ranges, associated with higher risks of gestational hypertensive disorders, gestational diabetes, and large for gestational age at birth. Preterm birth risk was higher at lower and higher BMI and weight gain. Compared with normal weight mothers with medium gestational weight gain, obese mothers with high gestational weight gain had the highest risk of any pregnancy complication (odds ratio 2.51, 95% CI 2.31- 2.74). 
We estimated that 23.9% of any pregnancy complication was attributable to maternal overweight/obesity and 31.6% of large for gestational age infants was attributable to excessive gestational weight gain. - Conclusions: Maternal pre-pregnancy BMI and gestational weight gain are, across their full ranges, associated with risks of pregnancy complications. Obese mothers with high gestational weight gain are at the highest risk of pregnancy complications. Promoting a healthy pre-pregnancy BMI and gestational weight gain may reduce the burden of pregnancy complications and ultimately the risk of maternal and neonatal morbidity. - ''' - # 返回的内容直接解析为Python对象了 - response = reviewer.screen(criterias, abstract) - with open('demo.json', 'w') as rJson: - dump(response, rJson) - - answer = loads(response['choices'][0]['message']['content']) - print(answer) - with open('answer.json', 'w') as aJson: - dump(answer, aJson) - - -def summarise_demo(): - reviewer = Reviewer() - papers = [ - ('12345', 'Background: Little is known about reproductive health in severely obese women. In this study, we present associations between different levels of severe obesity and a wide range of health outcomes in the mother and child. Method(s): From the Danish National Birth Cohort, we obtained self-reported information about prepregnant body mass index (BMI) for 2451 severely obese women and 2450 randomly selected women from the remaining cohort who served as a comparison group. Information about maternal and infant outcomes was also self-reported or came from registers. Logistic regression was used to estimate the association between different levels of severe obesity and reproductive outcomes. Principal Findings: Subfecundity was more frequent in severely obese women, and during pregnancy, they had an excess risk of urinary tract infections, gestational diabetes, preeclampsia and other hypertensive disorders which increased with severity of obesity. They tended to have a higher risk of both pre- and post-term birth, and risk of cesarean and instrumental deliveries increased across obesity categories. After birth, severely obese women more often failed to initiate or sustain breastfeeding. Risk of weight retention 1.5 years after birth was similar to that of other women, but after adjustment for gestational weight gain, the risk was increased, especially in women in the lowest obesity category. In infants, increasing maternal obesity was associated with decreased risk of a low birth weight and increased risk of a high birth weight. Estimates for ponderal index showed the same pattern indicating an increasing risk of neonatal fatness with severity of obesity. Infant obesity measured one year after birth was also increased in children of severely obese mothers. Conclusion(s): Severe obesity is correlated with a substantial disease burden in reproductive health. Although the causal mechanisms remain elusive, these findings are useful for making predictions and planning health care at the individual level. © 2009 Nohr et al.'), - ('45456', 'Background: Preeclampsia is one of the leading causes of maternal and perinatal morbidity and mortality world-wide. The risk for developing preeclampsia varies depending on the underlying mechanism. Because the disorder is heterogeneous, the pathogenesis can differ in women with various risk factors. Understanding these mechanisms of disease responsible for preeclampsia as well as risk assessment is still a major challenge. 
The aim of this study was to determine the risk factors associated with preeclampsia, in healthy women in maternity hospitals of Karachi and Rawalpindi. Method(s): We conducted a hospital based matched case-control study to assess the factors associated with preeclampsia in Karachi and Rawalpindi, from January 2006 to December 2007. 131 hospital-reported cases of PE and 262 controls without history of preeclampsia were enrolled within 3 days of delivery. Cases and controls were matched on the hospital, day of delivery and parity. Potential risk factors for preeclampsia were ascertained during in-person postpartum interviews using a structured questionnaire and by medical record abstraction. Conditional logistic regression was used to estimate matched odds ratios (ORs) and 95% confidence intervals (95% CIs). Result(s): In multivariate analysis, women having a family history of hypertension (adjusted OR 2.06, 95% CI; 1.27-3.35), gestational diabetes (adjusted OR 6.57, 95% CI; 1.94 -22.25), pre-gestational diabetes (adjusted OR 7.36, 95% CI; 1.37-33.66) and mental stress during pregnancy (adjusted OR 1.32; 95% CI; 1.19-1.46, for each 5 unit increase in Perceived stress scale score) were at increased risk of preeclampsia. However, high body mass index, maternal age, urinary tract infection, use of condoms prior to index pregnancy and sociodemographic factors were not associated with higher risk of having preeclampsia. Conclusion(s): Development of preeclampsia was associated with gestational diabetes, pregestational diabetes, family history of hypertension and mental stress during pregnancy. These factors can be used as a screening tool for preeclampsia prediction. Identification of the above mentioned predictors would enhance the ability to diagnose and monitor women likely to develop preeclampsia before the onset of disease for timely interventions and better maternal and fetal outcomes. 
© 2010 Shamsi et al; licensee BioMed Central Ltd.'), - ] - reviewer.summarise(papers) - - diff --git a/spaces/SmileyTatsu/Smile/Dockerfile b/spaces/SmileyTatsu/Smile/Dockerfile deleted file mode 100644 index 97eed882cd9fb47d4d06f4ca56ef3517e29baa19..0000000000000000000000000000000000000000 --- a/spaces/SmileyTatsu/Smile/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/Drago/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/Southstar1/img-to-music/share_btn.py b/spaces/Southstar1/img-to-music/share_btn.py deleted file mode 100644 index 1a2ac6a6e74b114dbd54c2f24723a87180db51ef..0000000000000000000000000000000000000000 --- a/spaces/Southstar1/img-to-music/share_btn.py +++ /dev/null @@ -1,100 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - async function getOutputMusicFile(audioEL){ - const res = await fetch(audioEL.src); - const blob = await res.blob(); - const audioId = Date.now() % 200; - const fileName = `img-to-music-${{audioId}}.wav`; - const musicBlob = new File([blob], fileName, { type: 'audio/wav' }); - console.log(musicBlob); - return musicBlob; - } - - async function audioToBase64(audioFile) { - return new Promise((resolve, reject) => { - let reader = new FileReader(); - reader.readAsDataURL(audioFile); - reader.onload = () => resolve(reader.result); - reader.onerror = error => reject(error); - - }); - } - const gradioEl = document.querySelector('body > gradio-app'); - // const gradioEl = document.querySelector("gradio-app").shadowRoot; - const inputImgEl = gradioEl.querySelector('#input-img img'); - const outputMusic = gradioEl.querySelector('#music-output audio'); - const outputMusic_src = gradioEl.querySelector('#music-output audio').src; - const outputMusic_name = outputMusic_src.split('/').pop(); - let titleTxt = outputMusic_name; - //if(titleTxt.length > 100){ - // titleTxt = titleTxt.slice(0, 100) + ' ...'; - //} - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputMusic){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const inputFile = await getInputImgFile(inputImgEl); - const urlInputImg = await uploadFile(inputFile); - const musicFile = await getOutputMusicFile(outputMusic); - const dataOutputMusic = await uploadFile(musicFile); 
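-    // `uploadFile` POSTs the image and audio blobs to https://huggingface.co/uploads and
-    // returns their hosted URLs; the code below then opens a pre-filled "new discussion"
-    // page on the fffiloni/img-to-music Space with the generated title and description.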
-
-    const descriptionMd = `#### Input img:
-
-
-#### Music:
-
-
-`;
-    const params = new URLSearchParams({
-        title: titleTxt,
-        description: descriptionMd,
-    });
-    const paramsStr = params.toString();
-    window.open(`https://huggingface.co/spaces/fffiloni/img-to-music/discussions/new?${paramsStr}`, '_blank');
-    shareBtnEl.style.removeProperty('pointer-events');
-    shareIconEl.style.removeProperty('display');
-    loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
diff --git a/spaces/Srihari1611/Gender_Classification/app.py b/spaces/Srihari1611/Gender_Classification/app.py
deleted file mode 100644
index c68764182e53d91111fa8417d2dff97f3da48afa..0000000000000000000000000000000000000000
--- a/spaces/Srihari1611/Gender_Classification/app.py
+++ /dev/null
@@ -1,13 +0,0 @@
-import gradio as gr
-import fastai
-from fastai.vision.all import *
-learn=load_learner("Gender_classification_with 0.8751480728387833_accuracy.pkl")
-categories=('female','male')
-def classify_image(img):
-    pred,idx,probs=learn.predict(img)
-    return dict(zip(categories,map(float,probs)))
-image=gr.inputs.Image(shape=(192,192))
-label=gr.outputs.Label()
-examples=['male.jpg','female.jpg']
-intf=gr.Interface(fn=classify_image,inputs=image,outputs=label,examples=examples)
-intf.launch(inline=False)
\ No newline at end of file
diff --git a/spaces/Sukhyun/MBTI_translator/README.md b/spaces/Sukhyun/MBTI_translator/README.md
deleted file mode 100644
index 6a40ae8a310e7f425a955e58995f3832728ef56f..0000000000000000000000000000000000000000
--- a/spaces/Sukhyun/MBTI_translator/README.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title: Text to MBTI
-emoji: 🤗
-colorFrom: yellow
-colorTo: orange
-sdk: streamlit
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-# Text to MBTI (Zero-shot model)
-😄 MBTI Translator (infer your personality type from your sentence)
-
-Using the Streamlit library, this project implements an application with a UI for the following project: https://github.com/ethHong/text_mbti
-
-## About the project & Examples
-* Put in a sentence and click the 'generate' button. Using a zero-shot classification model, the app predicts your most probable MBTI type.
-Screenshot 2023-01-08 at 3 04 29 PM
-
-Screenshot 2023-01-08 at 3 05 18 PM
-
-
-## How it works?
-* This app uses the zero-shot classification model from Facebook.
-* The model uses pre-defined dictionary data which represents each of the 16 personality types.
-* It computes the probability that the input sentence is relevant to each of the keywords mapped in the dictionary of the 16 personality types (see the illustrative sketch in the 'Model and requirements' section below).
-![image](https://user-images.githubusercontent.com/43837843/211183359-ad2cf761-99a7-467f-8bb9-cdf308bc019e.png)
-
-
-## Output sample
-```
-Input: "I stayed home all day"
-
-===
-
-Output:
-
-You are: ISFP
-Ratio {'E': 27.338588094108168, 'I': 72.66141190589182} {'N': 22.149243913056992, 'S': 77.85075608694301} {'T': 46.17274433748438, 'F': 53.82725566251562} {'P': 57.30466611213056, 'J': 42.69533388786944}
-```
-
-```
-Input: "I'm making plans for my trip to Osaka. I'm so excited!"
-
-===
-
-Output:
-
-You are: ESTJ
-Ratio {'E': 71.53464326345417, 'I': 28.46535673654582} {'N': 35.33135528913844, 'S': 64.66864471086156} {'T': 58.70273162646018, 'F': 41.29726837353982} {'P': 46.96476087995551, 'J': 53.03523912004449}
-```
-
-## Model and requirements
-* Model reference: https://huggingface.co/facebook/bart-large-mnli
-* Environment setup:
-> 02.20 update: Use requirements.txt to set up the required libraries.
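The project's real keyword dictionary lives in the repository code rather than in this README, so the following is only a minimal sketch of the zero-shot scoring idea described above, using the referenced `facebook/bart-large-mnli` checkpoint via the `transformers` pipeline; `AXIS_KEYWORDS` and `predict_mbti` are hypothetical stand-ins, not the app's actual code.

```
from transformers import pipeline

# Zero-shot classifier backed by the checkpoint referenced above.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Hypothetical keyword pair per MBTI axis (the real app maps many keywords per type).
AXIS_KEYWORDS = {
    ("E", "I"): ["outgoing and social", "quiet and reflective"],
    ("N", "S"): ["imaginative and abstract", "practical and concrete"],
    ("T", "F"): ["logical and analytical", "empathetic and emotional"],
    ("P", "J"): ["spontaneous and flexible", "organized and planned"],
}

def predict_mbti(sentence: str) -> str:
    mbti = ""
    for (pole_a, pole_b), labels in AXIS_KEYWORDS.items():
        result = classifier(sentence, candidate_labels=labels)
        # result["labels"] is sorted by score, highest first.
        mbti += pole_a if result["labels"][0] == labels[0] else pole_b
    return mbti

print(predict_mbti("I stayed home all day"))
```

The sketch assumes `transformers` and a backend such as `torch` are installed; the repository's requirements.txt, installed with the commands below, should cover this.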
- -``` -pip install -r requirements.txt -``` -``` -streamlit run app.py -``` \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/display.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/display.py deleted file mode 100644 index f39f389f98b208ee990f898a70d827a9319c6c92..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/display.py +++ /dev/null @@ -1,677 +0,0 @@ -"""Various display related classes. - -Authors : MinRK, gregcaporaso, dannystaple -""" -from html import escape as html_escape -from os.path import exists, isfile, splitext, abspath, join, isdir -from os import walk, sep, fsdecode - -from IPython.core.display import DisplayObject, TextDisplayObject - -from typing import Tuple, Iterable, Optional - -__all__ = ['Audio', 'IFrame', 'YouTubeVideo', 'VimeoVideo', 'ScribdDocument', - 'FileLink', 'FileLinks', 'Code'] - - -class Audio(DisplayObject): - """Create an audio object. - - When this object is returned by an input cell or passed to the - display function, it will result in Audio controls being displayed - in the frontend (only works in the notebook). - - Parameters - ---------- - data : numpy array, list, unicode, str or bytes - Can be one of - - * Numpy 1d array containing the desired waveform (mono) - * Numpy 2d array containing waveforms for each channel. - Shape=(NCHAN, NSAMPLES). For the standard channel order, see - http://msdn.microsoft.com/en-us/library/windows/hardware/dn653308(v=vs.85).aspx - * List of float or integer representing the waveform (mono) - * String containing the filename - * Bytestring containing raw PCM data or - * URL pointing to a file on the web. - - If the array option is used, the waveform will be normalized. - - If a filename or url is used, the format support will be browser - dependent. - url : unicode - A URL to download the data from. - filename : unicode - Path to a local file to load the data from. - embed : boolean - Should the audio data be embedded using a data URI (True) or should - the original source be referenced. Set this to True if you want the - audio to playable later with no internet connection in the notebook. - - Default is `True`, unless the keyword argument `url` is set, then - default value is `False`. - rate : integer - The sampling rate of the raw data. - Only required when data parameter is being used as an array - autoplay : bool - Set to True if the audio should immediately start playing. - Default is `False`. - normalize : bool - Whether audio should be normalized (rescaled) to the maximum possible - range. Default is `True`. When set to `False`, `data` must be between - -1 and 1 (inclusive), otherwise an error is raised. - Applies only when `data` is a list or array of samples; other types of - audio are never normalized. 
- - Examples - -------- - - >>> import pytest - >>> np = pytest.importorskip("numpy") - - Generate a sound - - >>> import numpy as np - >>> framerate = 44100 - >>> t = np.linspace(0,5,framerate*5) - >>> data = np.sin(2*np.pi*220*t) + np.sin(2*np.pi*224*t) - >>> Audio(data, rate=framerate) - - - Can also do stereo or more channels - - >>> dataleft = np.sin(2*np.pi*220*t) - >>> dataright = np.sin(2*np.pi*224*t) - >>> Audio([dataleft, dataright], rate=framerate) - - - From URL: - - >>> Audio("http://www.nch.com.au/acm/8k16bitpcm.wav") # doctest: +SKIP - >>> Audio(url="http://www.w3schools.com/html/horse.ogg") # doctest: +SKIP - - From a File: - - >>> Audio('IPython/lib/tests/test.wav') # doctest: +SKIP - >>> Audio(filename='IPython/lib/tests/test.wav') # doctest: +SKIP - - From Bytes: - - >>> Audio(b'RAW_WAV_DATA..') # doctest: +SKIP - >>> Audio(data=b'RAW_WAV_DATA..') # doctest: +SKIP - - See Also - -------- - ipywidgets.Audio - - Audio widget with more more flexibility and options. - - """ - _read_flags = 'rb' - - def __init__(self, data=None, filename=None, url=None, embed=None, rate=None, autoplay=False, normalize=True, *, - element_id=None): - if filename is None and url is None and data is None: - raise ValueError("No audio data found. Expecting filename, url, or data.") - if embed is False and url is None: - raise ValueError("No url found. Expecting url when embed=False") - - if url is not None and embed is not True: - self.embed = False - else: - self.embed = True - self.autoplay = autoplay - self.element_id = element_id - super(Audio, self).__init__(data=data, url=url, filename=filename) - - if self.data is not None and not isinstance(self.data, bytes): - if rate is None: - raise ValueError("rate must be specified when data is a numpy array or list of audio samples.") - self.data = Audio._make_wav(data, rate, normalize) - - def reload(self): - """Reload the raw data from file or URL.""" - import mimetypes - if self.embed: - super(Audio, self).reload() - - if self.filename is not None: - self.mimetype = mimetypes.guess_type(self.filename)[0] - elif self.url is not None: - self.mimetype = mimetypes.guess_type(self.url)[0] - else: - self.mimetype = "audio/wav" - - @staticmethod - def _make_wav(data, rate, normalize): - """ Transform a numpy array to a PCM bytestring """ - from io import BytesIO - import wave - - try: - scaled, nchan = Audio._validate_and_normalize_with_numpy(data, normalize) - except ImportError: - scaled, nchan = Audio._validate_and_normalize_without_numpy(data, normalize) - - fp = BytesIO() - waveobj = wave.open(fp,mode='wb') - waveobj.setnchannels(nchan) - waveobj.setframerate(rate) - waveobj.setsampwidth(2) - waveobj.setcomptype('NONE','NONE') - waveobj.writeframes(scaled) - val = fp.getvalue() - waveobj.close() - - return val - - @staticmethod - def _validate_and_normalize_with_numpy(data, normalize) -> Tuple[bytes, int]: - import numpy as np - - data = np.array(data, dtype=float) - if len(data.shape) == 1: - nchan = 1 - elif len(data.shape) == 2: - # In wave files,channels are interleaved. E.g., - # "L1R1L2R2..." for stereo. 
See - # http://msdn.microsoft.com/en-us/library/windows/hardware/dn653308(v=vs.85).aspx - # for channel ordering - nchan = data.shape[0] - data = data.T.ravel() - else: - raise ValueError('Array audio input must be a 1D or 2D array') - - max_abs_value = np.max(np.abs(data)) - normalization_factor = Audio._get_normalization_factor(max_abs_value, normalize) - scaled = data / normalization_factor * 32767 - return scaled.astype(" 1: - raise ValueError('Audio data must be between -1 and 1 when normalize=False.') - return max_abs_value if normalize else 1 - - def _data_and_metadata(self): - """shortcut for returning metadata with url information, if defined""" - md = {} - if self.url: - md['url'] = self.url - if md: - return self.data, md - else: - return self.data - - def _repr_html_(self): - src = """ - - """ - return src.format(src=self.src_attr(), type=self.mimetype, autoplay=self.autoplay_attr(), - element_id=self.element_id_attr()) - - def src_attr(self): - import base64 - if self.embed and (self.data is not None): - data = base64=base64.b64encode(self.data).decode('ascii') - return """data:{type};base64,{base64}""".format(type=self.mimetype, - base64=data) - elif self.url is not None: - return self.url - else: - return "" - - def autoplay_attr(self): - if(self.autoplay): - return 'autoplay="autoplay"' - else: - return '' - - def element_id_attr(self): - if (self.element_id): - return 'id="{element_id}"'.format(element_id=self.element_id) - else: - return '' - -class IFrame(object): - """ - Generic class to embed an iframe in an IPython notebook - """ - - iframe = """ - - """ - - def __init__( - self, src, width, height, extras: Optional[Iterable[str]] = None, **kwargs - ): - if extras is None: - extras = [] - - self.src = src - self.width = width - self.height = height - self.extras = extras - self.params = kwargs - - def _repr_html_(self): - """return the embed iframe""" - if self.params: - from urllib.parse import urlencode - params = "?" + urlencode(self.params) - else: - params = "" - return self.iframe.format( - src=self.src, - width=self.width, - height=self.height, - params=params, - extras=" ".join(self.extras), - ) - - -class YouTubeVideo(IFrame): - """Class for embedding a YouTube Video in an IPython session, based on its video id. - - e.g. to embed the video from https://www.youtube.com/watch?v=foo , you would - do:: - - vid = YouTubeVideo("foo") - display(vid) - - To start from 30 seconds:: - - vid = YouTubeVideo("abc", start=30) - display(vid) - - To calculate seconds from time as hours, minutes, seconds use - :class:`datetime.timedelta`:: - - start=int(timedelta(hours=1, minutes=46, seconds=40).total_seconds()) - - Other parameters can be provided as documented at - https://developers.google.com/youtube/player_parameters#Parameters - - When converting the notebook using nbconvert, a jpeg representation of the video - will be inserted in the document. 
- """ - - def __init__(self, id, width=400, height=300, allow_autoplay=False, **kwargs): - self.id=id - src = "https://www.youtube.com/embed/{0}".format(id) - if allow_autoplay: - extras = list(kwargs.get("extras", [])) + ['allow="autoplay"'] - kwargs.update(autoplay=1, extras=extras) - super(YouTubeVideo, self).__init__(src, width, height, **kwargs) - - def _repr_jpeg_(self): - # Deferred import - from urllib.request import urlopen - - try: - return urlopen("https://img.youtube.com/vi/{id}/hqdefault.jpg".format(id=self.id)).read() - except IOError: - return None - -class VimeoVideo(IFrame): - """ - Class for embedding a Vimeo video in an IPython session, based on its video id. - """ - - def __init__(self, id, width=400, height=300, **kwargs): - src="https://player.vimeo.com/video/{0}".format(id) - super(VimeoVideo, self).__init__(src, width, height, **kwargs) - -class ScribdDocument(IFrame): - """ - Class for embedding a Scribd document in an IPython session - - Use the start_page params to specify a starting point in the document - Use the view_mode params to specify display type one off scroll | slideshow | book - - e.g to Display Wes' foundational paper about PANDAS in book mode from page 3 - - ScribdDocument(71048089, width=800, height=400, start_page=3, view_mode="book") - """ - - def __init__(self, id, width=400, height=300, **kwargs): - src="https://www.scribd.com/embeds/{0}/content".format(id) - super(ScribdDocument, self).__init__(src, width, height, **kwargs) - -class FileLink(object): - """Class for embedding a local file link in an IPython session, based on path - - e.g. to embed a link that was generated in the IPython notebook as my/data.txt - - you would do:: - - local_file = FileLink("my/data.txt") - display(local_file) - - or in the HTML notebook, just:: - - FileLink("my/data.txt") - """ - - html_link_str = "%s" - - def __init__(self, - path, - url_prefix='', - result_html_prefix='', - result_html_suffix='
        '): - """ - Parameters - ---------- - path : str - path to the file or directory that should be formatted - url_prefix : str - prefix to be prepended to all files to form a working link [default: - ''] - result_html_prefix : str - text to append to beginning to link [default: ''] - result_html_suffix : str - text to append at the end of link [default: '
        '] - """ - if isdir(path): - raise ValueError("Cannot display a directory using FileLink. " - "Use FileLinks to display '%s'." % path) - self.path = fsdecode(path) - self.url_prefix = url_prefix - self.result_html_prefix = result_html_prefix - self.result_html_suffix = result_html_suffix - - def _format_path(self): - fp = ''.join([self.url_prefix, html_escape(self.path)]) - return ''.join([self.result_html_prefix, - self.html_link_str % \ - (fp, html_escape(self.path, quote=False)), - self.result_html_suffix]) - - def _repr_html_(self): - """return html link to file - """ - if not exists(self.path): - return ("Path (%s) doesn't exist. " - "It may still be in the process of " - "being generated, or you may have the " - "incorrect path." % self.path) - - return self._format_path() - - def __repr__(self): - """return absolute path to file - """ - return abspath(self.path) - -class FileLinks(FileLink): - """Class for embedding local file links in an IPython session, based on path - - e.g. to embed links to files that were generated in the IPython notebook - under ``my/data``, you would do:: - - local_files = FileLinks("my/data") - display(local_files) - - or in the HTML notebook, just:: - - FileLinks("my/data") - """ - def __init__(self, - path, - url_prefix='', - included_suffixes=None, - result_html_prefix='', - result_html_suffix='
        ', - notebook_display_formatter=None, - terminal_display_formatter=None, - recursive=True): - """ - See :class:`FileLink` for the ``path``, ``url_prefix``, - ``result_html_prefix`` and ``result_html_suffix`` parameters. - - included_suffixes : list - Filename suffixes to include when formatting output [default: include - all files] - - notebook_display_formatter : function - Used to format links for display in the notebook. See discussion of - formatter functions below. - - terminal_display_formatter : function - Used to format links for display in the terminal. See discussion of - formatter functions below. - - Formatter functions must be of the form:: - - f(dirname, fnames, included_suffixes) - - dirname : str - The name of a directory - fnames : list - The files in that directory - included_suffixes : list - The file suffixes that should be included in the output (passing None - meansto include all suffixes in the output in the built-in formatters) - recursive : boolean - Whether to recurse into subdirectories. Default is True. - - The function should return a list of lines that will be printed in the - notebook (if passing notebook_display_formatter) or the terminal (if - passing terminal_display_formatter). This function is iterated over for - each directory in self.path. Default formatters are in place, can be - passed here to support alternative formatting. - - """ - if isfile(path): - raise ValueError("Cannot display a file using FileLinks. " - "Use FileLink to display '%s'." % path) - self.included_suffixes = included_suffixes - # remove trailing slashes for more consistent output formatting - path = path.rstrip('/') - - self.path = path - self.url_prefix = url_prefix - self.result_html_prefix = result_html_prefix - self.result_html_suffix = result_html_suffix - - self.notebook_display_formatter = \ - notebook_display_formatter or self._get_notebook_display_formatter() - self.terminal_display_formatter = \ - terminal_display_formatter or self._get_terminal_display_formatter() - - self.recursive = recursive - - def _get_display_formatter( - self, dirname_output_format, fname_output_format, fp_format, fp_cleaner=None - ): - """generate built-in formatter function - - this is used to define both the notebook and terminal built-in - formatters as they only differ by some wrapper text for each entry - - dirname_output_format: string to use for formatting directory - names, dirname will be substituted for a single "%s" which - must appear in this string - fname_output_format: string to use for formatting file names, - if a single "%s" appears in the string, fname will be substituted - if two "%s" appear in the string, the path to fname will be - substituted for the first and fname will be substituted for the - second - fp_format: string to use for formatting filepaths, must contain - exactly two "%s" and the dirname will be substituted for the first - and fname will be substituted for the second - """ - def f(dirname, fnames, included_suffixes=None): - result = [] - # begin by figuring out which filenames, if any, - # are going to be displayed - display_fnames = [] - for fname in fnames: - if (isfile(join(dirname,fname)) and - (included_suffixes is None or - splitext(fname)[1] in included_suffixes)): - display_fnames.append(fname) - - if len(display_fnames) == 0: - # if there are no filenames to display, don't print anything - # (not even the directory name) - pass - else: - # otherwise print the formatted directory name followed by - # the formatted filenames - 
dirname_output_line = dirname_output_format % dirname - result.append(dirname_output_line) - for fname in display_fnames: - fp = fp_format % (dirname,fname) - if fp_cleaner is not None: - fp = fp_cleaner(fp) - try: - # output can include both a filepath and a filename... - fname_output_line = fname_output_format % (fp, fname) - except TypeError: - # ... or just a single filepath - fname_output_line = fname_output_format % fname - result.append(fname_output_line) - return result - return f - - def _get_notebook_display_formatter(self, - spacer="  "): - """ generate function to use for notebook formatting - """ - dirname_output_format = \ - self.result_html_prefix + "%s/" + self.result_html_suffix - fname_output_format = \ - self.result_html_prefix + spacer + self.html_link_str + self.result_html_suffix - fp_format = self.url_prefix + '%s/%s' - if sep == "\\": - # Working on a platform where the path separator is "\", so - # must convert these to "/" for generating a URI - def fp_cleaner(fp): - # Replace all occurrences of backslash ("\") with a forward - # slash ("/") - this is necessary on windows when a path is - # provided as input, but we must link to a URI - return fp.replace('\\','/') - else: - fp_cleaner = None - - return self._get_display_formatter(dirname_output_format, - fname_output_format, - fp_format, - fp_cleaner) - - def _get_terminal_display_formatter(self, - spacer=" "): - """ generate function to use for terminal formatting - """ - dirname_output_format = "%s/" - fname_output_format = spacer + "%s" - fp_format = '%s/%s' - - return self._get_display_formatter(dirname_output_format, - fname_output_format, - fp_format) - - def _format_path(self): - result_lines = [] - if self.recursive: - walked_dir = list(walk(self.path)) - else: - walked_dir = [next(walk(self.path))] - walked_dir.sort() - for dirname, subdirs, fnames in walked_dir: - result_lines += self.notebook_display_formatter(dirname, fnames, self.included_suffixes) - return '\n'.join(result_lines) - - def __repr__(self): - """return newline-separated absolute paths - """ - result_lines = [] - if self.recursive: - walked_dir = list(walk(self.path)) - else: - walked_dir = [next(walk(self.path))] - walked_dir.sort() - for dirname, subdirs, fnames in walked_dir: - result_lines += self.terminal_display_formatter(dirname, fnames, self.included_suffixes) - return '\n'.join(result_lines) - - -class Code(TextDisplayObject): - """Display syntax-highlighted source code. - - This uses Pygments to highlight the code for HTML and Latex output. - - Parameters - ---------- - data : str - The code as a string - url : str - A URL to fetch the code from - filename : str - A local filename to load the code from - language : str - The short name of a Pygments lexer to use for highlighting. - If not specified, it will guess the lexer based on the filename - or the code. 
Available lexers: http://pygments.org/docs/lexers/ - """ - def __init__(self, data=None, url=None, filename=None, language=None): - self.language = language - super().__init__(data=data, url=url, filename=filename) - - def _get_lexer(self): - if self.language: - from pygments.lexers import get_lexer_by_name - return get_lexer_by_name(self.language) - elif self.filename: - from pygments.lexers import get_lexer_for_filename - return get_lexer_for_filename(self.filename) - else: - from pygments.lexers import guess_lexer - return guess_lexer(self.data) - - def __repr__(self): - return self.data - - def _repr_html_(self): - from pygments import highlight - from pygments.formatters import HtmlFormatter - fmt = HtmlFormatter() - style = ''.format(fmt.get_style_defs('.output_html')) - return style + highlight(self.data, self._get_lexer(), fmt) - - def _repr_latex_(self): - from pygments import highlight - from pygments.formatters import LatexFormatter - return highlight(self.data, self._get_lexer(), LatexFormatter()) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_module_paths.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_module_paths.py deleted file mode 100644 index 8dc52fd3234fe23a490b8e98e12deacd174bd1f0..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_module_paths.py +++ /dev/null @@ -1,107 +0,0 @@ -# encoding: utf-8 -"""Tests for IPython.utils.module_paths.py""" - -#----------------------------------------------------------------------------- -# Copyright (C) 2008-2011 The IPython Development Team -# -# Distributed under the terms of the BSD License. The full license is in -# the file COPYING, distributed as part of this software. -#----------------------------------------------------------------------------- - -#----------------------------------------------------------------------------- -# Imports -#----------------------------------------------------------------------------- - -import shutil -import sys -import tempfile - -from pathlib import Path - -import IPython.utils.module_paths as mp - -TEST_FILE_PATH = Path(__file__).resolve().parent - -TMP_TEST_DIR = Path(tempfile.mkdtemp(suffix="with.dot")) -# -# Setup/teardown functions/decorators -# - -old_syspath = sys.path - -def make_empty_file(fname): - open(fname, "w", encoding="utf-8").close() - - -def setup_module(): - """Setup testenvironment for the module: - - """ - # Do not mask exceptions here. In particular, catching WindowsError is a - # problem because that exception is only defined on Windows... - Path(TMP_TEST_DIR / "xmod").mkdir(parents=True) - Path(TMP_TEST_DIR / "nomod").mkdir(parents=True) - make_empty_file(TMP_TEST_DIR / "xmod/__init__.py") - make_empty_file(TMP_TEST_DIR / "xmod/sub.py") - make_empty_file(TMP_TEST_DIR / "pack.py") - make_empty_file(TMP_TEST_DIR / "packpyc.pyc") - sys.path = [str(TMP_TEST_DIR)] - -def teardown_module(): - """Teardown testenvironment for the module: - - - Remove tempdir - - restore sys.path - """ - # Note: we remove the parent test dir, which is the root of all test - # subdirs we may have created. Use shutil instead of os.removedirs, so - # that non-empty directories are all recursively removed. - shutil.rmtree(TMP_TEST_DIR) - sys.path = old_syspath - -def test_tempdir(): - """ - Ensure the test are done with a temporary file that have a dot somewhere. - """ - assert "." 
in str(TMP_TEST_DIR) - - -def test_find_mod_1(): - """ - Search for a directory's file path. - Expected output: a path to that directory's __init__.py file. - """ - modpath = TMP_TEST_DIR / "xmod" / "__init__.py" - assert Path(mp.find_mod("xmod")) == modpath - -def test_find_mod_2(): - """ - Search for a directory's file path. - Expected output: a path to that directory's __init__.py file. - TODO: Confirm why this is a duplicate test. - """ - modpath = TMP_TEST_DIR / "xmod" / "__init__.py" - assert Path(mp.find_mod("xmod")) == modpath - -def test_find_mod_3(): - """ - Search for a directory + a filename without its .py extension - Expected output: full path with .py extension. - """ - modpath = TMP_TEST_DIR / "xmod" / "sub.py" - assert Path(mp.find_mod("xmod.sub")) == modpath - -def test_find_mod_4(): - """ - Search for a filename without its .py extension - Expected output: full path with .py extension - """ - modpath = TMP_TEST_DIR / "pack.py" - assert Path(mp.find_mod("pack")) == modpath - -def test_find_mod_5(): - """ - Search for a filename with a .pyc extension - Expected output: TODO: do we exclude or include .pyc files? - """ - assert mp.find_mod("packpyc") == None diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_process_net_command_json.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_process_net_command_json.py deleted file mode 100644 index 42eb599146851fe3ff182caa1b4033a3a6d77e12..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_process_net_command_json.py +++ /dev/null @@ -1,1341 +0,0 @@ -import itertools -import json -import linecache -import os -import platform -import sys -from functools import partial - -import pydevd_file_utils -from _pydev_bundle import pydev_log -from _pydevd_bundle._debug_adapter import pydevd_base_schema, pydevd_schema -from _pydevd_bundle._debug_adapter.pydevd_schema import ( - CompletionsResponseBody, EvaluateResponseBody, ExceptionOptions, - GotoTargetsResponseBody, ModulesResponseBody, ProcessEventBody, - ProcessEvent, Scope, ScopesResponseBody, SetExpressionResponseBody, - SetVariableResponseBody, SourceBreakpoint, SourceResponseBody, - VariablesResponseBody, SetBreakpointsResponseBody, Response, - Capabilities, PydevdAuthorizeRequest, Request, - StepInTargetsResponseBody, SetFunctionBreakpointsResponseBody, BreakpointEvent, - BreakpointEventBody, InitializedEvent) -from _pydevd_bundle.pydevd_api import PyDevdAPI -from _pydevd_bundle.pydevd_breakpoints import get_exception_class, FunctionBreakpoint -from _pydevd_bundle.pydevd_comm_constants import ( - CMD_PROCESS_EVENT, CMD_RETURN, CMD_SET_NEXT_STATEMENT, CMD_STEP_INTO, - CMD_STEP_INTO_MY_CODE, CMD_STEP_OVER, CMD_STEP_OVER_MY_CODE, file_system_encoding, - CMD_STEP_RETURN_MY_CODE, CMD_STEP_RETURN) -from _pydevd_bundle.pydevd_filtering import ExcludeFilter -from _pydevd_bundle.pydevd_json_debug_options import _extract_debug_options, DebugOptions -from _pydevd_bundle.pydevd_net_command import NetCommand -from _pydevd_bundle.pydevd_utils import convert_dap_log_message_to_expression, ScopeRequest -from _pydevd_bundle.pydevd_constants import (PY_IMPL_NAME, DebugInfoHolder, PY_VERSION_STR, - PY_IMPL_VERSION_STR, IS_64BIT_PROCESS) -from _pydevd_bundle.pydevd_trace_dispatch import USING_CYTHON -from _pydevd_frame_eval.pydevd_frame_eval_main import USING_FRAME_EVAL -from 
_pydevd_bundle.pydevd_comm import internal_get_step_in_targets_json -from _pydevd_bundle.pydevd_additional_thread_info import set_additional_thread_info -from _pydevd_bundle.pydevd_thread_lifecycle import pydevd_find_thread_by_id - - -def _convert_rules_to_exclude_filters(rules, on_error): - exclude_filters = [] - if not isinstance(rules, list): - on_error('Invalid "rules" (expected list of dicts). Found: %s' % (rules,)) - - else: - directory_exclude_filters = [] - module_exclude_filters = [] - glob_exclude_filters = [] - - for rule in rules: - if not isinstance(rule, dict): - on_error('Invalid "rules" (expected list of dicts). Found: %s' % (rules,)) - continue - - include = rule.get('include') - if include is None: - on_error('Invalid "rule" (expected dict with "include"). Found: %s' % (rule,)) - continue - - path = rule.get('path') - module = rule.get('module') - if path is None and module is None: - on_error('Invalid "rule" (expected dict with "path" or "module"). Found: %s' % (rule,)) - continue - - if path is not None: - glob_pattern = path - if '*' not in path and '?' not in path: - if os.path.isdir(glob_pattern): - # If a directory was specified, add a '/**' - # to be consistent with the glob pattern required - # by pydevd. - if not glob_pattern.endswith('/') and not glob_pattern.endswith('\\'): - glob_pattern += '/' - glob_pattern += '**' - directory_exclude_filters.append(ExcludeFilter(glob_pattern, not include, True)) - else: - glob_exclude_filters.append(ExcludeFilter(glob_pattern, not include, True)) - - elif module is not None: - module_exclude_filters.append(ExcludeFilter(module, not include, False)) - - else: - on_error('Internal error: expected path or module to be specified.') - - # Note that we have to sort the directory/module exclude filters so that the biggest - # paths match first. - # i.e.: if we have: - # /sub1/sub2/sub3 - # a rule with /sub1/sub2 would match before a rule only with /sub1. - directory_exclude_filters = sorted(directory_exclude_filters, key=lambda exclude_filter:-len(exclude_filter.name)) - module_exclude_filters = sorted(module_exclude_filters, key=lambda exclude_filter:-len(exclude_filter.name)) - exclude_filters = directory_exclude_filters + glob_exclude_filters + module_exclude_filters - - return exclude_filters - - -class IDMap(object): - - def __init__(self): - self._value_to_key = {} - self._key_to_value = {} - self._next_id = partial(next, itertools.count(0)) - - def obtain_value(self, key): - return self._key_to_value[key] - - def obtain_key(self, value): - try: - key = self._value_to_key[value] - except KeyError: - key = self._next_id() - self._key_to_value[key] = value - self._value_to_key[value] = key - return key - - -class PyDevJsonCommandProcessor(object): - - def __init__(self, from_json): - self.from_json = from_json - self.api = PyDevdAPI() - self._options = DebugOptions() - self._next_breakpoint_id = partial(next, itertools.count(0)) - self._goto_targets_map = IDMap() - self._launch_or_attach_request_done = False - - def process_net_command_json(self, py_db, json_contents, send_response=True): - ''' - Processes a debug adapter protocol json command. 
- ''' - - DEBUG = False - - try: - if isinstance(json_contents, bytes): - json_contents = json_contents.decode('utf-8') - - request = self.from_json(json_contents, update_ids_from_dap=True) - except Exception as e: - try: - loaded_json = json.loads(json_contents) - request = Request(loaded_json.get('command', ''), loaded_json['seq']) - except: - # There's not much we can do in this case... - pydev_log.exception('Error loading json: %s', json_contents) - return - - error_msg = str(e) - if error_msg.startswith("'") and error_msg.endswith("'"): - error_msg = error_msg[1:-1] - - # This means a failure processing the request (but we were able to load the seq, - # so, answer with a failure response). - def on_request(py_db, request): - error_response = { - 'type': 'response', - 'request_seq': request.seq, - 'success': False, - 'command': request.command, - 'message': error_msg, - } - return NetCommand(CMD_RETURN, 0, error_response, is_json=True) - - else: - if DebugInfoHolder.DEBUG_TRACE_LEVEL >= 1: - pydev_log.info('Process %s: %s\n' % ( - request.__class__.__name__, json.dumps(request.to_dict(update_ids_to_dap=True), indent=4, sort_keys=True),)) - - assert request.type == 'request' - method_name = 'on_%s_request' % (request.command.lower(),) - on_request = getattr(self, method_name, None) - if on_request is None: - print('Unhandled: %s not available in PyDevJsonCommandProcessor.\n' % (method_name,)) - return - - if DEBUG: - print('Handled in pydevd: %s (in PyDevJsonCommandProcessor).\n' % (method_name,)) - - with py_db._main_lock: - if request.__class__ == PydevdAuthorizeRequest: - authorize_request = request # : :type authorize_request: PydevdAuthorizeRequest - access_token = authorize_request.arguments.debugServerAccessToken - py_db.authentication.login(access_token) - - if not py_db.authentication.is_authenticated(): - response = Response( - request.seq, success=False, command=request.command, message='Client not authenticated.', body={}) - cmd = NetCommand(CMD_RETURN, 0, response, is_json=True) - py_db.writer.add_command(cmd) - return - - cmd = on_request(py_db, request) - if cmd is not None and send_response: - py_db.writer.add_command(cmd) - - def on_pydevdauthorize_request(self, py_db, request): - client_access_token = py_db.authentication.client_access_token - body = {'clientAccessToken': None} - if client_access_token: - body['clientAccessToken'] = client_access_token - - response = pydevd_base_schema.build_response(request, kwargs={'body': body}) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_initialize_request(self, py_db, request): - body = Capabilities( - # Supported. 
- supportsConfigurationDoneRequest=True, - supportsConditionalBreakpoints=True, - supportsHitConditionalBreakpoints=True, - supportsEvaluateForHovers=True, - supportsSetVariable=True, - supportsGotoTargetsRequest=True, - supportsCompletionsRequest=True, - supportsModulesRequest=True, - supportsExceptionOptions=True, - supportsValueFormattingOptions=True, - supportsExceptionInfoRequest=True, - supportTerminateDebuggee=True, - supportsDelayedStackTraceLoading=True, - supportsLogPoints=True, - supportsSetExpression=True, - supportsTerminateRequest=True, - supportsClipboardContext=True, - supportsFunctionBreakpoints=True, - - exceptionBreakpointFilters=[ - {'filter': 'raised', 'label': 'Raised Exceptions', 'default': False}, - {'filter': 'uncaught', 'label': 'Uncaught Exceptions', 'default': True}, - {"filter": "userUnhandled", "label": "User Uncaught Exceptions", "default": False}, - ], - - # Not supported. - supportsStepBack=False, - supportsRestartFrame=False, - supportsStepInTargetsRequest=True, - supportsRestartRequest=False, - supportsLoadedSourcesRequest=False, - supportsTerminateThreadsRequest=False, - supportsDataBreakpoints=False, - supportsReadMemoryRequest=False, - supportsDisassembleRequest=False, - additionalModuleColumns=[], - completionTriggerCharacters=[], - supportedChecksumAlgorithms=[], - ).to_dict() - - # Non-standard capabilities/info below. - body['supportsDebuggerProperties'] = True - - body['pydevd'] = pydevd_info = {} - pydevd_info['processId'] = os.getpid() - self.api.notify_initialize(py_db) - response = pydevd_base_schema.build_response(request, kwargs={'body': body}) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_configurationdone_request(self, py_db, request): - ''' - :param ConfigurationDoneRequest request: - ''' - if not self._launch_or_attach_request_done: - pydev_log.critical('Missing launch request or attach request before configuration done request.') - - self.api.run(py_db) - self.api.notify_configuration_done(py_db) - - configuration_done_response = pydevd_base_schema.build_response(request) - return NetCommand(CMD_RETURN, 0, configuration_done_response, is_json=True) - - def on_threads_request(self, py_db, request): - ''' - :param ThreadsRequest request: - ''' - return self.api.list_threads(py_db, request.seq) - - def on_terminate_request(self, py_db, request): - ''' - :param TerminateRequest request: - ''' - self._request_terminate_process(py_db) - response = pydevd_base_schema.build_response(request) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def _request_terminate_process(self, py_db): - self.api.request_terminate_process(py_db) - - def on_completions_request(self, py_db, request): - ''' - :param CompletionsRequest request: - ''' - arguments = request.arguments # : :type arguments: CompletionsArguments - seq = request.seq - text = arguments.text - frame_id = arguments.frameId - thread_id = py_db.suspended_frames_manager.get_thread_id_for_variable_reference( - frame_id) - - if thread_id is None: - body = CompletionsResponseBody([]) - variables_response = pydevd_base_schema.build_response( - request, - kwargs={ - 'body': body, - 'success': False, - 'message': 'Thread to get completions seems to have resumed already.' - }) - return NetCommand(CMD_RETURN, 0, variables_response, is_json=True) - - # Note: line and column are 1-based (convert to 0-based for pydevd). 
- column = arguments.column - 1 - - if arguments.line is None: - # line is optional - line = -1 - else: - line = arguments.line - 1 - - self.api.request_completions(py_db, seq, thread_id, frame_id, text, line=line, column=column) - - def _resolve_remote_root(self, local_root, remote_root): - if remote_root == '.': - cwd = os.getcwd() - append_pathsep = local_root.endswith('\\') or local_root.endswith('/') - return cwd + (os.path.sep if append_pathsep else '') - return remote_root - - def _set_debug_options(self, py_db, args, start_reason): - rules = args.get('rules') - stepping_resumes_all_threads = args.get('steppingResumesAllThreads', True) - self.api.set_stepping_resumes_all_threads(py_db, stepping_resumes_all_threads) - - terminate_child_processes = args.get('terminateChildProcesses', True) - self.api.set_terminate_child_processes(py_db, terminate_child_processes) - - terminate_keyboard_interrupt = args.get('onTerminate', 'kill') == 'KeyboardInterrupt' - self.api.set_terminate_keyboard_interrupt(py_db, terminate_keyboard_interrupt) - - variable_presentation = args.get('variablePresentation', None) - if isinstance(variable_presentation, dict): - - def get_variable_presentation(setting, default): - value = variable_presentation.get(setting, default) - if value not in ('group', 'inline', 'hide'): - pydev_log.info( - 'The value set for "%s" (%s) in the variablePresentation is not valid. Valid values are: "group", "inline", "hide"' % ( - setting, value,)) - value = default - - return value - - default = get_variable_presentation('all', 'group') - - special_presentation = get_variable_presentation('special', default) - function_presentation = get_variable_presentation('function', default) - class_presentation = get_variable_presentation('class', default) - protected_presentation = get_variable_presentation('protected', default) - - self.api.set_variable_presentation(py_db, self.api.VariablePresentation( - special_presentation, - function_presentation, - class_presentation, - protected_presentation - )) - - exclude_filters = [] - - if rules is not None: - exclude_filters = _convert_rules_to_exclude_filters( - rules, lambda msg: self.api.send_error_message(py_db, msg)) - - self.api.set_exclude_filters(py_db, exclude_filters) - - debug_options = _extract_debug_options( - args.get('options'), - args.get('debugOptions'), - ) - self._options.update_fom_debug_options(debug_options) - self._options.update_from_args(args) - - self.api.set_use_libraries_filter(py_db, self._options.just_my_code) - - if self._options.client_os: - self.api.set_ide_os(self._options.client_os) - - path_mappings = [] - for pathMapping in args.get('pathMappings', []): - localRoot = pathMapping.get('localRoot', '') - remoteRoot = pathMapping.get('remoteRoot', '') - remoteRoot = self._resolve_remote_root(localRoot, remoteRoot) - if (localRoot != '') and (remoteRoot != ''): - path_mappings.append((localRoot, remoteRoot)) - - if bool(path_mappings): - pydevd_file_utils.setup_client_server_paths(path_mappings) - - resolve_symlinks = args.get('resolveSymlinks', None) - if resolve_symlinks is not None: - pydevd_file_utils.set_resolve_symlinks(resolve_symlinks) - - redirecting = args.get("isOutputRedirected") - if self._options.redirect_output: - py_db.enable_output_redirection(True, True) - redirecting = True - else: - py_db.enable_output_redirection(False, False) - - py_db.is_output_redirected = redirecting - - self.api.set_show_return_values(py_db, self._options.show_return_value) - - if not self._options.break_system_exit_zero: 
- ignore_system_exit_codes = [0, None] - if self._options.django_debug or self._options.flask_debug: - ignore_system_exit_codes += [3] - - self.api.set_ignore_system_exit_codes(py_db, ignore_system_exit_codes) - - auto_reload = args.get('autoReload', {}) - if not isinstance(auto_reload, dict): - pydev_log.info('Expected autoReload to be a dict. Received: %s' % (auto_reload,)) - auto_reload = {} - - enable_auto_reload = auto_reload.get('enable', False) - watch_dirs = auto_reload.get('watchDirectories') - if not watch_dirs: - watch_dirs = [] - # Note: by default this is no longer done because on some cases there are entries in the PYTHONPATH - # such as the home directory or /python/x64, where the site packages are in /python/x64/libs, so, - # we only watch the current working directory as well as executed script. - # check = getattr(sys, 'path', [])[:] - # # By default only watch directories that are in the project roots / - # # program dir (if available), sys.argv[0], as well as the current dir (we don't want to - # # listen to the whole site-packages by default as it can be huge). - # watch_dirs = [pydevd_file_utils.absolute_path(w) for w in check] - # watch_dirs = [w for w in watch_dirs if py_db.in_project_roots_filename_uncached(w) and os.path.isdir(w)] - - program = args.get('program') - if program: - if os.path.isdir(program): - watch_dirs.append(program) - else: - watch_dirs.append(os.path.dirname(program)) - watch_dirs.append(os.path.abspath('.')) - - argv = getattr(sys, 'argv', []) - if argv: - f = argv[0] - if f: # argv[0] could be None (https://github.com/microsoft/debugpy/issues/987) - if os.path.isdir(f): - watch_dirs.append(f) - else: - watch_dirs.append(os.path.dirname(f)) - - if not isinstance(watch_dirs, (list, set, tuple)): - watch_dirs = (watch_dirs,) - new_watch_dirs = set() - for w in watch_dirs: - try: - new_watch_dirs.add(pydevd_file_utils.get_path_with_real_case(pydevd_file_utils.absolute_path(w))) - except Exception: - pydev_log.exception('Error adding watch dir: %s', w) - watch_dirs = new_watch_dirs - - poll_target_time = auto_reload.get('pollingInterval', 1) - exclude_patterns = auto_reload.get('exclude', ('**/.git/**', '**/__pycache__/**', '**/node_modules/**', '**/.metadata/**', '**/site-packages/**')) - include_patterns = auto_reload.get('include', ('**/*.py', '**/*.pyw')) - self.api.setup_auto_reload_watcher( - py_db, enable_auto_reload, watch_dirs, poll_target_time, exclude_patterns, include_patterns) - - if self._options.stop_on_entry and start_reason == 'launch': - self.api.stop_on_entry() - - self.api.set_gui_event_loop(py_db, self._options.gui_event_loop) - - def _send_process_event(self, py_db, start_method): - argv = getattr(sys, 'argv', []) - if len(argv) > 0: - name = argv[0] - else: - name = '' - - if isinstance(name, bytes): - name = name.decode(file_system_encoding, 'replace') - name = name.encode('utf-8') - - body = ProcessEventBody( - name=name, - systemProcessId=os.getpid(), - isLocalProcess=True, - startMethod=start_method, - ) - event = ProcessEvent(body) - py_db.writer.add_command(NetCommand(CMD_PROCESS_EVENT, 0, event, is_json=True)) - - def _handle_launch_or_attach_request(self, py_db, request, start_reason): - self._send_process_event(py_db, start_reason) - self._launch_or_attach_request_done = True - self.api.set_enable_thread_notifications(py_db, True) - self._set_debug_options(py_db, request.arguments.kwargs, start_reason=start_reason) - response = pydevd_base_schema.build_response(request) - - initialized_event = InitializedEvent() - 
py_db.writer.add_command(NetCommand(CMD_RETURN, 0, initialized_event, is_json=True)) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_launch_request(self, py_db, request): - ''' - :param LaunchRequest request: - ''' - return self._handle_launch_or_attach_request(py_db, request, start_reason='launch') - - def on_attach_request(self, py_db, request): - ''' - :param AttachRequest request: - ''' - return self._handle_launch_or_attach_request(py_db, request, start_reason='attach') - - def on_pause_request(self, py_db, request): - ''' - :param PauseRequest request: - ''' - arguments = request.arguments # : :type arguments: PauseArguments - thread_id = arguments.threadId - - self.api.request_suspend_thread(py_db, thread_id=thread_id) - - response = pydevd_base_schema.build_response(request) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_continue_request(self, py_db, request): - ''' - :param ContinueRequest request: - ''' - arguments = request.arguments # : :type arguments: ContinueArguments - thread_id = arguments.threadId - - def on_resumed(): - body = {'allThreadsContinued': thread_id == '*'} - response = pydevd_base_schema.build_response(request, kwargs={'body': body}) - cmd = NetCommand(CMD_RETURN, 0, response, is_json=True) - py_db.writer.add_command(cmd) - - # Only send resumed notification when it has actually resumed! - # (otherwise the user could send a continue, receive the notification and then - # request a new pause which would be paused without sending any notification as - # it didn't really run in the first place). - py_db.threads_suspended_single_notification.add_on_resumed_callback(on_resumed) - self.api.request_resume_thread(thread_id) - - def on_next_request(self, py_db, request): - ''' - :param NextRequest request: - ''' - arguments = request.arguments # : :type arguments: NextArguments - thread_id = arguments.threadId - - if py_db.get_use_libraries_filter(): - step_cmd_id = CMD_STEP_OVER_MY_CODE - else: - step_cmd_id = CMD_STEP_OVER - - self.api.request_step(py_db, thread_id, step_cmd_id) - - response = pydevd_base_schema.build_response(request) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_stepin_request(self, py_db, request): - ''' - :param StepInRequest request: - ''' - arguments = request.arguments # : :type arguments: StepInArguments - thread_id = arguments.threadId - - target_id = arguments.targetId - if target_id is not None: - thread = pydevd_find_thread_by_id(thread_id) - if thread is None: - response = Response( - request_seq=request.seq, - success=False, - command=request.command, - message='Unable to find thread from thread_id: %s' % (thread_id,), - body={}, - ) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - info = set_additional_thread_info(thread) - target_id_to_smart_step_into_variant = info.target_id_to_smart_step_into_variant - if not target_id_to_smart_step_into_variant: - variables_response = pydevd_base_schema.build_response( - request, - kwargs={ - 'success': False, - 'message': 'Unable to step into target (no targets are saved in the thread info).' 
- }) - return NetCommand(CMD_RETURN, 0, variables_response, is_json=True) - - variant = target_id_to_smart_step_into_variant.get(target_id) - if variant is not None: - parent = variant.parent - if parent is not None: - self.api.request_smart_step_into(py_db, request.seq, thread_id, parent.offset, variant.offset) - else: - self.api.request_smart_step_into(py_db, request.seq, thread_id, variant.offset, -1) - else: - variables_response = pydevd_base_schema.build_response( - request, - kwargs={ - 'success': False, - 'message': 'Unable to find step into target %s. Available targets: %s' % ( - target_id, target_id_to_smart_step_into_variant) - }) - return NetCommand(CMD_RETURN, 0, variables_response, is_json=True) - - else: - if py_db.get_use_libraries_filter(): - step_cmd_id = CMD_STEP_INTO_MY_CODE - else: - step_cmd_id = CMD_STEP_INTO - - self.api.request_step(py_db, thread_id, step_cmd_id) - - response = pydevd_base_schema.build_response(request) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_stepintargets_request(self, py_db, request): - ''' - :param StepInTargetsRequest request: - ''' - frame_id = request.arguments.frameId - thread_id = py_db.suspended_frames_manager.get_thread_id_for_variable_reference( - frame_id) - - if thread_id is None: - body = StepInTargetsResponseBody([]) - variables_response = pydevd_base_schema.build_response( - request, - kwargs={ - 'body': body, - 'success': False, - 'message': 'Unable to get thread_id from frame_id (thread to get step in targets seems to have resumed already).' - }) - return NetCommand(CMD_RETURN, 0, variables_response, is_json=True) - - py_db.post_method_as_internal_command( - thread_id, internal_get_step_in_targets_json, request.seq, thread_id, frame_id, request, set_additional_thread_info) - - def on_stepout_request(self, py_db, request): - ''' - :param StepOutRequest request: - ''' - arguments = request.arguments # : :type arguments: StepOutArguments - thread_id = arguments.threadId - - if py_db.get_use_libraries_filter(): - step_cmd_id = CMD_STEP_RETURN_MY_CODE - else: - step_cmd_id = CMD_STEP_RETURN - - self.api.request_step(py_db, thread_id, step_cmd_id) - - response = pydevd_base_schema.build_response(request) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def _get_hit_condition_expression(self, hit_condition): - '''Following hit condition values are supported - - * x or == x when breakpoint is hit x times - * >= x when breakpoint is hit more than or equal to x times - * % x when breakpoint is hit multiple of x times - - Returns '@HIT@ == x' where @HIT@ will be replaced by number of hits - ''' - if not hit_condition: - return None - - expr = hit_condition.strip() - try: - int(expr) - return '@HIT@ == {}'.format(expr) - except ValueError: - pass - - if expr.startswith('%'): - return '@HIT@ {} == 0'.format(expr) - - if expr.startswith('==') or \ - expr.startswith('>') or \ - expr.startswith('<'): - return '@HIT@ {}'.format(expr) - - return hit_condition - - def on_disconnect_request(self, py_db, request): - ''' - :param DisconnectRequest request: - ''' - if request.arguments.terminateDebuggee: - self._request_terminate_process(py_db) - response = pydevd_base_schema.build_response(request) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - self._launch_or_attach_request_done = False - py_db.enable_output_redirection(False, False) - self.api.request_disconnect(py_db, resume_threads=True) - - response = pydevd_base_schema.build_response(request) - return NetCommand(CMD_RETURN, 0, response, 
is_json=True) - - def _verify_launch_or_attach_done(self, request): - if not self._launch_or_attach_request_done: - # Note that to validate the breakpoints we need the launch request to be done already - # (otherwise the filters wouldn't be set for the breakpoint validation). - if request.command == 'setFunctionBreakpoints': - body = SetFunctionBreakpointsResponseBody([]) - else: - body = SetBreakpointsResponseBody([]) - response = pydevd_base_schema.build_response( - request, - kwargs={ - 'body': body, - 'success': False, - 'message': 'Breakpoints may only be set after the launch request is received.' - }) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_setfunctionbreakpoints_request(self, py_db, request): - ''' - :param SetFunctionBreakpointsRequest request: - ''' - response = self._verify_launch_or_attach_done(request) - if response is not None: - return response - - arguments = request.arguments # : :type arguments: SetFunctionBreakpointsArguments - function_breakpoints = [] - suspend_policy = 'ALL' - - # Not currently covered by the DAP. - is_logpoint = False - expression = None - - breakpoints_set = [] - for bp in arguments.breakpoints: - hit_condition = self._get_hit_condition_expression(bp.get('hitCondition')) - condition = bp.get('condition') - - function_breakpoints.append( - FunctionBreakpoint(bp['name'], condition, expression, suspend_policy, hit_condition, is_logpoint)) - - # Note: always succeeds. - breakpoints_set.append(pydevd_schema.Breakpoint( - verified=True, id=self._next_breakpoint_id()).to_dict()) - - self.api.set_function_breakpoints(py_db, function_breakpoints) - - body = {'breakpoints': breakpoints_set} - set_breakpoints_response = pydevd_base_schema.build_response(request, kwargs={'body': body}) - return NetCommand(CMD_RETURN, 0, set_breakpoints_response, is_json=True) - - def on_setbreakpoints_request(self, py_db, request): - ''' - :param SetBreakpointsRequest request: - ''' - response = self._verify_launch_or_attach_done(request) - if response is not None: - return response - - arguments = request.arguments # : :type arguments: SetBreakpointsArguments - # TODO: Path is optional here it could be source reference. - filename = self.api.filename_to_str(arguments.source.path) - func_name = 'None' - - self.api.remove_all_breakpoints(py_db, filename) - - btype = 'python-line' - suspend_policy = 'ALL' - - if not filename.lower().endswith('.py'): # Note: check based on original file, not mapping. 
- if self._options.django_debug: - btype = 'django-line' - elif self._options.flask_debug: - btype = 'jinja2-line' - - breakpoints_set = [] - - for source_breakpoint in arguments.breakpoints: - source_breakpoint = SourceBreakpoint(**source_breakpoint) - line = source_breakpoint.line - condition = source_breakpoint.condition - breakpoint_id = self._next_breakpoint_id() - - hit_condition = self._get_hit_condition_expression(source_breakpoint.hitCondition) - log_message = source_breakpoint.logMessage - if not log_message: - is_logpoint = None - expression = None - else: - is_logpoint = True - expression = convert_dap_log_message_to_expression(log_message) - - on_changed_breakpoint_state = partial(self._on_changed_breakpoint_state, py_db, arguments.source) - result = self.api.add_breakpoint( - py_db, filename, btype, breakpoint_id, line, condition, func_name, expression, - suspend_policy, hit_condition, is_logpoint, adjust_line=True, on_changed_breakpoint_state=on_changed_breakpoint_state) - - bp = self._create_breakpoint_from_add_breakpoint_result(py_db, arguments.source, breakpoint_id, result) - breakpoints_set.append(bp) - - body = {'breakpoints': breakpoints_set} - set_breakpoints_response = pydevd_base_schema.build_response(request, kwargs={'body': body}) - return NetCommand(CMD_RETURN, 0, set_breakpoints_response, is_json=True) - - def _on_changed_breakpoint_state(self, py_db, source, breakpoint_id, result): - bp = self._create_breakpoint_from_add_breakpoint_result(py_db, source, breakpoint_id, result) - body = BreakpointEventBody( - reason='changed', - breakpoint=bp, - ) - event = BreakpointEvent(body) - event_id = 0 # Actually ignored in this case - py_db.writer.add_command(NetCommand(event_id, 0, event, is_json=True)) - - def _create_breakpoint_from_add_breakpoint_result(self, py_db, source, breakpoint_id, result): - error_code = result.error_code - - if error_code: - if error_code == self.api.ADD_BREAKPOINT_FILE_NOT_FOUND: - error_msg = 'Breakpoint in file that does not exist.' - - elif error_code == self.api.ADD_BREAKPOINT_FILE_EXCLUDED_BY_FILTERS: - error_msg = 'Breakpoint in file excluded by filters.' - if py_db.get_use_libraries_filter(): - error_msg += ('\nNote: may be excluded because of "justMyCode" option (default == true).' - 'Try setting \"justMyCode\": false in the debug configuration (e.g., launch.json).\n') - - elif error_code == self.api.ADD_BREAKPOINT_LAZY_VALIDATION: - error_msg = 'Waiting for code to be loaded to verify breakpoint.' - - elif error_code == self.api.ADD_BREAKPOINT_INVALID_LINE: - error_msg = 'Breakpoint added to invalid line.' - - else: - # Shouldn't get here. - error_msg = 'Breakpoint not validated (reason unknown -- please report as bug).' - - return pydevd_schema.Breakpoint( - verified=False, id=breakpoint_id, line=result.translated_line, message=error_msg, source=source).to_dict() - else: - return pydevd_schema.Breakpoint( - verified=True, id=breakpoint_id, line=result.translated_line, source=source).to_dict() - - def on_setexceptionbreakpoints_request(self, py_db, request): - ''' - :param SetExceptionBreakpointsRequest request: - ''' - # : :type arguments: SetExceptionBreakpointsArguments - arguments = request.arguments - filters = arguments.filters - exception_options = arguments.exceptionOptions - self.api.remove_all_exception_breakpoints(py_db) - - # Can't set these in the DAP. 
- condition = None - expression = None - notify_on_first_raise_only = False - - ignore_libraries = 1 if py_db.get_use_libraries_filter() else 0 - - if exception_options: - break_raised = False - break_uncaught = False - - for option in exception_options: - option = ExceptionOptions(**option) - if not option.path: - continue - - # never: never breaks - # - # always: always breaks - # - # unhandled: breaks when exception unhandled - # - # userUnhandled: breaks if the exception is not handled by user code - - notify_on_handled_exceptions = 1 if option.breakMode == 'always' else 0 - notify_on_unhandled_exceptions = 1 if option.breakMode == 'unhandled' else 0 - notify_on_user_unhandled_exceptions = 1 if option.breakMode == 'userUnhandled' else 0 - exception_paths = option.path - break_raised |= notify_on_handled_exceptions - break_uncaught |= notify_on_unhandled_exceptions - - exception_names = [] - if len(exception_paths) == 0: - continue - - elif len(exception_paths) == 1: - if 'Python Exceptions' in exception_paths[0]['names']: - exception_names = ['BaseException'] - - else: - path_iterator = iter(exception_paths) - if 'Python Exceptions' in next(path_iterator)['names']: - for path in path_iterator: - for ex_name in path['names']: - exception_names.append(ex_name) - - for exception_name in exception_names: - self.api.add_python_exception_breakpoint( - py_db, - exception_name, - condition, - expression, - notify_on_handled_exceptions, - notify_on_unhandled_exceptions, - notify_on_user_unhandled_exceptions, - notify_on_first_raise_only, - ignore_libraries - ) - - else: - break_raised = 'raised' in filters - break_uncaught = 'uncaught' in filters - break_user = 'userUnhandled' in filters - if break_raised or break_uncaught or break_user: - notify_on_handled_exceptions = 1 if break_raised else 0 - notify_on_unhandled_exceptions = 1 if break_uncaught else 0 - notify_on_user_unhandled_exceptions = 1 if break_user else 0 - exception = 'BaseException' - - self.api.add_python_exception_breakpoint( - py_db, - exception, - condition, - expression, - notify_on_handled_exceptions, - notify_on_unhandled_exceptions, - notify_on_user_unhandled_exceptions, - notify_on_first_raise_only, - ignore_libraries - ) - - if break_raised: - btype = None - if self._options.django_debug: - btype = 'django' - elif self._options.flask_debug: - btype = 'jinja2' - - if btype: - self.api.add_plugins_exception_breakpoint( - py_db, btype, 'BaseException') # Note: Exception name could be anything here. - - # Note: no body required on success. 
- set_breakpoints_response = pydevd_base_schema.build_response(request) - return NetCommand(CMD_RETURN, 0, set_breakpoints_response, is_json=True) - - def on_stacktrace_request(self, py_db, request): - ''' - :param StackTraceRequest request: - ''' - # : :type stack_trace_arguments: StackTraceArguments - stack_trace_arguments = request.arguments - thread_id = stack_trace_arguments.threadId - - if stack_trace_arguments.startFrame: - start_frame = int(stack_trace_arguments.startFrame) - else: - start_frame = 0 - - if stack_trace_arguments.levels: - levels = int(stack_trace_arguments.levels) - else: - levels = 0 - - fmt = stack_trace_arguments.format - if hasattr(fmt, 'to_dict'): - fmt = fmt.to_dict() - self.api.request_stack(py_db, request.seq, thread_id, fmt=fmt, start_frame=start_frame, levels=levels) - - def on_exceptioninfo_request(self, py_db, request): - ''' - :param ExceptionInfoRequest request: - ''' - # : :type exception_into_arguments: ExceptionInfoArguments - exception_into_arguments = request.arguments - thread_id = exception_into_arguments.threadId - max_frames = self._options.max_exception_stack_frames - thread = pydevd_find_thread_by_id(thread_id) - if thread is not None: - self.api.request_exception_info_json(py_db, request, thread_id, thread, max_frames) - else: - response = Response( - request_seq=request.seq, - success=False, - command=request.command, - message='Unable to find thread from thread_id: %s' % (thread_id,), - body={}, - ) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_scopes_request(self, py_db, request): - ''' - Scopes are the top-level items which appear for a frame (so, we receive the frame id - and provide the scopes it has). - - :param ScopesRequest request: - ''' - frame_id = request.arguments.frameId - - variables_reference = frame_id - scopes = [ - Scope('Locals', ScopeRequest(int(variables_reference), 'locals'), False, presentationHint='locals'), - Scope('Globals', ScopeRequest(int(variables_reference), 'globals'), False), - ] - body = ScopesResponseBody(scopes) - scopes_response = pydevd_base_schema.build_response(request, kwargs={'body': body}) - return NetCommand(CMD_RETURN, 0, scopes_response, is_json=True) - - def on_evaluate_request(self, py_db, request): - ''' - :param EvaluateRequest request: - ''' - # : :type arguments: EvaluateArguments - arguments = request.arguments - - if arguments.frameId is None: - self.api.request_exec_or_evaluate_json(py_db, request, thread_id='*') - else: - thread_id = py_db.suspended_frames_manager.get_thread_id_for_variable_reference( - arguments.frameId) - - if thread_id is not None: - self.api.request_exec_or_evaluate_json( - py_db, request, thread_id) - else: - body = EvaluateResponseBody('', 0) - response = pydevd_base_schema.build_response( - request, - kwargs={ - 'body': body, - 'success': False, - 'message': 'Unable to find thread for evaluation.' - }) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_setexpression_request(self, py_db, request): - # : :type arguments: SetExpressionArguments - arguments = request.arguments - - thread_id = py_db.suspended_frames_manager.get_thread_id_for_variable_reference( - arguments.frameId) - - if thread_id is not None: - self.api.request_set_expression_json(py_db, request, thread_id) - else: - body = SetExpressionResponseBody('') - response = pydevd_base_schema.build_response( - request, - kwargs={ - 'body': body, - 'success': False, - 'message': 'Unable to find thread to set expression.' 
- }) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_variables_request(self, py_db, request): - ''' - Variables can be asked whenever some place returned a variables reference (so, it - can be a scope gotten from on_scopes_request, the result of some evaluation, etc.). - - Note that in the DAP the variables reference requires a unique int... the way this works for - pydevd is that an instance is generated for that specific variable reference and we use its - id(instance) to identify it to make sure all items are unique (and the actual {id->instance} - is added to a dict which is only valid while the thread is suspended and later cleared when - the related thread resumes execution). - - see: SuspendedFramesManager - - :param VariablesRequest request: - ''' - arguments = request.arguments # : :type arguments: VariablesArguments - variables_reference = arguments.variablesReference - - if isinstance(variables_reference, ScopeRequest): - variables_reference = variables_reference.variable_reference - - thread_id = py_db.suspended_frames_manager.get_thread_id_for_variable_reference( - variables_reference) - if thread_id is not None: - self.api.request_get_variable_json(py_db, request, thread_id) - else: - variables = [] - body = VariablesResponseBody(variables) - variables_response = pydevd_base_schema.build_response(request, kwargs={ - 'body': body, - 'success': False, - 'message': 'Unable to find thread to evaluate variable reference.' - }) - return NetCommand(CMD_RETURN, 0, variables_response, is_json=True) - - def on_setvariable_request(self, py_db, request): - arguments = request.arguments # : :type arguments: SetVariableArguments - variables_reference = arguments.variablesReference - - if isinstance(variables_reference, ScopeRequest): - variables_reference = variables_reference.variable_reference - - if arguments.name.startswith('(return) '): - response = pydevd_base_schema.build_response( - request, - kwargs={ - 'body': SetVariableResponseBody(''), - 'success': False, - 'message': 'Cannot change return value' - }) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - thread_id = py_db.suspended_frames_manager.get_thread_id_for_variable_reference( - variables_reference) - - if thread_id is not None: - self.api.request_change_variable_json(py_db, request, thread_id) - else: - response = pydevd_base_schema.build_response( - request, - kwargs={ - 'body': SetVariableResponseBody(''), - 'success': False, - 'message': 'Unable to find thread to evaluate variable reference.' - }) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_modules_request(self, py_db, request): - modules_manager = py_db.cmd_factory.modules_manager # : :type modules_manager: ModulesManager - modules_info = modules_manager.get_modules_info() - body = ModulesResponseBody(modules_info) - variables_response = pydevd_base_schema.build_response(request, kwargs={'body': body}) - return NetCommand(CMD_RETURN, 0, variables_response, is_json=True) - - def on_source_request(self, py_db, request): - ''' - :param SourceRequest request: - ''' - source_reference = request.arguments.sourceReference - server_filename = None - content = None - - if source_reference != 0: - server_filename = pydevd_file_utils.get_server_filename_from_source_reference(source_reference) - if not server_filename: - server_filename = pydevd_file_utils.get_source_reference_filename_from_linecache(source_reference) - - if server_filename: - # Try direct file access first - it's much faster when available. 
- try: - with open(server_filename, 'r') as stream: - content = stream.read() - except: - pass - - if content is None: - # File might not exist at all, or we might not have a permission to read it, - # but it might also be inside a zipfile, or an IPython cell. In this case, - # linecache might still be able to retrieve the source. - lines = (linecache.getline(server_filename, i) for i in itertools.count(1)) - lines = itertools.takewhile(bool, lines) # empty lines are '\n', EOF is '' - - # If we didn't get at least one line back, reset it to None so that it's - # reported as error below, and not as an empty file. - content = ''.join(lines) or None - - if content is None: - frame_id = pydevd_file_utils.get_frame_id_from_source_reference(source_reference) - pydev_log.debug('Found frame id: %s for source reference: %s', frame_id, source_reference) - if frame_id is not None: - try: - content = self.api.get_decompiled_source_from_frame_id(py_db, frame_id) - except Exception: - pydev_log.exception('Error getting source for frame id: %s', frame_id) - content = None - - body = SourceResponseBody(content or '') - response_args = {'body': body} - - if content is None: - if source_reference == 0: - message = 'Source unavailable' - elif server_filename: - message = 'Unable to retrieve source for %s' % (server_filename,) - else: - message = 'Invalid sourceReference %d' % (source_reference,) - response_args.update({'success': False, 'message': message}) - - response = pydevd_base_schema.build_response(request, kwargs=response_args) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_gototargets_request(self, py_db, request): - path = request.arguments.source.path - line = request.arguments.line - target_id = self._goto_targets_map.obtain_key((path, line)) - target = { - 'id': target_id, - 'label': '%s:%s' % (path, line), - 'line': line - } - body = GotoTargetsResponseBody(targets=[target]) - response_args = {'body': body} - response = pydevd_base_schema.build_response(request, kwargs=response_args) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_goto_request(self, py_db, request): - target_id = int(request.arguments.targetId) - thread_id = request.arguments.threadId - try: - path, line = self._goto_targets_map.obtain_value(target_id) - except KeyError: - response = pydevd_base_schema.build_response( - request, - kwargs={ - 'body': {}, - 'success': False, - 'message': 'Unknown goto target id: %d' % (target_id,), - }) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - self.api.request_set_next(py_db, request.seq, thread_id, CMD_SET_NEXT_STATEMENT, path, line, '*') - # See 'NetCommandFactoryJson.make_set_next_stmnt_status_message' for response - return None - - def on_setdebuggerproperty_request(self, py_db, request): - args = request.arguments # : :type args: SetDebuggerPropertyArguments - if args.ideOS is not None: - self.api.set_ide_os(args.ideOS) - - if args.dontTraceStartPatterns is not None and args.dontTraceEndPatterns is not None: - start_patterns = tuple(args.dontTraceStartPatterns) - end_patterns = tuple(args.dontTraceEndPatterns) - self.api.set_dont_trace_start_end_patterns(py_db, start_patterns, end_patterns) - - if args.skipSuspendOnBreakpointException is not None: - py_db.skip_suspend_on_breakpoint_exception = tuple( - get_exception_class(x) for x in args.skipSuspendOnBreakpointException) - - if args.skipPrintBreakpointException is not None: - py_db.skip_print_breakpoint_exception = tuple( - get_exception_class(x) for x in 
args.skipPrintBreakpointException) - - if args.multiThreadsSingleNotification is not None: - py_db.multi_threads_single_notification = args.multiThreadsSingleNotification - - # TODO: Support other common settings. Note that not all of these might be relevant to python. - # JustMyCodeStepping: 0 or 1 - # AllowOutOfProcessSymbols: 0 or 1 - # DisableJITOptimization: 0 or 1 - # InterpreterOptions: 0 or 1 - # StopOnExceptionCrossingManagedBoundary: 0 or 1 - # WarnIfNoUserCodeOnLaunch: 0 or 1 - # EnableStepFiltering: true of false - - response = pydevd_base_schema.build_response(request, kwargs={'body': {}}) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_pydevdsysteminfo_request(self, py_db, request): - try: - pid = os.getpid() - except AttributeError: - pid = None - - # It's possible to have the ppid reported from args. In this case, use that instead of the - # real ppid (athough we're using `ppid`, what we want in meaning is the `launcher_pid` -- - # so, if a python process is launched from another python process, consider that process the - # parent and not any intermediary stubs). - - ppid = py_db.get_arg_ppid() or self.api.get_ppid() - - try: - impl_desc = platform.python_implementation() - except AttributeError: - impl_desc = PY_IMPL_NAME - - py_info = pydevd_schema.PydevdPythonInfo( - version=PY_VERSION_STR, - implementation=pydevd_schema.PydevdPythonImplementationInfo( - name=PY_IMPL_NAME, - version=PY_IMPL_VERSION_STR, - description=impl_desc, - ) - ) - platform_info = pydevd_schema.PydevdPlatformInfo(name=sys.platform) - process_info = pydevd_schema.PydevdProcessInfo( - pid=pid, - ppid=ppid, - executable=sys.executable, - bitness=64 if IS_64BIT_PROCESS else 32, - ) - pydevd_info = pydevd_schema.PydevdInfo( - usingCython=USING_CYTHON, - usingFrameEval=USING_FRAME_EVAL, - ) - body = { - 'python': py_info, - 'platform': platform_info, - 'process': process_info, - 'pydevd': pydevd_info, - } - response = pydevd_base_schema.build_response(request, kwargs={'body': body}) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - def on_setpydevdsourcemap_request(self, py_db, request): - args = request.arguments # : :type args: SetPydevdSourceMapArguments - SourceMappingEntry = self.api.SourceMappingEntry - - path = args.source.path - source_maps = args.pydevdSourceMaps - # : :type source_map: PydevdSourceMap - new_mappings = [ - SourceMappingEntry( - source_map['line'], - source_map['endLine'], - source_map['runtimeLine'], - self.api.filename_to_str(source_map['runtimeSource']['path']) - ) for source_map in source_maps - ] - - error_msg = self.api.set_source_mapping(py_db, path, new_mappings) - if error_msg: - response = pydevd_base_schema.build_response( - request, - kwargs={ - 'body': {}, - 'success': False, - 'message': error_msg, - }) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - - response = pydevd_base_schema.build_response(request) - return NetCommand(CMD_RETURN, 0, response, is_json=True) - diff --git a/spaces/TIGER-Lab/TIGERScore/utils.py b/spaces/TIGER-Lab/TIGERScore/utils.py deleted file mode 100644 index e6471908415a530a928a220d86d16103aa5c5eda..0000000000000000000000000000000000000000 --- a/spaces/TIGER-Lab/TIGERScore/utils.py +++ /dev/null @@ -1,85 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForCausalLM -from string import Template -import torch - -FINETUNE_INST = "You are evaluating errors in a model-generated output for a(an) ${task} task." 
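
Aside (not part of the deleted file): `FINETUNE_INST` above and `FINETUNE_INPUT` below are plain stdlib `string.Template` strings, so the `${task}`-style placeholders are filled with `substitute()`. A minimal sketch of that mechanism, mirroring the `Template(...).substitute(...)` calls in the `generate()` helper later in this file:

```python
# Minimal sketch: filling the ${task} placeholder with stdlib string.Template.
from string import Template

FINETUNE_INST = "You are evaluating errors in a model-generated output for a(an) ${task} task."
print(Template(FINETUNE_INST).substitute(task="translation"))
# -> You are evaluating errors in a model-generated output for a(an) translation task.
```
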
-FINETUNE_INPUT = """\ -Task instruction: ${generation_instruction} -Source: ${input_context} -Model-generated Output: ${hypothesis_output} - -Based on the given task instruction and source, identify errors in this model-generated output. -For each error you give in the response, please also elaborate the following information: -- error location (the words that are wrong in the output) -- error aspect it belongs to. -- explanation why it's an error, and the correction suggestions. -- severity of the error ("Major" or "Minor"). -- reduction of score (between 0.5 and 5 given the severity of the error) - -Your evaluation output: -""" - -TIGERScore_model_map = { - "7b": "TIGER-Lab/TIGERScore-7B-V1.0", - "13b": "TIGER-Lab/TIGERScore-13B-V1.0", -} -tigerscore_model = None -tigerscore_tokenizer = None - -tasks = [ - "translation", - "summarization", - "data2text", - "mathQA", - "long-form QA", - "instruction-following", -] - -def load_tigerscore(model_size): - assert model_size in TIGERScore_model_map - model_name = TIGERScore_model_map[model_size] - global tigerscore_model, tigerscore_tokenizer - tigerscore_model = AutoModelForCausalLM.from_pretrained( - model_name, - torch_dtype=torch.bfloat16, - device_map="auto" - ) - tigerscore_tokenizer = AutoTokenizer.from_pretrained( - model_name, - use_fast=True - ) - -def generate(task, input_context, generation_instruction, hypo_output, **generate_kwargs): - inst_part = Template(FINETUNE_INST) - inst_part = inst_part.substitute(task=task) - input_part = Template(FINETUNE_INPUT) - input_part = input_part.substitute( - generation_instruction=generation_instruction, - input_context=input_context, - hypothesis_output=hypo_output - ) - prompt = (inst_part + "\n" + input_part).strip("\n ") + "\n" - encodings = tigerscore_tokenizer(prompt, return_tensors="pt") - input_ids = encodings["input_ids"].to(tigerscore_model.device) - attention_mask = encodings["attention_mask"].to(tigerscore_model.device) - gen_params = { - "input_ids": input_ids, - "attention_mask": attention_mask, - "max_new_tokens": 512, - "do_sample": True, - "top_k": 1, - "num_return_sequences": 1, - } - gen_params.update(generate_kwargs) - output = tigerscore_model.generate(**gen_params) - output = tigerscore_tokenizer.decode(output[0][len(input_ids[0]):], skip_special_tokens=True) - return output - -if __name__ == "__main__": - task = "translation" - input_context = "Der künftige EM-Cheforganisator Philipp Lahm soll laut Grindel im DFB-Präsidium mitarbeiten." - generation_instruction = "Translate the following text from German to English." - hypo_output = "According to Grindel, the future head of the European Championships, Philipp Lahm, is to participate in the DFB Presidency." 
- output = generate(task, input_context, generation_instruction, hypo_output) - print(output) - diff --git a/spaces/TNR-5/Image-Semantic-Searchj/imglib.py b/spaces/TNR-5/Image-Semantic-Searchj/imglib.py deleted file mode 100644 index d13179ce01d62ecc518fe81e90ad280a5f262853..0000000000000000000000000000000000000000 --- a/spaces/TNR-5/Image-Semantic-Searchj/imglib.py +++ /dev/null @@ -1,191 +0,0 @@ -from html import escape -import re -import streamlit as st -import pandas as pd, numpy as np -from transformers import CLIPProcessor, CLIPModel -from st_clickable_images import clickable_images - - -num_results=75 - -@st.cache( - show_spinner=False, - hash_funcs={ - CLIPModel: lambda _: None, - CLIPProcessor: lambda _: None, - dict: lambda _: None, - }, -) -def load(): - model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14") - processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14") - df = {0: pd.read_csv("data.csv"), 1: pd.read_csv("data2.csv")} - embeddings = {0: np.load("embeddings.npy"), 1: np.load("embeddings2.npy")} - for k in [0, 1]: - embeddings[k] = embeddings[k] / np.linalg.norm( - embeddings[k], axis=1, keepdims=True - ) - return model, processor, df, embeddings - - -model, processor, df, embeddings = load() -source = {0: "\nSource: Unsplash", 1: "\nSource: The Movie Database (TMDB)"} - - -def compute_text_embeddings(list_of_strings): - inputs = processor(text=list_of_strings, return_tensors="pt", padding=True) - result = model.get_text_features(**inputs).detach().numpy() - return result / np.linalg.norm(result, axis=1, keepdims=True) - - -def image_search(query, corpus, n_results=num_results): - positive_embeddings = None - - def concatenate_embeddings(e1, e2): - if e1 is None: - return e2 - else: - return np.concatenate((e1, e2), axis=0) - - splitted_query = query.split("EXCLUDING ") - dot_product = 0 - k = 0 if corpus == "Unsplash" else 1 - if len(splitted_query[0]) > 0: - positive_queries = splitted_query[0].split(";") - for positive_query in positive_queries: - match = re.match(r"\[(Movies|Unsplash):(\d{1,5})\](.*)", positive_query) - if match: - corpus2, idx, remainder = match.groups() - idx, remainder = int(idx), remainder.strip() - k2 = 0 if corpus2 == "Unsplash" else 1 - positive_embeddings = concatenate_embeddings( - positive_embeddings, embeddings[k2][idx : idx + 1, :] - ) - if len(remainder) > 0: - positive_embeddings = concatenate_embeddings( - positive_embeddings, compute_text_embeddings([remainder]) - ) - else: - positive_embeddings = concatenate_embeddings( - positive_embeddings, compute_text_embeddings([positive_query]) - ) - dot_product = embeddings[k] @ positive_embeddings.T - dot_product = dot_product - np.median(dot_product, axis=0) - dot_product = dot_product / np.max(dot_product, axis=0, keepdims=True) - dot_product = np.min(dot_product, axis=1) - - if len(splitted_query) > 1: - negative_queries = (" ".join(splitted_query[1:])).split(";") - negative_embeddings = compute_text_embeddings(negative_queries) - dot_product2 = embeddings[k] @ negative_embeddings.T - dot_product2 = dot_product2 - np.median(dot_product2, axis=0) - dot_product2 = dot_product2 / np.max(dot_product2, axis=0, keepdims=True) - dot_product -= np.max(np.maximum(dot_product2, 0), axis=1) - - results = np.argsort(dot_product)[-1 : -n_results - 1 : -1] - return [ - ( - df[k].iloc[i]["path"], - df[k].iloc[i]["tooltip"] + source[k], - i, - ) - for i in results - ] - - -description = """ - -# ImgLib -**Enter your query and hit enter** -""" - -howto = """ -- Click image 
to find similar images -- Use "**;**" to combine multiple queries) -- Use "**EXCLUDING**", to exclude a query -""" - - -def main(): - st.markdown( - """ - """, - unsafe_allow_html=True, - ) - st.sidebar.markdown(description) - with st.sidebar.expander("Advanced use"): - st.markdown(howto) - - - st.sidebar.markdown(f"Try these test prompts: Lord of the Rings, Interstellar, Back to the Future, Avengers, The Matrix, WALL·E, Castle , Dune, Blade Runner, Guardians of the Galaxy, Aliens, Her, Legend of the Ten Rings, Harry Potter, Logan, Dragon, Scissorhands, Captain, Deadpool, ThorArrivval, Wick, Peaks, Labyrinth, Terabithia, RoboCop, Wonder Woman, Meteor, NYC, Stork, Pink, Yellow, Orange, Blue, tulip, dog, Dragon, sunrise, kitten, Swimming, jellyfish, Beach, puppy, Coral") - st.sidebar.markdown(f"Unsplash has categories that match: backgrounds, photos, nature, iphone, etc") - st.sidebar.markdown(f"Unsplash images contain animals, apps, events, feelings, food, travel, nature, people, religion, sports, things, stock") - st.sidebar.markdown(f"Unsplash things include flag, tree, clock, money, tattoo, arrow, book, car, fireworks, ghost, health, kiss, dance, balloon, crown, eye, house, music, airplane, lighthouse, typewriter, toys") - st.sidebar.markdown(f"unsplash feelings include funny, heart, love, cool, congratulations, love, scary, cute, friendship, inspirational, hug, sad, cursed, beautiful, crazy, respect, transformation, peaceful, happy") - st.sidebar.markdown(f"unsplash people contain baby, life, women, family, girls, pregnancy, society, old people, musician, attractive, bohemian") - st.sidebar.markdown(f"imagenet queries include: photo of, photo of many, sculpture of, rendering of, graffiti of, tattoo of, embroidered, drawing of, plastic, black and white, painting, video game, doodle, origami, sketch, etc") - st.sidebar.markdown(f"by Evgeniy Hristoforu") - - - _, c, _ = st.columns((1, 3, 1)) - if "query" in st.session_state: - query = c.text_input("", value=st.session_state["query"]) - else: - - query = c.text_input("", value="lighthouse") - corpus = st.radio("", ["Unsplash"]) - #corpus = st.radio("", ["Unsplash", "Movies"]) - if len(query) > 0: - results = image_search(query, corpus) - clicked = clickable_images( - [result[0] for result in results], - titles=[result[1] for result in results], - div_style={ - "display": "flex", - "justify-content": "center", - "flex-wrap": "wrap", - }, - img_style={"margin": "2px", "height": "200px"}, - ) - if clicked >= 0: - change_query = False - if "last_clicked" not in st.session_state: - change_query = True - else: - if clicked != st.session_state["last_clicked"]: - change_query = True - if change_query: - st.session_state["query"] = f"[{corpus}:{results[clicked][2]}]" - st.experimental_rerun() - - -if __name__ == "__main__": - main() diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/wrapper.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/wrapper.py deleted file mode 100644 index b6ee7f2039801c9792dfe6e473843fb0a4bc4a5b..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/wrapper.py +++ /dev/null @@ -1,33 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -from .adapter import CacheControlAdapter -from .cache import DictCache - - -def CacheControl( - sess, - cache=None, - cache_etags=True, - serializer=None, 
- heuristic=None, - controller_class=None, - adapter_class=None, - cacheable_methods=None, -): - - cache = DictCache() if cache is None else cache - adapter_class = adapter_class or CacheControlAdapter - adapter = adapter_class( - cache, - cache_etags=cache_etags, - serializer=serializer, - heuristic=heuristic, - controller_class=controller_class, - cacheable_methods=cacheable_methods, - ) - sess.mount("http://", adapter) - sess.mount("https://", adapter) - - return sess diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pkg_resources/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pkg_resources/__init__.py deleted file mode 100644 index ad2794077b0a0299700fd0e8a0336bd1d6e24677..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pkg_resources/__init__.py +++ /dev/null @@ -1,3361 +0,0 @@ -""" -Package resource API --------------------- - -A resource is a logical file contained within a package, or a logical -subdirectory thereof. The package resource API expects resource names -to have their path parts separated with ``/``, *not* whatever the local -path separator is. Do not use os.path operations to manipulate resource -names being passed into the API. - -The package resource API is designed to work with normal filesystem packages, -.egg files, and unpacked .egg files. It can also work in a limited way with -.zip files and with custom PEP 302 loaders that support the ``get_data()`` -method. - -This module is deprecated. Users are directed to :mod:`importlib.resources`, -:mod:`importlib.metadata` and :pypi:`packaging` instead. -""" - -import sys -import os -import io -import time -import re -import types -import zipfile -import zipimport -import warnings -import stat -import functools -import pkgutil -import operator -import platform -import collections -import plistlib -import email.parser -import errno -import tempfile -import textwrap -import inspect -import ntpath -import posixpath -import importlib -from pkgutil import get_importer - -try: - import _imp -except ImportError: - # Python 3.2 compatibility - import imp as _imp - -try: - FileExistsError -except NameError: - FileExistsError = OSError - -# capture these to bypass sandboxing -from os import utime - -try: - from os import mkdir, rename, unlink - - WRITE_SUPPORT = True -except ImportError: - # no write support, probably under GAE - WRITE_SUPPORT = False - -from os import open as os_open -from os.path import isdir, split - -try: - import importlib.machinery as importlib_machinery - - # access attribute to force import under delayed import mechanisms. - importlib_machinery.__name__ -except ImportError: - importlib_machinery = None - -from pip._internal.utils._jaraco_text import ( - yield_lines, - drop_comment, - join_continuation, -) - -from pip._vendor import platformdirs -from pip._vendor import packaging - -__import__('pip._vendor.packaging.version') -__import__('pip._vendor.packaging.specifiers') -__import__('pip._vendor.packaging.requirements') -__import__('pip._vendor.packaging.markers') -__import__('pip._vendor.packaging.utils') - -if sys.version_info < (3, 5): - raise RuntimeError("Python 3.5 or later is required") - -# declare some globals that will be defined later to -# satisfy the linters. 
-require = None -working_set = None -add_activation_listener = None -resources_stream = None -cleanup_resources = None -resource_dir = None -resource_stream = None -set_extraction_path = None -resource_isdir = None -resource_string = None -iter_entry_points = None -resource_listdir = None -resource_filename = None -resource_exists = None -_distribution_finders = None -_namespace_handlers = None -_namespace_packages = None - - -warnings.warn( - "pkg_resources is deprecated as an API. " - "See https://setuptools.pypa.io/en/latest/pkg_resources.html", - DeprecationWarning, - stacklevel=2 -) - - -_PEP440_FALLBACK = re.compile(r"^v?(?P(?:[0-9]+!)?[0-9]+(?:\.[0-9]+)*)", re.I) - - -class PEP440Warning(RuntimeWarning): - """ - Used when there is an issue with a version or specifier not complying with - PEP 440. - """ - - -parse_version = packaging.version.Version - - -_state_vars = {} - - -def _declare_state(vartype, **kw): - globals().update(kw) - _state_vars.update(dict.fromkeys(kw, vartype)) - - -def __getstate__(): - state = {} - g = globals() - for k, v in _state_vars.items(): - state[k] = g['_sget_' + v](g[k]) - return state - - -def __setstate__(state): - g = globals() - for k, v in state.items(): - g['_sset_' + _state_vars[k]](k, g[k], v) - return state - - -def _sget_dict(val): - return val.copy() - - -def _sset_dict(key, ob, state): - ob.clear() - ob.update(state) - - -def _sget_object(val): - return val.__getstate__() - - -def _sset_object(key, ob, state): - ob.__setstate__(state) - - -_sget_none = _sset_none = lambda *args: None - - -def get_supported_platform(): - """Return this platform's maximum compatible version. - - distutils.util.get_platform() normally reports the minimum version - of macOS that would be required to *use* extensions produced by - distutils. But what we want when checking compatibility is to know the - version of macOS that we are *running*. To allow usage of packages that - explicitly require a newer version of macOS, we must also know the - current version of the OS. - - If this condition occurs for any other platform with a version in its - platform strings, this function should be extended accordingly. 
- """ - plat = get_build_platform() - m = macosVersionString.match(plat) - if m is not None and sys.platform == "darwin": - try: - plat = 'macosx-%s-%s' % ('.'.join(_macos_vers()[:2]), m.group(3)) - except ValueError: - # not macOS - pass - return plat - - -__all__ = [ - # Basic resource access and distribution/entry point discovery - 'require', - 'run_script', - 'get_provider', - 'get_distribution', - 'load_entry_point', - 'get_entry_map', - 'get_entry_info', - 'iter_entry_points', - 'resource_string', - 'resource_stream', - 'resource_filename', - 'resource_listdir', - 'resource_exists', - 'resource_isdir', - # Environmental control - 'declare_namespace', - 'working_set', - 'add_activation_listener', - 'find_distributions', - 'set_extraction_path', - 'cleanup_resources', - 'get_default_cache', - # Primary implementation classes - 'Environment', - 'WorkingSet', - 'ResourceManager', - 'Distribution', - 'Requirement', - 'EntryPoint', - # Exceptions - 'ResolutionError', - 'VersionConflict', - 'DistributionNotFound', - 'UnknownExtra', - 'ExtractionError', - # Warnings - 'PEP440Warning', - # Parsing functions and string utilities - 'parse_requirements', - 'parse_version', - 'safe_name', - 'safe_version', - 'get_platform', - 'compatible_platforms', - 'yield_lines', - 'split_sections', - 'safe_extra', - 'to_filename', - 'invalid_marker', - 'evaluate_marker', - # filesystem utilities - 'ensure_directory', - 'normalize_path', - # Distribution "precedence" constants - 'EGG_DIST', - 'BINARY_DIST', - 'SOURCE_DIST', - 'CHECKOUT_DIST', - 'DEVELOP_DIST', - # "Provider" interfaces, implementations, and registration/lookup APIs - 'IMetadataProvider', - 'IResourceProvider', - 'FileMetadata', - 'PathMetadata', - 'EggMetadata', - 'EmptyProvider', - 'empty_provider', - 'NullProvider', - 'EggProvider', - 'DefaultProvider', - 'ZipProvider', - 'register_finder', - 'register_namespace_handler', - 'register_loader_type', - 'fixup_namespace_packages', - 'get_importer', - # Warnings - 'PkgResourcesDeprecationWarning', - # Deprecated/backward compatibility only - 'run_main', - 'AvailableDistributions', -] - - -class ResolutionError(Exception): - """Abstract base for dependency resolution errors""" - - def __repr__(self): - return self.__class__.__name__ + repr(self.args) - - -class VersionConflict(ResolutionError): - """ - An already-installed version conflicts with the requested version. - - Should be initialized with the installed Distribution and the requested - Requirement. - """ - - _template = "{self.dist} is installed but {self.req} is required" - - @property - def dist(self): - return self.args[0] - - @property - def req(self): - return self.args[1] - - def report(self): - return self._template.format(**locals()) - - def with_context(self, required_by): - """ - If required_by is non-empty, return a version of self that is a - ContextualVersionConflict. - """ - if not required_by: - return self - args = self.args + (required_by,) - return ContextualVersionConflict(*args) - - -class ContextualVersionConflict(VersionConflict): - """ - A VersionConflict that accepts a third parameter, the set of the - requirements that required the installed Distribution. 
- """ - - _template = VersionConflict._template + ' by {self.required_by}' - - @property - def required_by(self): - return self.args[2] - - -class DistributionNotFound(ResolutionError): - """A requested distribution was not found""" - - _template = ( - "The '{self.req}' distribution was not found " - "and is required by {self.requirers_str}" - ) - - @property - def req(self): - return self.args[0] - - @property - def requirers(self): - return self.args[1] - - @property - def requirers_str(self): - if not self.requirers: - return 'the application' - return ', '.join(self.requirers) - - def report(self): - return self._template.format(**locals()) - - def __str__(self): - return self.report() - - -class UnknownExtra(ResolutionError): - """Distribution doesn't have an "extra feature" of the given name""" - - -_provider_factories = {} - -PY_MAJOR = '{}.{}'.format(*sys.version_info) -EGG_DIST = 3 -BINARY_DIST = 2 -SOURCE_DIST = 1 -CHECKOUT_DIST = 0 -DEVELOP_DIST = -1 - - -def register_loader_type(loader_type, provider_factory): - """Register `provider_factory` to make providers for `loader_type` - - `loader_type` is the type or class of a PEP 302 ``module.__loader__``, - and `provider_factory` is a function that, passed a *module* object, - returns an ``IResourceProvider`` for that module. - """ - _provider_factories[loader_type] = provider_factory - - -def get_provider(moduleOrReq): - """Return an IResourceProvider for the named module or requirement""" - if isinstance(moduleOrReq, Requirement): - return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0] - try: - module = sys.modules[moduleOrReq] - except KeyError: - __import__(moduleOrReq) - module = sys.modules[moduleOrReq] - loader = getattr(module, '__loader__', None) - return _find_adapter(_provider_factories, loader)(module) - - -def _macos_vers(_cache=[]): - if not _cache: - version = platform.mac_ver()[0] - # fallback for MacPorts - if version == '': - plist = '/System/Library/CoreServices/SystemVersion.plist' - if os.path.exists(plist): - if hasattr(plistlib, 'readPlist'): - plist_content = plistlib.readPlist(plist) - if 'ProductVersion' in plist_content: - version = plist_content['ProductVersion'] - - _cache.append(version.split('.')) - return _cache[0] - - -def _macos_arch(machine): - return {'PowerPC': 'ppc', 'Power_Macintosh': 'ppc'}.get(machine, machine) - - -def get_build_platform(): - """Return this platform's string for platform-specific distributions - - XXX Currently this is the same as ``distutils.util.get_platform()``, but it - needs some hacks for Linux and macOS. - """ - from sysconfig import get_platform - - plat = get_platform() - if sys.platform == "darwin" and not plat.startswith('macosx-'): - try: - version = _macos_vers() - machine = os.uname()[4].replace(" ", "_") - return "macosx-%d.%d-%s" % ( - int(version[0]), - int(version[1]), - _macos_arch(machine), - ) - except ValueError: - # if someone is running a non-Mac darwin system, this will fall - # through to the default implementation - pass - return plat - - -macosVersionString = re.compile(r"macosx-(\d+)\.(\d+)-(.*)") -darwinVersionString = re.compile(r"darwin-(\d+)\.(\d+)\.(\d+)-(.*)") -# XXX backward compat -get_platform = get_build_platform - - -def compatible_platforms(provided, required): - """Can code for the `provided` platform run on the `required` platform? - - Returns true if either platform is ``None``, or the platforms are equal. - - XXX Needs compatibility checks for Linux and other unixy OSes. 
- """ - if provided is None or required is None or provided == required: - # easy case - return True - - # macOS special cases - reqMac = macosVersionString.match(required) - if reqMac: - provMac = macosVersionString.match(provided) - - # is this a Mac package? - if not provMac: - # this is backwards compatibility for packages built before - # setuptools 0.6. All packages built after this point will - # use the new macOS designation. - provDarwin = darwinVersionString.match(provided) - if provDarwin: - dversion = int(provDarwin.group(1)) - macosversion = "%s.%s" % (reqMac.group(1), reqMac.group(2)) - if ( - dversion == 7 - and macosversion >= "10.3" - or dversion == 8 - and macosversion >= "10.4" - ): - return True - # egg isn't macOS or legacy darwin - return False - - # are they the same major version and machine type? - if provMac.group(1) != reqMac.group(1) or provMac.group(3) != reqMac.group(3): - return False - - # is the required OS major update >= the provided one? - if int(provMac.group(2)) > int(reqMac.group(2)): - return False - - return True - - # XXX Linux and other platforms' special cases should go here - return False - - -def run_script(dist_spec, script_name): - """Locate distribution `dist_spec` and run its `script_name` script""" - ns = sys._getframe(1).f_globals - name = ns['__name__'] - ns.clear() - ns['__name__'] = name - require(dist_spec)[0].run_script(script_name, ns) - - -# backward compatibility -run_main = run_script - - -def get_distribution(dist): - """Return a current distribution object for a Requirement or string""" - if isinstance(dist, str): - dist = Requirement.parse(dist) - if isinstance(dist, Requirement): - dist = get_provider(dist) - if not isinstance(dist, Distribution): - raise TypeError("Expected string, Requirement, or Distribution", dist) - return dist - - -def load_entry_point(dist, group, name): - """Return `name` entry point of `group` for `dist` or raise ImportError""" - return get_distribution(dist).load_entry_point(group, name) - - -def get_entry_map(dist, group=None): - """Return the entry point map for `group`, or the full entry map""" - return get_distribution(dist).get_entry_map(group) - - -def get_entry_info(dist, group, name): - """Return the EntryPoint object for `group`+`name`, or ``None``""" - return get_distribution(dist).get_entry_info(group, name) - - -class IMetadataProvider: - def has_metadata(name): - """Does the package's distribution contain the named metadata?""" - - def get_metadata(name): - """The named metadata resource as a string""" - - def get_metadata_lines(name): - """Yield named metadata resource as list of non-blank non-comment lines - - Leading and trailing whitespace is stripped from each line, and lines - with ``#`` as the first non-blank character are omitted.""" - - def metadata_isdir(name): - """Is the named metadata a directory? 
(like ``os.path.isdir()``)""" - - def metadata_listdir(name): - """List of metadata names in the directory (like ``os.listdir()``)""" - - def run_script(script_name, namespace): - """Execute the named script in the supplied namespace dictionary""" - - -class IResourceProvider(IMetadataProvider): - """An object that provides access to package resources""" - - def get_resource_filename(manager, resource_name): - """Return a true filesystem path for `resource_name` - - `manager` must be an ``IResourceManager``""" - - def get_resource_stream(manager, resource_name): - """Return a readable file-like object for `resource_name` - - `manager` must be an ``IResourceManager``""" - - def get_resource_string(manager, resource_name): - """Return a string containing the contents of `resource_name` - - `manager` must be an ``IResourceManager``""" - - def has_resource(resource_name): - """Does the package contain the named resource?""" - - def resource_isdir(resource_name): - """Is the named resource a directory? (like ``os.path.isdir()``)""" - - def resource_listdir(resource_name): - """List of resource names in the directory (like ``os.listdir()``)""" - - -class WorkingSet: - """A collection of active distributions on sys.path (or a similar list)""" - - def __init__(self, entries=None): - """Create working set from list of path entries (default=sys.path)""" - self.entries = [] - self.entry_keys = {} - self.by_key = {} - self.normalized_to_canonical_keys = {} - self.callbacks = [] - - if entries is None: - entries = sys.path - - for entry in entries: - self.add_entry(entry) - - @classmethod - def _build_master(cls): - """ - Prepare the master working set. - """ - ws = cls() - try: - from __main__ import __requires__ - except ImportError: - # The main program does not list any requirements - return ws - - # ensure the requirements are met - try: - ws.require(__requires__) - except VersionConflict: - return cls._build_from_requirements(__requires__) - - return ws - - @classmethod - def _build_from_requirements(cls, req_spec): - """ - Build a working set from a requirement spec. Rewrites sys.path. - """ - # try it without defaults already on sys.path - # by starting with an empty path - ws = cls([]) - reqs = parse_requirements(req_spec) - dists = ws.resolve(reqs, Environment()) - for dist in dists: - ws.add(dist) - - # add any missing entries from sys.path - for entry in sys.path: - if entry not in ws.entries: - ws.add_entry(entry) - - # then copy back to sys.path - sys.path[:] = ws.entries - return ws - - def add_entry(self, entry): - """Add a path item to ``.entries``, finding any distributions on it - - ``find_distributions(entry, True)`` is used to find distributions - corresponding to the path entry, and they are added. `entry` is - always appended to ``.entries``, even if it is already present. - (This is because ``sys.path`` can contain the same value more than - once, and the ``.entries`` of the ``sys.path`` WorkingSet should always - equal ``sys.path``.) - """ - self.entry_keys.setdefault(entry, []) - self.entries.append(entry) - for dist in find_distributions(entry, True): - self.add(dist, entry, False) - - def __contains__(self, dist): - """True if `dist` is the active distribution for its project""" - return self.by_key.get(dist.key) == dist - - def find(self, req): - """Find a distribution matching requirement `req` - - If there is an active distribution for the requested project, this - returns it as long as it meets the version requirement specified by - `req`. 
But, if there is an active distribution for the project and it - does *not* meet the `req` requirement, ``VersionConflict`` is raised. - If there is no active distribution for the requested project, ``None`` - is returned. - """ - dist = self.by_key.get(req.key) - - if dist is None: - canonical_key = self.normalized_to_canonical_keys.get(req.key) - - if canonical_key is not None: - req.key = canonical_key - dist = self.by_key.get(canonical_key) - - if dist is not None and dist not in req: - # XXX add more info - raise VersionConflict(dist, req) - return dist - - def iter_entry_points(self, group, name=None): - """Yield entry point objects from `group` matching `name` - - If `name` is None, yields all entry points in `group` from all - distributions in the working set, otherwise only ones matching - both `group` and `name` are yielded (in distribution order). - """ - return ( - entry - for dist in self - for entry in dist.get_entry_map(group).values() - if name is None or name == entry.name - ) - - def run_script(self, requires, script_name): - """Locate distribution for `requires` and run `script_name` script""" - ns = sys._getframe(1).f_globals - name = ns['__name__'] - ns.clear() - ns['__name__'] = name - self.require(requires)[0].run_script(script_name, ns) - - def __iter__(self): - """Yield distributions for non-duplicate projects in the working set - - The yield order is the order in which the items' path entries were - added to the working set. - """ - seen = {} - for item in self.entries: - if item not in self.entry_keys: - # workaround a cache issue - continue - - for key in self.entry_keys[item]: - if key not in seen: - seen[key] = 1 - yield self.by_key[key] - - def add(self, dist, entry=None, insert=True, replace=False): - """Add `dist` to working set, associated with `entry` - - If `entry` is unspecified, it defaults to the ``.location`` of `dist`. - On exit from this routine, `entry` is added to the end of the working - set's ``.entries`` (if it wasn't already present). - - `dist` is only added to the working set if it's for a project that - doesn't already have a distribution in the set, unless `replace=True`. - If it's added, any callbacks registered with the ``subscribe()`` method - will be called. - """ - if insert: - dist.insert_on(self.entries, entry, replace=replace) - - if entry is None: - entry = dist.location - keys = self.entry_keys.setdefault(entry, []) - keys2 = self.entry_keys.setdefault(dist.location, []) - if not replace and dist.key in self.by_key: - # ignore hidden distros - return - - self.by_key[dist.key] = dist - normalized_name = packaging.utils.canonicalize_name(dist.key) - self.normalized_to_canonical_keys[normalized_name] = dist.key - if dist.key not in keys: - keys.append(dist.key) - if dist.key not in keys2: - keys2.append(dist.key) - self._added_new(dist) - - def resolve( - self, - requirements, - env=None, - installer=None, - replace_conflicting=False, - extras=None, - ): - """List all distributions needed to (recursively) meet `requirements` - - `requirements` must be a sequence of ``Requirement`` objects. `env`, - if supplied, should be an ``Environment`` instance. If - not supplied, it defaults to all distributions available within any - entry or distribution in the working set. `installer`, if supplied, - will be invoked with each requirement that cannot be met by an - already-installed distribution; it should return a ``Distribution`` or - ``None``. 
- - Unless `replace_conflicting=True`, raises a VersionConflict exception - if - any requirements are found on the path that have the correct name but - the wrong version. Otherwise, if an `installer` is supplied it will be - invoked to obtain the correct version of the requirement and activate - it. - - `extras` is a list of the extras to be used with these requirements. - This is important because extra requirements may look like `my_req; - extra = "my_extra"`, which would otherwise be interpreted as a purely - optional requirement. Instead, we want to be able to assert that these - requirements are truly required. - """ - - # set up the stack - requirements = list(requirements)[::-1] - # set of processed requirements - processed = {} - # key -> dist - best = {} - to_activate = [] - - req_extras = _ReqExtras() - - # Mapping of requirement to set of distributions that required it; - # useful for reporting info about conflicts. - required_by = collections.defaultdict(set) - - while requirements: - # process dependencies breadth-first - req = requirements.pop(0) - if req in processed: - # Ignore cyclic or redundant dependencies - continue - - if not req_extras.markers_pass(req, extras): - continue - - dist = self._resolve_dist( - req, best, replace_conflicting, env, installer, required_by, to_activate - ) - - # push the new requirements onto the stack - new_requirements = dist.requires(req.extras)[::-1] - requirements.extend(new_requirements) - - # Register the new requirements needed by req - for new_requirement in new_requirements: - required_by[new_requirement].add(req.project_name) - req_extras[new_requirement] = req.extras - - processed[req] = True - - # return list of distros to activate - return to_activate - - def _resolve_dist( - self, req, best, replace_conflicting, env, installer, required_by, to_activate - ): - dist = best.get(req.key) - if dist is None: - # Find the best distribution and add it to the map - dist = self.by_key.get(req.key) - if dist is None or (dist not in req and replace_conflicting): - ws = self - if env is None: - if dist is None: - env = Environment(self.entries) - else: - # Use an empty environment and workingset to avoid - # any further conflicts with the conflicting - # distribution - env = Environment([]) - ws = WorkingSet([]) - dist = best[req.key] = env.best_match( - req, ws, installer, replace_conflicting=replace_conflicting - ) - if dist is None: - requirers = required_by.get(req, None) - raise DistributionNotFound(req, requirers) - to_activate.append(dist) - if dist not in req: - # Oops, the "best" so far conflicts with a dependency - dependent_req = required_by[req] - raise VersionConflict(dist, req).with_context(dependent_req) - return dist - - def find_plugins(self, plugin_env, full_env=None, installer=None, fallback=True): - """Find all activatable distributions in `plugin_env` - - Example usage:: - - distributions, errors = working_set.find_plugins( - Environment(plugin_dirlist) - ) - # add plugins+libs to sys.path - map(working_set.add, distributions) - # display errors - print('Could not load', errors) - - The `plugin_env` should be an ``Environment`` instance that contains - only distributions that are in the project's "plugin directory" or - directories. The `full_env`, if supplied, should be an ``Environment`` - contains all currently-available distributions. 
If `full_env` is not - supplied, one is created automatically from the ``WorkingSet`` this - method is called on, which will typically mean that every directory on - ``sys.path`` will be scanned for distributions. - - `installer` is a standard installer callback as used by the - ``resolve()`` method. The `fallback` flag indicates whether we should - attempt to resolve older versions of a plugin if the newest version - cannot be resolved. - - This method returns a 2-tuple: (`distributions`, `error_info`), where - `distributions` is a list of the distributions found in `plugin_env` - that were loadable, along with any other distributions that are needed - to resolve their dependencies. `error_info` is a dictionary mapping - unloadable plugin distributions to an exception instance describing the - error that occurred. Usually this will be a ``DistributionNotFound`` or - ``VersionConflict`` instance. - """ - - plugin_projects = list(plugin_env) - # scan project names in alphabetic order - plugin_projects.sort() - - error_info = {} - distributions = {} - - if full_env is None: - env = Environment(self.entries) - env += plugin_env - else: - env = full_env + plugin_env - - shadow_set = self.__class__([]) - # put all our entries in shadow_set - list(map(shadow_set.add, self)) - - for project_name in plugin_projects: - for dist in plugin_env[project_name]: - req = [dist.as_requirement()] - - try: - resolvees = shadow_set.resolve(req, env, installer) - - except ResolutionError as v: - # save error info - error_info[dist] = v - if fallback: - # try the next older version of project - continue - else: - # give up on this project, keep going - break - - else: - list(map(shadow_set.add, resolvees)) - distributions.update(dict.fromkeys(resolvees)) - - # success, no need to try any more versions of this project - break - - distributions = list(distributions) - distributions.sort() - - return distributions, error_info - - def require(self, *requirements): - """Ensure that distributions matching `requirements` are activated - - `requirements` must be a string or a (possibly-nested) sequence - thereof, specifying the distributions and versions required. The - return value is a sequence of the distributions that needed to be - activated to fulfill the requirements; all relevant distributions are - included, even if they were already activated in this working set. - """ - needed = self.resolve(parse_requirements(requirements)) - - for dist in needed: - self.add(dist) - - return needed - - def subscribe(self, callback, existing=True): - """Invoke `callback` for all distributions - - If `existing=True` (default), - call on all existing ones, as well. - """ - if callback in self.callbacks: - return - self.callbacks.append(callback) - if not existing: - return - for dist in self: - callback(dist) - - def _added_new(self, dist): - for callback in self.callbacks: - callback(dist) - - def __getstate__(self): - return ( - self.entries[:], - self.entry_keys.copy(), - self.by_key.copy(), - self.normalized_to_canonical_keys.copy(), - self.callbacks[:], - ) - - def __setstate__(self, e_k_b_n_c): - entries, keys, by_key, normalized_to_canonical_keys, callbacks = e_k_b_n_c - self.entries = entries[:] - self.entry_keys = keys.copy() - self.by_key = by_key.copy() - self.normalized_to_canonical_keys = normalized_to_canonical_keys.copy() - self.callbacks = callbacks[:] - - -class _ReqExtras(dict): - """ - Map each requirement to the extras that demanded it. 
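A small illustrative sketch of `require()` together with `subscribe()` as defined above; it assumes the distribution named here (`'packaging'`) is actually installed:

```python
# Sketch only: subscribe a callback, then require a distribution; the callback
# runs for already-active distributions and for anything activated afterwards.
import pkg_resources

pkg_resources.working_set.subscribe(lambda dist: print('active:', dist))
pkg_resources.working_set.require('packaging')  # placeholder project name
```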
- """ - - def markers_pass(self, req, extras=None): - """ - Evaluate markers for req against each extra that - demanded it. - - Return False if the req has a marker and fails - evaluation. Otherwise, return True. - """ - extra_evals = ( - req.marker.evaluate({'extra': extra}) - for extra in self.get(req, ()) + (extras or (None,)) - ) - return not req.marker or any(extra_evals) - - -class Environment: - """Searchable snapshot of distributions on a search path""" - - def __init__( - self, search_path=None, platform=get_supported_platform(), python=PY_MAJOR - ): - """Snapshot distributions available on a search path - - Any distributions found on `search_path` are added to the environment. - `search_path` should be a sequence of ``sys.path`` items. If not - supplied, ``sys.path`` is used. - - `platform` is an optional string specifying the name of the platform - that platform-specific distributions must be compatible with. If - unspecified, it defaults to the current platform. `python` is an - optional string naming the desired version of Python (e.g. ``'3.6'``); - it defaults to the current version. - - You may explicitly set `platform` (and/or `python`) to ``None`` if you - wish to map *all* distributions, not just those compatible with the - running platform or Python version. - """ - self._distmap = {} - self.platform = platform - self.python = python - self.scan(search_path) - - def can_add(self, dist): - """Is distribution `dist` acceptable for this environment? - - The distribution must match the platform and python version - requirements specified when this environment was created, or False - is returned. - """ - py_compat = ( - self.python is None - or dist.py_version is None - or dist.py_version == self.python - ) - return py_compat and compatible_platforms(dist.platform, self.platform) - - def remove(self, dist): - """Remove `dist` from the environment""" - self._distmap[dist.key].remove(dist) - - def scan(self, search_path=None): - """Scan `search_path` for distributions usable in this environment - - Any distributions found are added to the environment. - `search_path` should be a sequence of ``sys.path`` items. If not - supplied, ``sys.path`` is used. Only distributions conforming to - the platform/python version defined at initialization are added. - """ - if search_path is None: - search_path = sys.path - - for item in search_path: - for dist in find_distributions(item): - self.add(dist) - - def __getitem__(self, project_name): - """Return a newest-to-oldest list of distributions for `project_name` - - Uses case-insensitive `project_name` comparison, assuming all the - project's distributions use their project's name converted to all - lowercase as their key. - - """ - distribution_key = project_name.lower() - return self._distmap.get(distribution_key, []) - - def add(self, dist): - """Add `dist` if we ``can_add()`` it and it has not already been added""" - if self.can_add(dist) and dist.has_version(): - dists = self._distmap.setdefault(dist.key, []) - if dist not in dists: - dists.append(dist) - dists.sort(key=operator.attrgetter('hashcmp'), reverse=True) - - def best_match(self, req, working_set, installer=None, replace_conflicting=False): - """Find distribution best matching `req` and usable on `working_set` - - This calls the ``find(req)`` method of the `working_set` to see if a - suitable distribution is already active. (This may raise - ``VersionConflict`` if an unsuitable version of the project is already - active in the specified `working_set`.) 
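To make the `Environment` machinery above concrete, here is a hedged sketch that snapshots `sys.path` and asks for the best match of a requirement; `'setuptools'` is just an example project name:

```python
# Sketch only: snapshot distributions on sys.path and pick the best match
# for a requirement, consulting the active working set first.
import sys
import pkg_resources

env = pkg_resources.Environment(sys.path)
req = pkg_resources.Requirement.parse('setuptools')   # example requirement
print(env[req.key])                                   # newest-to-oldest candidates
print(env.best_match(req, pkg_resources.working_set))
```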
If a suitable distribution - isn't active, this method returns the newest distribution in the - environment that meets the ``Requirement`` in `req`. If no suitable - distribution is found, and `installer` is supplied, then the result of - calling the environment's ``obtain(req, installer)`` method will be - returned. - """ - try: - dist = working_set.find(req) - except VersionConflict: - if not replace_conflicting: - raise - dist = None - if dist is not None: - return dist - for dist in self[req.key]: - if dist in req: - return dist - # try to download/install - return self.obtain(req, installer) - - def obtain(self, requirement, installer=None): - """Obtain a distribution matching `requirement` (e.g. via download) - - Obtain a distro that matches requirement (e.g. via download). In the - base ``Environment`` class, this routine just returns - ``installer(requirement)``, unless `installer` is None, in which case - None is returned instead. This method is a hook that allows subclasses - to attempt other ways of obtaining a distribution before falling back - to the `installer` argument.""" - if installer is not None: - return installer(requirement) - - def __iter__(self): - """Yield the unique project names of the available distributions""" - for key in self._distmap.keys(): - if self[key]: - yield key - - def __iadd__(self, other): - """In-place addition of a distribution or environment""" - if isinstance(other, Distribution): - self.add(other) - elif isinstance(other, Environment): - for project in other: - for dist in other[project]: - self.add(dist) - else: - raise TypeError("Can't add %r to environment" % (other,)) - return self - - def __add__(self, other): - """Add an environment or distribution to an environment""" - new = self.__class__([], platform=None, python=None) - for env in self, other: - new += env - return new - - -# XXX backward compatibility -AvailableDistributions = Environment - - -class ExtractionError(RuntimeError): - """An error occurred extracting a resource - - The following attributes are available from instances of this exception: - - manager - The resource manager that raised this exception - - cache_path - The base directory for resource extraction - - original_error - The exception instance that caused extraction to fail - """ - - -class ResourceManager: - """Manage resource extraction and packages""" - - extraction_path = None - - def __init__(self): - self.cached_files = {} - - def resource_exists(self, package_or_requirement, resource_name): - """Does the named resource exist?""" - return get_provider(package_or_requirement).has_resource(resource_name) - - def resource_isdir(self, package_or_requirement, resource_name): - """Is the named resource an existing directory?""" - return get_provider(package_or_requirement).resource_isdir(resource_name) - - def resource_filename(self, package_or_requirement, resource_name): - """Return a true filesystem path for specified resource""" - return get_provider(package_or_requirement).get_resource_filename( - self, resource_name - ) - - def resource_stream(self, package_or_requirement, resource_name): - """Return a readable file-like object for specified resource""" - return get_provider(package_or_requirement).get_resource_stream( - self, resource_name - ) - - def resource_string(self, package_or_requirement, resource_name): - """Return specified resource as a string""" - return get_provider(package_or_requirement).get_resource_string( - self, resource_name - ) - - def resource_listdir(self, package_or_requirement, 
resource_name): - """List the contents of the named resource directory""" - return get_provider(package_or_requirement).resource_listdir(resource_name) - - def extraction_error(self): - """Give an error message for problems extracting file(s)""" - - old_exc = sys.exc_info()[1] - cache_path = self.extraction_path or get_default_cache() - - tmpl = textwrap.dedent( - """ - Can't extract file(s) to egg cache - - The following error occurred while trying to extract file(s) - to the Python egg cache: - - {old_exc} - - The Python egg cache directory is currently set to: - - {cache_path} - - Perhaps your account does not have write access to this directory? - You can change the cache directory by setting the PYTHON_EGG_CACHE - environment variable to point to an accessible directory. - """ - ).lstrip() - err = ExtractionError(tmpl.format(**locals())) - err.manager = self - err.cache_path = cache_path - err.original_error = old_exc - raise err - - def get_cache_path(self, archive_name, names=()): - """Return absolute location in cache for `archive_name` and `names` - - The parent directory of the resulting path will be created if it does - not already exist. `archive_name` should be the base filename of the - enclosing egg (which may not be the name of the enclosing zipfile!), - including its ".egg" extension. `names`, if provided, should be a - sequence of path name parts "under" the egg's extraction location. - - This method should only be called by resource providers that need to - obtain an extraction location, and only for names they intend to - extract, as it tracks the generated names for possible cleanup later. - """ - extract_path = self.extraction_path or get_default_cache() - target_path = os.path.join(extract_path, archive_name + '-tmp', *names) - try: - _bypass_ensure_directory(target_path) - except Exception: - self.extraction_error() - - self._warn_unsafe_extraction_path(extract_path) - - self.cached_files[target_path] = 1 - return target_path - - @staticmethod - def _warn_unsafe_extraction_path(path): - """ - If the default extraction path is overridden and set to an insecure - location, such as /tmp, it opens up an opportunity for an attacker to - replace an extracted file with an unauthorized payload. Warn the user - if a known insecure location is used. - - See Distribute #375 for more details. - """ - if os.name == 'nt' and not path.startswith(os.environ['windir']): - # On Windows, permissions are generally restrictive by default - # and temp directories are not writable by other users, so - # bypass the warning. - return - mode = os.stat(path).st_mode - if mode & stat.S_IWOTH or mode & stat.S_IWGRP: - msg = ( - "Extraction path is writable by group/others " - "and vulnerable to attack when " - "used with get_resource_filename ({path}). " - "Consider a more secure " - "location (set with .set_extraction_path or the " - "PYTHON_EGG_CACHE environment variable)." - ).format(**locals()) - warnings.warn(msg, UserWarning) - - def postprocess(self, tempname, filename): - """Perform any platform-specific postprocessing of `tempname` - - This is where Mac header rewrites should be done; other platforms don't - have anything special they should do. - - Resource providers should call this method ONLY after successfully - extracting a compressed resource. They must NOT call it on resources - that are already in the filesystem. - - `tempname` is the current (temporary) name of the file, and `filename` - is the name it will be renamed to by the caller after this routine - returns. 
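The `ResourceManager` methods above are conventionally reached through the module-level wrappers bound to a shared manager instance; a sketch, with `'mypackage'` and `'data/config.json'` as placeholder names:

```python
# Sketch only: the usual module-level resource access pattern.
# 'mypackage' and 'data/config.json' are placeholder names.
import pkg_resources

if pkg_resources.resource_exists('mypackage', 'data/config.json'):
    raw_bytes = pkg_resources.resource_string('mypackage', 'data/config.json')
    real_path = pkg_resources.resource_filename('mypackage', 'data/config.json')
```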
- """ - - if os.name == 'posix': - # Make the resource executable - mode = ((os.stat(tempname).st_mode) | 0o555) & 0o7777 - os.chmod(tempname, mode) - - def set_extraction_path(self, path): - """Set the base path where resources will be extracted to, if needed. - - If you do not call this routine before any extractions take place, the - path defaults to the return value of ``get_default_cache()``. (Which - is based on the ``PYTHON_EGG_CACHE`` environment variable, with various - platform-specific fallbacks. See that routine's documentation for more - details.) - - Resources are extracted to subdirectories of this path based upon - information given by the ``IResourceProvider``. You may set this to a - temporary directory, but then you must call ``cleanup_resources()`` to - delete the extracted files when done. There is no guarantee that - ``cleanup_resources()`` will be able to remove all extracted files. - - (Note: you may not change the extraction path for a given resource - manager once resources have been extracted, unless you first call - ``cleanup_resources()``.) - """ - if self.cached_files: - raise ValueError("Can't change extraction path, files already extracted") - - self.extraction_path = path - - def cleanup_resources(self, force=False): - """ - Delete all extracted resource files and directories, returning a list - of the file and directory names that could not be successfully removed. - This function does not have any concurrency protection, so it should - generally only be called when the extraction path is a temporary - directory exclusive to a single process. This method is not - automatically called; you must call it explicitly or register it as an - ``atexit`` function if you wish to ensure cleanup of a temporary - directory used for extractions. - """ - # XXX - - -def get_default_cache(): - """ - Return the ``PYTHON_EGG_CACHE`` environment variable - or a platform-relevant user cache dir for an app - named "Python-Eggs". - """ - return os.environ.get('PYTHON_EGG_CACHE') or platformdirs.user_cache_dir( - appname='Python-Eggs' - ) - - -def safe_name(name): - """Convert an arbitrary string to a standard distribution name - - Any runs of non-alphanumeric/. characters are replaced with a single '-'. 
- """ - return re.sub('[^A-Za-z0-9.]+', '-', name) - - -def safe_version(version): - """ - Convert an arbitrary string to a standard version string - """ - try: - # normalize the version - return str(packaging.version.Version(version)) - except packaging.version.InvalidVersion: - version = version.replace(' ', '.') - return re.sub('[^A-Za-z0-9.]+', '-', version) - - -def _forgiving_version(version): - """Fallback when ``safe_version`` is not safe enough - >>> parse_version(_forgiving_version('0.23ubuntu1')) - - >>> parse_version(_forgiving_version('0.23-')) - - >>> parse_version(_forgiving_version('0.-_')) - - >>> parse_version(_forgiving_version('42.+?1')) - - >>> parse_version(_forgiving_version('hello world')) - - """ - version = version.replace(' ', '.') - match = _PEP440_FALLBACK.search(version) - if match: - safe = match["safe"] - rest = version[len(safe):] - else: - safe = "0" - rest = version - local = f"sanitized.{_safe_segment(rest)}".strip(".") - return f"{safe}.dev0+{local}" - - -def _safe_segment(segment): - """Convert an arbitrary string into a safe segment""" - segment = re.sub('[^A-Za-z0-9.]+', '-', segment) - segment = re.sub('-[^A-Za-z0-9]+', '-', segment) - return re.sub(r'\.[^A-Za-z0-9]+', '.', segment).strip(".-") - - -def safe_extra(extra): - """Convert an arbitrary string to a standard 'extra' name - - Any runs of non-alphanumeric characters are replaced with a single '_', - and the result is always lowercased. - """ - return re.sub('[^A-Za-z0-9.-]+', '_', extra).lower() - - -def to_filename(name): - """Convert a project or version name to its filename-escaped form - - Any '-' characters are currently replaced with '_'. - """ - return name.replace('-', '_') - - -def invalid_marker(text): - """ - Validate text as a PEP 508 environment marker; return an exception - if invalid or False otherwise. - """ - try: - evaluate_marker(text) - except SyntaxError as e: - e.filename = None - e.lineno = None - return e - return False - - -def evaluate_marker(text, extra=None): - """ - Evaluate a PEP 508 environment marker. - Return a boolean indicating the marker result in this environment. - Raise SyntaxError if marker is invalid. - - This implementation uses the 'pyparsing' module. 
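A few literal examples of the sanitizers and marker helper defined above; the outputs shown in comments are what the current implementations produce:

```python
# Sketch only: exercising the helpers above on literal inputs.
import pkg_resources

pkg_resources.safe_name('my project++')                  # -> 'my-project-'
pkg_resources.safe_version('1.0beta1')                   # -> '1.0b1'
pkg_resources.safe_extra('My Extra!')                    # -> 'my_extra_'
pkg_resources.to_filename('my-project')                  # -> 'my_project'
pkg_resources.evaluate_marker('python_version >= "3"')   # -> True on Python 3
```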
- """ - try: - marker = packaging.markers.Marker(text) - return marker.evaluate() - except packaging.markers.InvalidMarker as e: - raise SyntaxError(e) from e - - -class NullProvider: - """Try to implement resources and metadata for arbitrary PEP 302 loaders""" - - egg_name = None - egg_info = None - loader = None - - def __init__(self, module): - self.loader = getattr(module, '__loader__', None) - self.module_path = os.path.dirname(getattr(module, '__file__', '')) - - def get_resource_filename(self, manager, resource_name): - return self._fn(self.module_path, resource_name) - - def get_resource_stream(self, manager, resource_name): - return io.BytesIO(self.get_resource_string(manager, resource_name)) - - def get_resource_string(self, manager, resource_name): - return self._get(self._fn(self.module_path, resource_name)) - - def has_resource(self, resource_name): - return self._has(self._fn(self.module_path, resource_name)) - - def _get_metadata_path(self, name): - return self._fn(self.egg_info, name) - - def has_metadata(self, name): - if not self.egg_info: - return self.egg_info - - path = self._get_metadata_path(name) - return self._has(path) - - def get_metadata(self, name): - if not self.egg_info: - return "" - path = self._get_metadata_path(name) - value = self._get(path) - try: - return value.decode('utf-8') - except UnicodeDecodeError as exc: - # Include the path in the error message to simplify - # troubleshooting, and without changing the exception type. - exc.reason += ' in {} file at path: {}'.format(name, path) - raise - - def get_metadata_lines(self, name): - return yield_lines(self.get_metadata(name)) - - def resource_isdir(self, resource_name): - return self._isdir(self._fn(self.module_path, resource_name)) - - def metadata_isdir(self, name): - return self.egg_info and self._isdir(self._fn(self.egg_info, name)) - - def resource_listdir(self, resource_name): - return self._listdir(self._fn(self.module_path, resource_name)) - - def metadata_listdir(self, name): - if self.egg_info: - return self._listdir(self._fn(self.egg_info, name)) - return [] - - def run_script(self, script_name, namespace): - script = 'scripts/' + script_name - if not self.has_metadata(script): - raise ResolutionError( - "Script {script!r} not found in metadata at {self.egg_info!r}".format( - **locals() - ), - ) - script_text = self.get_metadata(script).replace('\r\n', '\n') - script_text = script_text.replace('\r', '\n') - script_filename = self._fn(self.egg_info, script) - namespace['__file__'] = script_filename - if os.path.exists(script_filename): - with open(script_filename) as fid: - source = fid.read() - code = compile(source, script_filename, 'exec') - exec(code, namespace, namespace) - else: - from linecache import cache - - cache[script_filename] = ( - len(script_text), - 0, - script_text.split('\n'), - script_filename, - ) - script_code = compile(script_text, script_filename, 'exec') - exec(script_code, namespace, namespace) - - def _has(self, path): - raise NotImplementedError( - "Can't perform this operation for unregistered loader type" - ) - - def _isdir(self, path): - raise NotImplementedError( - "Can't perform this operation for unregistered loader type" - ) - - def _listdir(self, path): - raise NotImplementedError( - "Can't perform this operation for unregistered loader type" - ) - - def _fn(self, base, resource_name): - self._validate_resource_path(resource_name) - if resource_name: - return os.path.join(base, *resource_name.split('/')) - return base - - @staticmethod - def 
_validate_resource_path(path): - """ - Validate the resource paths according to the docs. - https://setuptools.pypa.io/en/latest/pkg_resources.html#basic-resource-access - - >>> warned = getfixture('recwarn') - >>> warnings.simplefilter('always') - >>> vrp = NullProvider._validate_resource_path - >>> vrp('foo/bar.txt') - >>> bool(warned) - False - >>> vrp('../foo/bar.txt') - >>> bool(warned) - True - >>> warned.clear() - >>> vrp('/foo/bar.txt') - >>> bool(warned) - True - >>> vrp('foo/../../bar.txt') - >>> bool(warned) - True - >>> warned.clear() - >>> vrp('foo/f../bar.txt') - >>> bool(warned) - False - - Windows path separators are straight-up disallowed. - >>> vrp(r'\\foo/bar.txt') - Traceback (most recent call last): - ... - ValueError: Use of .. or absolute path in a resource path \ -is not allowed. - - >>> vrp(r'C:\\foo/bar.txt') - Traceback (most recent call last): - ... - ValueError: Use of .. or absolute path in a resource path \ -is not allowed. - - Blank values are allowed - - >>> vrp('') - >>> bool(warned) - False - - Non-string values are not. - - >>> vrp(None) - Traceback (most recent call last): - ... - AttributeError: ... - """ - invalid = ( - os.path.pardir in path.split(posixpath.sep) - or posixpath.isabs(path) - or ntpath.isabs(path) - ) - if not invalid: - return - - msg = "Use of .. or absolute path in a resource path is not allowed." - - # Aggressively disallow Windows absolute paths - if ntpath.isabs(path) and not posixpath.isabs(path): - raise ValueError(msg) - - # for compatibility, warn; in future - # raise ValueError(msg) - issue_warning( - msg[:-1] + " and will raise exceptions in a future release.", - DeprecationWarning, - ) - - def _get(self, path): - if hasattr(self.loader, 'get_data'): - return self.loader.get_data(path) - raise NotImplementedError( - "Can't perform this operation for loaders without 'get_data()'" - ) - - -register_loader_type(object, NullProvider) - - -def _parents(path): - """ - yield all parents of path including path - """ - last = None - while path != last: - yield path - last = path - path, _ = os.path.split(path) - - -class EggProvider(NullProvider): - """Provider based on a virtual filesystem""" - - def __init__(self, module): - super().__init__(module) - self._setup_prefix() - - def _setup_prefix(self): - # Assume that metadata may be nested inside a "basket" - # of multiple eggs and use module_path instead of .archive. 
- eggs = filter(_is_egg_path, _parents(self.module_path)) - egg = next(eggs, None) - egg and self._set_egg(egg) - - def _set_egg(self, path): - self.egg_name = os.path.basename(path) - self.egg_info = os.path.join(path, 'EGG-INFO') - self.egg_root = path - - -class DefaultProvider(EggProvider): - """Provides access to package resources in the filesystem""" - - def _has(self, path): - return os.path.exists(path) - - def _isdir(self, path): - return os.path.isdir(path) - - def _listdir(self, path): - return os.listdir(path) - - def get_resource_stream(self, manager, resource_name): - return open(self._fn(self.module_path, resource_name), 'rb') - - def _get(self, path): - with open(path, 'rb') as stream: - return stream.read() - - @classmethod - def _register(cls): - loader_names = ( - 'SourceFileLoader', - 'SourcelessFileLoader', - ) - for name in loader_names: - loader_cls = getattr(importlib_machinery, name, type(None)) - register_loader_type(loader_cls, cls) - - -DefaultProvider._register() - - -class EmptyProvider(NullProvider): - """Provider that returns nothing for all requests""" - - module_path = None - - _isdir = _has = lambda self, path: False - - def _get(self, path): - return '' - - def _listdir(self, path): - return [] - - def __init__(self): - pass - - -empty_provider = EmptyProvider() - - -class ZipManifests(dict): - """ - zip manifest builder - """ - - @classmethod - def build(cls, path): - """ - Build a dictionary similar to the zipimport directory - caches, except instead of tuples, store ZipInfo objects. - - Use a platform-specific path separator (os.sep) for the path keys - for compatibility with pypy on Windows. - """ - with zipfile.ZipFile(path) as zfile: - items = ( - ( - name.replace('/', os.sep), - zfile.getinfo(name), - ) - for name in zfile.namelist() - ) - return dict(items) - - load = build - - -class MemoizedZipManifests(ZipManifests): - """ - Memoized zipfile manifests. - """ - - manifest_mod = collections.namedtuple('manifest_mod', 'manifest mtime') - - def load(self, path): - """ - Load a manifest at path or return a suitable manifest already loaded. - """ - path = os.path.normpath(path) - mtime = os.stat(path).st_mtime - - if path not in self or self[path].mtime != mtime: - manifest = self.build(path) - self[path] = self.manifest_mod(manifest, mtime) - - return self[path].manifest - - -class ZipProvider(EggProvider): - """Resource support for zips and eggs""" - - eagers = None - _zip_manifests = MemoizedZipManifests() - - def __init__(self, module): - super().__init__(module) - self.zip_pre = self.loader.archive + os.sep - - def _zipinfo_name(self, fspath): - # Convert a virtual filename (full path to file) into a zipfile subpath - # usable with the zipimport directory cache for our target archive - fspath = fspath.rstrip(os.sep) - if fspath == self.loader.archive: - return '' - if fspath.startswith(self.zip_pre): - return fspath[len(self.zip_pre) :] - raise AssertionError("%s is not a subpath of %s" % (fspath, self.zip_pre)) - - def _parts(self, zip_path): - # Convert a zipfile subpath into an egg-relative path part list. 
- # pseudo-fs path - fspath = self.zip_pre + zip_path - if fspath.startswith(self.egg_root + os.sep): - return fspath[len(self.egg_root) + 1 :].split(os.sep) - raise AssertionError("%s is not a subpath of %s" % (fspath, self.egg_root)) - - @property - def zipinfo(self): - return self._zip_manifests.load(self.loader.archive) - - def get_resource_filename(self, manager, resource_name): - if not self.egg_name: - raise NotImplementedError( - "resource_filename() only supported for .egg, not .zip" - ) - # no need to lock for extraction, since we use temp names - zip_path = self._resource_to_zip(resource_name) - eagers = self._get_eager_resources() - if '/'.join(self._parts(zip_path)) in eagers: - for name in eagers: - self._extract_resource(manager, self._eager_to_zip(name)) - return self._extract_resource(manager, zip_path) - - @staticmethod - def _get_date_and_size(zip_stat): - size = zip_stat.file_size - # ymdhms+wday, yday, dst - date_time = zip_stat.date_time + (0, 0, -1) - # 1980 offset already done - timestamp = time.mktime(date_time) - return timestamp, size - - # FIXME: 'ZipProvider._extract_resource' is too complex (12) - def _extract_resource(self, manager, zip_path): # noqa: C901 - if zip_path in self._index(): - for name in self._index()[zip_path]: - last = self._extract_resource(manager, os.path.join(zip_path, name)) - # return the extracted directory name - return os.path.dirname(last) - - timestamp, size = self._get_date_and_size(self.zipinfo[zip_path]) - - if not WRITE_SUPPORT: - raise IOError( - '"os.rename" and "os.unlink" are not supported ' 'on this platform' - ) - try: - real_path = manager.get_cache_path(self.egg_name, self._parts(zip_path)) - - if self._is_current(real_path, zip_path): - return real_path - - outf, tmpnam = _mkstemp( - ".$extract", - dir=os.path.dirname(real_path), - ) - os.write(outf, self.loader.get_data(zip_path)) - os.close(outf) - utime(tmpnam, (timestamp, timestamp)) - manager.postprocess(tmpnam, real_path) - - try: - rename(tmpnam, real_path) - - except os.error: - if os.path.isfile(real_path): - if self._is_current(real_path, zip_path): - # the file became current since it was checked above, - # so proceed. 
- return real_path - # Windows, del old file and retry - elif os.name == 'nt': - unlink(real_path) - rename(tmpnam, real_path) - return real_path - raise - - except os.error: - # report a user-friendly error - manager.extraction_error() - - return real_path - - def _is_current(self, file_path, zip_path): - """ - Return True if the file_path is current for this zip_path - """ - timestamp, size = self._get_date_and_size(self.zipinfo[zip_path]) - if not os.path.isfile(file_path): - return False - stat = os.stat(file_path) - if stat.st_size != size or stat.st_mtime != timestamp: - return False - # check that the contents match - zip_contents = self.loader.get_data(zip_path) - with open(file_path, 'rb') as f: - file_contents = f.read() - return zip_contents == file_contents - - def _get_eager_resources(self): - if self.eagers is None: - eagers = [] - for name in ('native_libs.txt', 'eager_resources.txt'): - if self.has_metadata(name): - eagers.extend(self.get_metadata_lines(name)) - self.eagers = eagers - return self.eagers - - def _index(self): - try: - return self._dirindex - except AttributeError: - ind = {} - for path in self.zipinfo: - parts = path.split(os.sep) - while parts: - parent = os.sep.join(parts[:-1]) - if parent in ind: - ind[parent].append(parts[-1]) - break - else: - ind[parent] = [parts.pop()] - self._dirindex = ind - return ind - - def _has(self, fspath): - zip_path = self._zipinfo_name(fspath) - return zip_path in self.zipinfo or zip_path in self._index() - - def _isdir(self, fspath): - return self._zipinfo_name(fspath) in self._index() - - def _listdir(self, fspath): - return list(self._index().get(self._zipinfo_name(fspath), ())) - - def _eager_to_zip(self, resource_name): - return self._zipinfo_name(self._fn(self.egg_root, resource_name)) - - def _resource_to_zip(self, resource_name): - return self._zipinfo_name(self._fn(self.module_path, resource_name)) - - -register_loader_type(zipimport.zipimporter, ZipProvider) - - -class FileMetadata(EmptyProvider): - """Metadata handler for standalone PKG-INFO files - - Usage:: - - metadata = FileMetadata("/path/to/PKG-INFO") - - This provider rejects all data and metadata requests except for PKG-INFO, - which is treated as existing, and will be the contents of the file at - the provided location. 
- """ - - def __init__(self, path): - self.path = path - - def _get_metadata_path(self, name): - return self.path - - def has_metadata(self, name): - return name == 'PKG-INFO' and os.path.isfile(self.path) - - def get_metadata(self, name): - if name != 'PKG-INFO': - raise KeyError("No metadata except PKG-INFO is available") - - with io.open(self.path, encoding='utf-8', errors="replace") as f: - metadata = f.read() - self._warn_on_replacement(metadata) - return metadata - - def _warn_on_replacement(self, metadata): - replacement_char = '�' - if replacement_char in metadata: - tmpl = "{self.path} could not be properly decoded in UTF-8" - msg = tmpl.format(**locals()) - warnings.warn(msg) - - def get_metadata_lines(self, name): - return yield_lines(self.get_metadata(name)) - - -class PathMetadata(DefaultProvider): - """Metadata provider for egg directories - - Usage:: - - # Development eggs: - - egg_info = "/path/to/PackageName.egg-info" - base_dir = os.path.dirname(egg_info) - metadata = PathMetadata(base_dir, egg_info) - dist_name = os.path.splitext(os.path.basename(egg_info))[0] - dist = Distribution(basedir, project_name=dist_name, metadata=metadata) - - # Unpacked egg directories: - - egg_path = "/path/to/PackageName-ver-pyver-etc.egg" - metadata = PathMetadata(egg_path, os.path.join(egg_path,'EGG-INFO')) - dist = Distribution.from_filename(egg_path, metadata=metadata) - """ - - def __init__(self, path, egg_info): - self.module_path = path - self.egg_info = egg_info - - -class EggMetadata(ZipProvider): - """Metadata provider for .egg files""" - - def __init__(self, importer): - """Create a metadata provider from a zipimporter""" - - self.zip_pre = importer.archive + os.sep - self.loader = importer - if importer.prefix: - self.module_path = os.path.join(importer.archive, importer.prefix) - else: - self.module_path = importer.archive - self._setup_prefix() - - -_declare_state('dict', _distribution_finders={}) - - -def register_finder(importer_type, distribution_finder): - """Register `distribution_finder` to find distributions in sys.path items - - `importer_type` is the type or class of a PEP 302 "Importer" (sys.path item - handler), and `distribution_finder` is a callable that, passed a path - item and the importer instance, yields ``Distribution`` instances found on - that path item. See ``pkg_resources.find_on_path`` for an example.""" - _distribution_finders[importer_type] = distribution_finder - - -def find_distributions(path_item, only=False): - """Yield distributions accessible via `path_item`""" - importer = get_importer(path_item) - finder = _find_adapter(_distribution_finders, importer) - return finder(importer, path_item, only) - - -def find_eggs_in_zip(importer, path_item, only=False): - """ - Find eggs in zip files; possibly multiple nested eggs. 
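A hedged sketch of the finder entry point defined just above; the directory is a placeholder, and each registered finder decides what it yields for a given path item:

```python
# Sketch only: list distributions visible on one sys.path entry.
import pkg_resources

path_entry = '/usr/lib/python3/dist-packages'   # placeholder path
for dist in pkg_resources.find_distributions(path_entry):
    print(dist, '->', dist.location)
```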
- """ - if importer.archive.endswith('.whl'): - # wheels are not supported with this finder - # they don't have PKG-INFO metadata, and won't ever contain eggs - return - metadata = EggMetadata(importer) - if metadata.has_metadata('PKG-INFO'): - yield Distribution.from_filename(path_item, metadata=metadata) - if only: - # don't yield nested distros - return - for subitem in metadata.resource_listdir(''): - if _is_egg_path(subitem): - subpath = os.path.join(path_item, subitem) - dists = find_eggs_in_zip(zipimport.zipimporter(subpath), subpath) - for dist in dists: - yield dist - elif subitem.lower().endswith(('.dist-info', '.egg-info')): - subpath = os.path.join(path_item, subitem) - submeta = EggMetadata(zipimport.zipimporter(subpath)) - submeta.egg_info = subpath - yield Distribution.from_location(path_item, subitem, submeta) - - -register_finder(zipimport.zipimporter, find_eggs_in_zip) - - -def find_nothing(importer, path_item, only=False): - return () - - -register_finder(object, find_nothing) - - -def find_on_path(importer, path_item, only=False): - """Yield distributions accessible on a sys.path directory""" - path_item = _normalize_cached(path_item) - - if _is_unpacked_egg(path_item): - yield Distribution.from_filename( - path_item, - metadata=PathMetadata(path_item, os.path.join(path_item, 'EGG-INFO')), - ) - return - - entries = (os.path.join(path_item, child) for child in safe_listdir(path_item)) - - # scan for .egg and .egg-info in directory - for entry in sorted(entries): - fullpath = os.path.join(path_item, entry) - factory = dist_factory(path_item, entry, only) - for dist in factory(fullpath): - yield dist - - -def dist_factory(path_item, entry, only): - """Return a dist_factory for the given entry.""" - lower = entry.lower() - is_egg_info = lower.endswith('.egg-info') - is_dist_info = lower.endswith('.dist-info') and os.path.isdir( - os.path.join(path_item, entry) - ) - is_meta = is_egg_info or is_dist_info - return ( - distributions_from_metadata - if is_meta - else find_distributions - if not only and _is_egg_path(entry) - else resolve_egg_link - if not only and lower.endswith('.egg-link') - else NoDists() - ) - - -class NoDists: - """ - >>> bool(NoDists()) - False - - >>> list(NoDists()('anything')) - [] - """ - - def __bool__(self): - return False - - def __call__(self, fullpath): - return iter(()) - - -def safe_listdir(path): - """ - Attempt to list contents of path, but suppress some exceptions. - """ - try: - return os.listdir(path) - except (PermissionError, NotADirectoryError): - pass - except OSError as e: - # Ignore the directory if does not exist, not a directory or - # permission denied - if e.errno not in (errno.ENOTDIR, errno.EACCES, errno.ENOENT): - raise - return () - - -def distributions_from_metadata(path): - root = os.path.dirname(path) - if os.path.isdir(path): - if len(os.listdir(path)) == 0: - # empty metadata dir; skip - return - metadata = PathMetadata(root, path) - else: - metadata = FileMetadata(path) - entry = os.path.basename(path) - yield Distribution.from_location( - root, - entry, - metadata, - precedence=DEVELOP_DIST, - ) - - -def non_empty_lines(path): - """ - Yield non-empty lines from file at path - """ - with open(path) as f: - for line in f: - line = line.strip() - if line: - yield line - - -def resolve_egg_link(path): - """ - Given a path to an .egg-link, resolve distributions - present in the referenced path. 
- """ - referenced_paths = non_empty_lines(path) - resolved_paths = ( - os.path.join(os.path.dirname(path), ref) for ref in referenced_paths - ) - dist_groups = map(find_distributions, resolved_paths) - return next(dist_groups, ()) - - -if hasattr(pkgutil, 'ImpImporter'): - register_finder(pkgutil.ImpImporter, find_on_path) - -register_finder(importlib_machinery.FileFinder, find_on_path) - -_declare_state('dict', _namespace_handlers={}) -_declare_state('dict', _namespace_packages={}) - - -def register_namespace_handler(importer_type, namespace_handler): - """Register `namespace_handler` to declare namespace packages - - `importer_type` is the type or class of a PEP 302 "Importer" (sys.path item - handler), and `namespace_handler` is a callable like this:: - - def namespace_handler(importer, path_entry, moduleName, module): - # return a path_entry to use for child packages - - Namespace handlers are only called if the importer object has already - agreed that it can handle the relevant path item, and they should only - return a subpath if the module __path__ does not already contain an - equivalent subpath. For an example namespace handler, see - ``pkg_resources.file_ns_handler``. - """ - _namespace_handlers[importer_type] = namespace_handler - - -def _handle_ns(packageName, path_item): - """Ensure that named package includes a subpath of path_item (if needed)""" - - importer = get_importer(path_item) - if importer is None: - return None - - # use find_spec (PEP 451) and fall-back to find_module (PEP 302) - try: - spec = importer.find_spec(packageName) - except AttributeError: - # capture warnings due to #1111 - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - loader = importer.find_module(packageName) - else: - loader = spec.loader if spec else None - - if loader is None: - return None - module = sys.modules.get(packageName) - if module is None: - module = sys.modules[packageName] = types.ModuleType(packageName) - module.__path__ = [] - _set_parent_ns(packageName) - elif not hasattr(module, '__path__'): - raise TypeError("Not a package:", packageName) - handler = _find_adapter(_namespace_handlers, importer) - subpath = handler(importer, path_item, packageName, module) - if subpath is not None: - path = module.__path__ - path.append(subpath) - importlib.import_module(packageName) - _rebuild_mod_path(path, packageName, module) - return subpath - - -def _rebuild_mod_path(orig_path, package_name, module): - """ - Rebuild module.__path__ ensuring that all entries are ordered - corresponding to their sys.path order - """ - sys_path = [_normalize_cached(p) for p in sys.path] - - def safe_sys_path_index(entry): - """ - Workaround for #520 and #513. 
- """ - try: - return sys_path.index(entry) - except ValueError: - return float('inf') - - def position_in_sys_path(path): - """ - Return the ordinal of the path based on its position in sys.path - """ - path_parts = path.split(os.sep) - module_parts = package_name.count('.') + 1 - parts = path_parts[:-module_parts] - return safe_sys_path_index(_normalize_cached(os.sep.join(parts))) - - new_path = sorted(orig_path, key=position_in_sys_path) - new_path = [_normalize_cached(p) for p in new_path] - - if isinstance(module.__path__, list): - module.__path__[:] = new_path - else: - module.__path__ = new_path - - -def declare_namespace(packageName): - """Declare that package 'packageName' is a namespace package""" - - msg = ( - f"Deprecated call to `pkg_resources.declare_namespace({packageName!r})`.\n" - "Implementing implicit namespace packages (as specified in PEP 420) " - "is preferred to `pkg_resources.declare_namespace`. " - "See https://setuptools.pypa.io/en/latest/references/" - "keywords.html#keyword-namespace-packages" - ) - warnings.warn(msg, DeprecationWarning, stacklevel=2) - - _imp.acquire_lock() - try: - if packageName in _namespace_packages: - return - - path = sys.path - parent, _, _ = packageName.rpartition('.') - - if parent: - declare_namespace(parent) - if parent not in _namespace_packages: - __import__(parent) - try: - path = sys.modules[parent].__path__ - except AttributeError as e: - raise TypeError("Not a package:", parent) from e - - # Track what packages are namespaces, so when new path items are added, - # they can be updated - _namespace_packages.setdefault(parent or None, []).append(packageName) - _namespace_packages.setdefault(packageName, []) - - for path_item in path: - # Ensure all the parent's path items are reflected in the child, - # if they apply - _handle_ns(packageName, path_item) - - finally: - _imp.release_lock() - - -def fixup_namespace_packages(path_item, parent=None): - """Ensure that previously-declared namespace packages include path_item""" - _imp.acquire_lock() - try: - for package in _namespace_packages.get(parent, ()): - subpath = _handle_ns(package, path_item) - if subpath: - fixup_namespace_packages(subpath, package) - finally: - _imp.release_lock() - - -def file_ns_handler(importer, path_item, packageName, module): - """Compute an ns-package subpath for a filesystem or zipfile importer""" - - subpath = os.path.join(path_item, packageName.split('.')[-1]) - normalized = _normalize_cached(subpath) - for item in module.__path__: - if _normalize_cached(item) == normalized: - break - else: - # Only return the path if it's not already there - return subpath - - -if hasattr(pkgutil, 'ImpImporter'): - register_namespace_handler(pkgutil.ImpImporter, file_ns_handler) - -register_namespace_handler(zipimport.zipimporter, file_ns_handler) -register_namespace_handler(importlib_machinery.FileFinder, file_ns_handler) - - -def null_ns_handler(importer, path_item, packageName, module): - return None - - -register_namespace_handler(object, null_ns_handler) - - -def normalize_path(filename): - """Normalize a file/dir name for comparison purposes""" - return os.path.normcase(os.path.realpath(os.path.normpath(_cygwin_patch(filename)))) - - -def _cygwin_patch(filename): # pragma: nocover - """ - Contrary to POSIX 2008, on Cygwin, getcwd (3) contains - symlink components. Using - os.path.abspath() works around this limitation. A fix in os.getcwd() - would probably better, in Cygwin even more so, except - that this seems to be by design... 
- """ - return os.path.abspath(filename) if sys.platform == 'cygwin' else filename - - -def _normalize_cached(filename, _cache={}): - try: - return _cache[filename] - except KeyError: - _cache[filename] = result = normalize_path(filename) - return result - - -def _is_egg_path(path): - """ - Determine if given path appears to be an egg. - """ - return _is_zip_egg(path) or _is_unpacked_egg(path) - - -def _is_zip_egg(path): - return ( - path.lower().endswith('.egg') - and os.path.isfile(path) - and zipfile.is_zipfile(path) - ) - - -def _is_unpacked_egg(path): - """ - Determine if given path appears to be an unpacked egg. - """ - return path.lower().endswith('.egg') and os.path.isfile( - os.path.join(path, 'EGG-INFO', 'PKG-INFO') - ) - - -def _set_parent_ns(packageName): - parts = packageName.split('.') - name = parts.pop() - if parts: - parent = '.'.join(parts) - setattr(sys.modules[parent], name, sys.modules[packageName]) - - -MODULE = re.compile(r"\w+(\.\w+)*$").match -EGG_NAME = re.compile( - r""" - (?P[^-]+) ( - -(?P[^-]+) ( - -py(?P[^-]+) ( - -(?P.+) - )? - )? - )? - """, - re.VERBOSE | re.IGNORECASE, -).match - - -class EntryPoint: - """Object representing an advertised importable object""" - - def __init__(self, name, module_name, attrs=(), extras=(), dist=None): - if not MODULE(module_name): - raise ValueError("Invalid module name", module_name) - self.name = name - self.module_name = module_name - self.attrs = tuple(attrs) - self.extras = tuple(extras) - self.dist = dist - - def __str__(self): - s = "%s = %s" % (self.name, self.module_name) - if self.attrs: - s += ':' + '.'.join(self.attrs) - if self.extras: - s += ' [%s]' % ','.join(self.extras) - return s - - def __repr__(self): - return "EntryPoint.parse(%r)" % str(self) - - def load(self, require=True, *args, **kwargs): - """ - Require packages for this EntryPoint, then resolve it. - """ - if not require or args or kwargs: - warnings.warn( - "Parameters to load are deprecated. Call .resolve and " - ".require separately.", - PkgResourcesDeprecationWarning, - stacklevel=2, - ) - if require: - self.require(*args, **kwargs) - return self.resolve() - - def resolve(self): - """ - Resolve the entry point from its module and attrs. - """ - module = __import__(self.module_name, fromlist=['__name__'], level=0) - try: - return functools.reduce(getattr, self.attrs, module) - except AttributeError as exc: - raise ImportError(str(exc)) from exc - - def require(self, env=None, installer=None): - if self.extras and not self.dist: - raise UnknownExtra("Can't require() without a distribution", self) - - # Get the requirements for this entry point with all its extras and - # then resolve them. We have to pass `extras` along when resolving so - # that the working set knows what extras we want. Otherwise, for - # dist-info distributions, the working set will assume that the - # requirements for that extra are purely optional and skip over them. 
- reqs = self.dist.requires(self.extras) - items = working_set.resolve(reqs, env, installer, extras=self.extras) - list(map(working_set.add, items)) - - pattern = re.compile( - r'\s*' - r'(?P.+?)\s*' - r'=\s*' - r'(?P[\w.]+)\s*' - r'(:\s*(?P[\w.]+))?\s*' - r'(?P\[.*\])?\s*$' - ) - - @classmethod - def parse(cls, src, dist=None): - """Parse a single entry point from string `src` - - Entry point syntax follows the form:: - - name = some.module:some.attr [extra1, extra2] - - The entry name and module name are required, but the ``:attrs`` and - ``[extras]`` parts are optional - """ - m = cls.pattern.match(src) - if not m: - msg = "EntryPoint must be in 'name=module:attrs [extras]' format" - raise ValueError(msg, src) - res = m.groupdict() - extras = cls._parse_extras(res['extras']) - attrs = res['attr'].split('.') if res['attr'] else () - return cls(res['name'], res['module'], attrs, extras, dist) - - @classmethod - def _parse_extras(cls, extras_spec): - if not extras_spec: - return () - req = Requirement.parse('x' + extras_spec) - if req.specs: - raise ValueError() - return req.extras - - @classmethod - def parse_group(cls, group, lines, dist=None): - """Parse an entry point group""" - if not MODULE(group): - raise ValueError("Invalid group name", group) - this = {} - for line in yield_lines(lines): - ep = cls.parse(line, dist) - if ep.name in this: - raise ValueError("Duplicate entry point", group, ep.name) - this[ep.name] = ep - return this - - @classmethod - def parse_map(cls, data, dist=None): - """Parse a map of entry point groups""" - if isinstance(data, dict): - data = data.items() - else: - data = split_sections(data) - maps = {} - for group, lines in data: - if group is None: - if not lines: - continue - raise ValueError("Entry points must be listed in groups") - group = group.strip() - if group in maps: - raise ValueError("Duplicate group name", group) - maps[group] = cls.parse_group(group, lines, dist) - return maps - - -def _version_from_file(lines): - """ - Given an iterable of lines from a Metadata file, return - the value of the Version field, if present, or None otherwise. 
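To illustrate the entry point syntax parsed above (`mypkg.cli:main` is a placeholder target):

```python
# Sketch only: parse and inspect a single entry point specification.
import pkg_resources

ep = pkg_resources.EntryPoint.parse('serve = mypkg.cli:main [extra1]')
ep.name          # 'serve'
ep.module_name   # 'mypkg.cli'
ep.attrs         # ('main',)
ep.extras        # ('extra1',)
```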
- """ - - def is_version_line(line): - return line.lower().startswith('version:') - - version_lines = filter(is_version_line, lines) - line = next(iter(version_lines), '') - _, _, value = line.partition(':') - return safe_version(value.strip()) or None - - -class Distribution: - """Wrap an actual or potential sys.path entry w/metadata""" - - PKG_INFO = 'PKG-INFO' - - def __init__( - self, - location=None, - metadata=None, - project_name=None, - version=None, - py_version=PY_MAJOR, - platform=None, - precedence=EGG_DIST, - ): - self.project_name = safe_name(project_name or 'Unknown') - if version is not None: - self._version = safe_version(version) - self.py_version = py_version - self.platform = platform - self.location = location - self.precedence = precedence - self._provider = metadata or empty_provider - - @classmethod - def from_location(cls, location, basename, metadata=None, **kw): - project_name, version, py_version, platform = [None] * 4 - basename, ext = os.path.splitext(basename) - if ext.lower() in _distributionImpl: - cls = _distributionImpl[ext.lower()] - - match = EGG_NAME(basename) - if match: - project_name, version, py_version, platform = match.group( - 'name', 'ver', 'pyver', 'plat' - ) - return cls( - location, - metadata, - project_name=project_name, - version=version, - py_version=py_version, - platform=platform, - **kw, - )._reload_version() - - def _reload_version(self): - return self - - @property - def hashcmp(self): - return ( - self._forgiving_parsed_version, - self.precedence, - self.key, - self.location, - self.py_version or '', - self.platform or '', - ) - - def __hash__(self): - return hash(self.hashcmp) - - def __lt__(self, other): - return self.hashcmp < other.hashcmp - - def __le__(self, other): - return self.hashcmp <= other.hashcmp - - def __gt__(self, other): - return self.hashcmp > other.hashcmp - - def __ge__(self, other): - return self.hashcmp >= other.hashcmp - - def __eq__(self, other): - if not isinstance(other, self.__class__): - # It's not a Distribution, so they are not equal - return False - return self.hashcmp == other.hashcmp - - def __ne__(self, other): - return not self == other - - # These properties have to be lazy so that we don't have to load any - # metadata until/unless it's actually needed. (i.e., some distributions - # may not know their name or version without loading PKG-INFO) - - @property - def key(self): - try: - return self._key - except AttributeError: - self._key = key = self.project_name.lower() - return key - - @property - def parsed_version(self): - if not hasattr(self, "_parsed_version"): - try: - self._parsed_version = parse_version(self.version) - except packaging.version.InvalidVersion as ex: - info = f"(package: {self.project_name})" - if hasattr(ex, "add_note"): - ex.add_note(info) # PEP 678 - raise - raise packaging.version.InvalidVersion(f"{str(ex)} {info}") from None - - return self._parsed_version - - @property - def _forgiving_parsed_version(self): - try: - return self.parsed_version - except packaging.version.InvalidVersion as ex: - self._parsed_version = parse_version(_forgiving_version(self.version)) - - notes = "\n".join(getattr(ex, "__notes__", [])) # PEP 678 - msg = f"""!!\n\n - ************************************************************************* - {str(ex)}\n{notes} - - This is a long overdue deprecation. - For the time being, `pkg_resources` will use `{self._parsed_version}` - as a replacement to avoid breaking existing environments, - but no future compatibility is guaranteed. 
- - If you maintain package {self.project_name} you should implement - the relevant changes to adequate the project to PEP 440 immediately. - ************************************************************************* - \n\n!! - """ - warnings.warn(msg, DeprecationWarning) - - return self._parsed_version - - @property - def version(self): - try: - return self._version - except AttributeError as e: - version = self._get_version() - if version is None: - path = self._get_metadata_path_for_display(self.PKG_INFO) - msg = ("Missing 'Version:' header and/or {} file at path: {}").format( - self.PKG_INFO, path - ) - raise ValueError(msg, self) from e - - return version - - @property - def _dep_map(self): - """ - A map of extra to its list of (direct) requirements - for this distribution, including the null extra. - """ - try: - return self.__dep_map - except AttributeError: - self.__dep_map = self._filter_extras(self._build_dep_map()) - return self.__dep_map - - @staticmethod - def _filter_extras(dm): - """ - Given a mapping of extras to dependencies, strip off - environment markers and filter out any dependencies - not matching the markers. - """ - for extra in list(filter(None, dm)): - new_extra = extra - reqs = dm.pop(extra) - new_extra, _, marker = extra.partition(':') - fails_marker = marker and ( - invalid_marker(marker) or not evaluate_marker(marker) - ) - if fails_marker: - reqs = [] - new_extra = safe_extra(new_extra) or None - - dm.setdefault(new_extra, []).extend(reqs) - return dm - - def _build_dep_map(self): - dm = {} - for name in 'requires.txt', 'depends.txt': - for extra, reqs in split_sections(self._get_metadata(name)): - dm.setdefault(extra, []).extend(parse_requirements(reqs)) - return dm - - def requires(self, extras=()): - """List of Requirements needed for this distro if `extras` are used""" - dm = self._dep_map - deps = [] - deps.extend(dm.get(None, ())) - for ext in extras: - try: - deps.extend(dm[safe_extra(ext)]) - except KeyError as e: - raise UnknownExtra( - "%s has no such extra feature %r" % (self, ext) - ) from e - return deps - - def _get_metadata_path_for_display(self, name): - """ - Return the path to the given metadata file, if available. - """ - try: - # We need to access _get_metadata_path() on the provider object - # directly rather than through this class's __getattr__() - # since _get_metadata_path() is marked private. - path = self._provider._get_metadata_path(name) - - # Handle exceptions e.g. in case the distribution's metadata - # provider doesn't support _get_metadata_path(). 
- except Exception: - return '[could not detect]' - - return path - - def _get_metadata(self, name): - if self.has_metadata(name): - for line in self.get_metadata_lines(name): - yield line - - def _get_version(self): - lines = self._get_metadata(self.PKG_INFO) - version = _version_from_file(lines) - - return version - - def activate(self, path=None, replace=False): - """Ensure distribution is importable on `path` (default=sys.path)""" - if path is None: - path = sys.path - self.insert_on(path, replace=replace) - if path is sys.path: - fixup_namespace_packages(self.location) - for pkg in self._get_metadata('namespace_packages.txt'): - if pkg in sys.modules: - declare_namespace(pkg) - - def egg_name(self): - """Return what this distribution's standard .egg filename should be""" - filename = "%s-%s-py%s" % ( - to_filename(self.project_name), - to_filename(self.version), - self.py_version or PY_MAJOR, - ) - - if self.platform: - filename += '-' + self.platform - return filename - - def __repr__(self): - if self.location: - return "%s (%s)" % (self, self.location) - else: - return str(self) - - def __str__(self): - try: - version = getattr(self, 'version', None) - except ValueError: - version = None - version = version or "[unknown version]" - return "%s %s" % (self.project_name, version) - - def __getattr__(self, attr): - """Delegate all unrecognized public attributes to .metadata provider""" - if attr.startswith('_'): - raise AttributeError(attr) - return getattr(self._provider, attr) - - def __dir__(self): - return list( - set(super(Distribution, self).__dir__()) - | set(attr for attr in self._provider.__dir__() if not attr.startswith('_')) - ) - - @classmethod - def from_filename(cls, filename, metadata=None, **kw): - return cls.from_location( - _normalize_cached(filename), os.path.basename(filename), metadata, **kw - ) - - def as_requirement(self): - """Return a ``Requirement`` that matches this distribution exactly""" - if isinstance(self.parsed_version, packaging.version.Version): - spec = "%s==%s" % (self.project_name, self.parsed_version) - else: - spec = "%s===%s" % (self.project_name, self.parsed_version) - - return Requirement.parse(spec) - - def load_entry_point(self, group, name): - """Return the `name` entry point of `group` or raise ImportError""" - ep = self.get_entry_info(group, name) - if ep is None: - raise ImportError("Entry point %r not found" % ((group, name),)) - return ep.load() - - def get_entry_map(self, group=None): - """Return the entry point map for `group`, or the full entry map""" - try: - ep_map = self._ep_map - except AttributeError: - ep_map = self._ep_map = EntryPoint.parse_map( - self._get_metadata('entry_points.txt'), self - ) - if group is not None: - return ep_map.get(group, {}) - return ep_map - - def get_entry_info(self, group, name): - """Return the EntryPoint object for `group`+`name`, or ``None``""" - return self.get_entry_map(group).get(name) - - # FIXME: 'Distribution.insert_on' is too complex (13) - def insert_on(self, path, loc=None, replace=False): # noqa: C901 - """Ensure self.location is on path - - If replace=False (default): - - If location is already in path anywhere, do nothing. - - Else: - - If it's an egg and its parent directory is on path, - insert just ahead of the parent. - - Else: add to the end of path. - If replace=True: - - If location is already on path anywhere (not eggs) - or higher priority than its parent (eggs) - do nothing. 
- - Else: - - If it's an egg and its parent directory is on path, - insert just ahead of the parent, - removing any lower-priority entries. - - Else: add it to the front of path. - """ - - loc = loc or self.location - if not loc: - return - - nloc = _normalize_cached(loc) - bdir = os.path.dirname(nloc) - npath = [(p and _normalize_cached(p) or p) for p in path] - - for p, item in enumerate(npath): - if item == nloc: - if replace: - break - else: - # don't modify path (even removing duplicates) if - # found and not replace - return - elif item == bdir and self.precedence == EGG_DIST: - # if it's an .egg, give it precedence over its directory - # UNLESS it's already been added to sys.path and replace=False - if (not replace) and nloc in npath[p:]: - return - if path is sys.path: - self.check_version_conflict() - path.insert(p, loc) - npath.insert(p, nloc) - break - else: - if path is sys.path: - self.check_version_conflict() - if replace: - path.insert(0, loc) - else: - path.append(loc) - return - - # p is the spot where we found or inserted loc; now remove duplicates - while True: - try: - np = npath.index(nloc, p + 1) - except ValueError: - break - else: - del npath[np], path[np] - # ha! - p = np - - return - - def check_version_conflict(self): - if self.key == 'setuptools': - # ignore the inevitable setuptools self-conflicts :( - return - - nsp = dict.fromkeys(self._get_metadata('namespace_packages.txt')) - loc = normalize_path(self.location) - for modname in self._get_metadata('top_level.txt'): - if ( - modname not in sys.modules - or modname in nsp - or modname in _namespace_packages - ): - continue - if modname in ('pkg_resources', 'setuptools', 'site'): - continue - fn = getattr(sys.modules[modname], '__file__', None) - if fn and ( - normalize_path(fn).startswith(loc) or fn.startswith(self.location) - ): - continue - issue_warning( - "Module %s was already imported from %s, but %s is being added" - " to sys.path" % (modname, fn, self.location), - ) - - def has_version(self): - try: - self.version - except ValueError: - issue_warning("Unbuilt egg for " + repr(self)) - return False - except SystemError: - # TODO: remove this except clause when python/cpython#103632 is fixed. - return False - return True - - def clone(self, **kw): - """Copy this distribution, substituting in any changed keyword args""" - names = 'project_name version py_version platform location precedence' - for attr in names.split(): - kw.setdefault(attr, getattr(self, attr, None)) - kw.setdefault('metadata', self._provider) - return self.__class__(**kw) - - @property - def extras(self): - return [dep for dep in self._dep_map if dep] - - -class EggInfoDistribution(Distribution): - def _reload_version(self): - """ - Packages installed by distutils (e.g. numpy or scipy), - which uses an old safe_version, and so - their version numbers can get mangled when - converted to filenames (e.g., 1.11.0.dev0+2329eae to - 1.11.0.dev0_2329eae). These distributions will not be - parsed properly - downstream by Distribution and safe_version, so - take an extra step and try to get the version number from - the metadata file itself instead of the filename. - """ - md_version = self._get_version() - if md_version: - self._version = md_version - return self - - -class DistInfoDistribution(Distribution): - """ - Wrap an actual or potential sys.path entry - w/metadata, .dist-info style. 
- """ - - PKG_INFO = 'METADATA' - EQEQ = re.compile(r"([\(,])\s*(\d.*?)\s*([,\)])") - - @property - def _parsed_pkg_info(self): - """Parse and cache metadata""" - try: - return self._pkg_info - except AttributeError: - metadata = self.get_metadata(self.PKG_INFO) - self._pkg_info = email.parser.Parser().parsestr(metadata) - return self._pkg_info - - @property - def _dep_map(self): - try: - return self.__dep_map - except AttributeError: - self.__dep_map = self._compute_dependencies() - return self.__dep_map - - def _compute_dependencies(self): - """Recompute this distribution's dependencies.""" - dm = self.__dep_map = {None: []} - - reqs = [] - # Including any condition expressions - for req in self._parsed_pkg_info.get_all('Requires-Dist') or []: - reqs.extend(parse_requirements(req)) - - def reqs_for_extra(extra): - for req in reqs: - if not req.marker or req.marker.evaluate({'extra': extra}): - yield req - - common = types.MappingProxyType(dict.fromkeys(reqs_for_extra(None))) - dm[None].extend(common) - - for extra in self._parsed_pkg_info.get_all('Provides-Extra') or []: - s_extra = safe_extra(extra.strip()) - dm[s_extra] = [r for r in reqs_for_extra(extra) if r not in common] - - return dm - - -_distributionImpl = { - '.egg': Distribution, - '.egg-info': EggInfoDistribution, - '.dist-info': DistInfoDistribution, -} - - -def issue_warning(*args, **kw): - level = 1 - g = globals() - try: - # find the first stack frame that is *not* code in - # the pkg_resources module, to use for the warning - while sys._getframe(level).f_globals is g: - level += 1 - except ValueError: - pass - warnings.warn(stacklevel=level + 1, *args, **kw) - - -def parse_requirements(strs): - """ - Yield ``Requirement`` objects for each specification in `strs`. - - `strs` must be a string, or a (possibly-nested) iterable thereof. - """ - return map(Requirement, join_continuation(map(drop_comment, yield_lines(strs)))) - - -class RequirementParseError(packaging.requirements.InvalidRequirement): - "Compatibility wrapper for InvalidRequirement" - - -class Requirement(packaging.requirements.Requirement): - def __init__(self, requirement_string): - """DO NOT CALL THIS UNDOCUMENTED METHOD; use Requirement.parse()!""" - super(Requirement, self).__init__(requirement_string) - self.unsafe_name = self.name - project_name = safe_name(self.name) - self.project_name, self.key = project_name, project_name.lower() - self.specs = [(spec.operator, spec.version) for spec in self.specifier] - self.extras = tuple(map(safe_extra, self.extras)) - self.hashCmp = ( - self.key, - self.url, - self.specifier, - frozenset(self.extras), - str(self.marker) if self.marker else None, - ) - self.__hash = hash(self.hashCmp) - - def __eq__(self, other): - return isinstance(other, Requirement) and self.hashCmp == other.hashCmp - - def __ne__(self, other): - return not self == other - - def __contains__(self, item): - if isinstance(item, Distribution): - if item.key != self.key: - return False - - item = item.version - - # Allow prereleases always in order to match the previous behavior of - # this method. In the future this should be smarter and follow PEP 440 - # more accurately. - return self.specifier.contains(item, prereleases=True) - - def __hash__(self): - return self.__hash - - def __repr__(self): - return "Requirement.parse(%r)" % str(self) - - @staticmethod - def parse(s): - (req,) = parse_requirements(s) - return req - - -def _always_object(classes): - """ - Ensure object appears in the mro even - for old-style classes. 
- """ - if object not in classes: - return classes + (object,) - return classes - - -def _find_adapter(registry, ob): - """Return an adapter factory for `ob` from `registry`""" - types = _always_object(inspect.getmro(getattr(ob, '__class__', type(ob)))) - for t in types: - if t in registry: - return registry[t] - - -def ensure_directory(path): - """Ensure that the parent directory of `path` exists""" - dirname = os.path.dirname(path) - os.makedirs(dirname, exist_ok=True) - - -def _bypass_ensure_directory(path): - """Sandbox-bypassing version of ensure_directory()""" - if not WRITE_SUPPORT: - raise IOError('"os.mkdir" not supported on this platform.') - dirname, filename = split(path) - if dirname and filename and not isdir(dirname): - _bypass_ensure_directory(dirname) - try: - mkdir(dirname, 0o755) - except FileExistsError: - pass - - -def split_sections(s): - """Split a string or iterable thereof into (section, content) pairs - - Each ``section`` is a stripped version of the section header ("[section]") - and each ``content`` is a list of stripped lines excluding blank lines and - comment-only lines. If there are any such lines before the first section - header, they're returned in a first ``section`` of ``None``. - """ - section = None - content = [] - for line in yield_lines(s): - if line.startswith("["): - if line.endswith("]"): - if section or content: - yield section, content - section = line[1:-1].strip() - content = [] - else: - raise ValueError("Invalid section heading", line) - else: - content.append(line) - - # wrap up last segment - yield section, content - - -def _mkstemp(*args, **kw): - old_open = os.open - try: - # temporarily bypass sandboxing - os.open = os_open - return tempfile.mkstemp(*args, **kw) - finally: - # and then put it back - os.open = old_open - - -# Silence the PEP440Warning by default, so that end users don't get hit by it -# randomly just because they use pkg_resources. We want to append the rule -# because we want earlier uses of filterwarnings to take precedence over this -# one. -warnings.filterwarnings("ignore", category=PEP440Warning, append=True) - - -# from jaraco.functools 1.3 -def _call_aside(f, *args, **kwargs): - f(*args, **kwargs) - return f - - -@_call_aside -def _initialize(g=globals()): - "Set up global resource manager (deliberately not state-saved)" - manager = ResourceManager() - g['_manager'] = manager - g.update( - (name, getattr(manager, name)) - for name in dir(manager) - if not name.startswith('_') - ) - - -class PkgResourcesDeprecationWarning(Warning): - """ - Base class for warning about deprecations in ``pkg_resources`` - - This class is not derived from ``DeprecationWarning``, and as such is - visible by default. - """ - - -@_call_aside -def _initialize_master_working_set(): - """ - Prepare the master working set and make the ``require()`` - API available. - - This function has explicit effects on the global state - of pkg_resources. It is intended to be invoked once at - the initialization of this module. - - Invocation by other packages is unsupported and done - at their own risk. 
- """ - working_set = WorkingSet._build_master() - _declare_state('object', working_set=working_set) - - require = working_set.require - iter_entry_points = working_set.iter_entry_points - add_activation_listener = working_set.subscribe - run_script = working_set.run_script - # backward compatibility - run_main = run_script - # Activate all distributions already on sys.path with replace=False and - # ensure that all distributions added to the working set in the future - # (e.g. by calling ``require()``) will get activated as well, - # with higher priority (replace=True). - tuple(dist.activate(replace=False) for dist in working_set) - add_activation_listener( - lambda dist: dist.activate(replace=True), - existing=False, - ) - working_set.entries = [] - # match order - list(map(working_set.add_entry, sys.path)) - globals().update(locals()) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Scripts/activate_this.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Scripts/activate_this.py deleted file mode 100644 index cdef4d72071a4b99a1300e1444905784433179d6..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Scripts/activate_this.py +++ /dev/null @@ -1,36 +0,0 @@ -""" -Activate virtualenv for current interpreter: - -Use exec(open(this_file).read(), {'__file__': this_file}). - -This can be used when you must use an existing Python interpreter, not the virtualenv bin/python. -""" # noqa: D415 -from __future__ import annotations - -import os -import site -import sys - -try: - abs_file = os.path.abspath(__file__) -except NameError as exc: - msg = "You must use exec(open(this_file).read(), {'__file__': this_file}))" - raise AssertionError(msg) from exc - -bin_dir = os.path.dirname(abs_file) -base = bin_dir[: -len("Scripts") - 1] # strip away the bin part from the __file__, plus the path separator - -# prepend bin to PATH (this file is inside the bin directory) -os.environ["PATH"] = os.pathsep.join([bin_dir, *os.environ.get("PATH", "").split(os.pathsep)]) -os.environ["VIRTUAL_ENV"] = base # virtual env is right above bin directory -os.environ["VIRTUAL_ENV_PROMPT"] = "" or os.path.basename(base) # noqa: SIM222 - -# add the virtual environments libraries to the host python import mechanism -prev_length = len(sys.path) -for lib in "..\\Lib\\site-packages".split(os.pathsep): - path = os.path.realpath(os.path.join(bin_dir, lib)) - site.addsitedir(path.decode("utf-8") if "" else path) -sys.path[:] = sys.path[prev_length:] + sys.path[0:prev_length] - -sys.real_prefix = sys.prefix -sys.prefix = base diff --git a/spaces/Vikas01/Attendence_System/static/styles/style.css b/spaces/Vikas01/Attendence_System/static/styles/style.css deleted file mode 100644 index e4f0be3a9a3c6cbe92c166e3a32edb46f20e9c68..0000000000000000000000000000000000000000 --- a/spaces/Vikas01/Attendence_System/static/styles/style.css +++ /dev/null @@ -1,11046 +0,0 @@ -@charset "UTF-8"; -/*! -* Start Bootstrap - Freelancer v7.0.7 (https://startbootstrap.com/theme/freelancer) -* Copyright 2013-2023 Start Bootstrap -* Licensed under MIT (https://github.com/StartBootstrap/startbootstrap-freelancer/blob/master/LICENSE) -*/ -/*! - * Bootstrap v5.2.3 (https://getbootstrap.com/) - * Copyright 2011-2022 The Bootstrap Authors - * Copyright 2011-2022 Twitter, Inc. 
- * Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE) - */ -:root { - --bs-blue: #0d6efd; - --bs-indigo: #6610f2; - --bs-purple: #6f42c1; - --bs-pink: #d63384; - --bs-red: #dc3545; - --bs-orange: #fd7e14; - --bs-yellow: #ffc107; - --bs-green: #198754; - --bs-teal: #1abc9c; - --bs-cyan: #0dcaf0; - --bs-black: #000; - --bs-white: #fff; - --bs-gray: #6c757d; - --bs-gray-dark: #343a40; - --bs-gray-100: #f8f9fa; - --bs-gray-200: #e9ecef; - --bs-gray-300: #dee2e6; - --bs-gray-400: #ced4da; - --bs-gray-500: #adb5bd; - --bs-gray-600: #6c757d; - --bs-gray-700: #495057; - --bs-gray-800: #343a40; - --bs-gray-900: #212529; - --bs-primary: #1abc9c; - --bs-secondary: #2c3e50; - --bs-success: #198754; - --bs-info: #0dcaf0; - --bs-warning: #ffc107; - --bs-danger: #dc3545; - --bs-light: #f8f9fa; - --bs-dark: #212529; - --bs-primary-rgb: 26, 188, 156; - --bs-secondary-rgb: 44, 62, 80; - --bs-success-rgb: 25, 135, 84; - --bs-info-rgb: 13, 202, 240; - --bs-warning-rgb: 255, 193, 7; - --bs-danger-rgb: 220, 53, 69; - --bs-light-rgb: 248, 249, 250; - --bs-dark-rgb: 33, 37, 41; - --bs-white-rgb: 255, 255, 255; - --bs-black-rgb: 0, 0, 0; - --bs-body-color-rgb: 33, 37, 41; - --bs-body-bg-rgb: 255, 255, 255; - --bs-font-sans-serif: "Lato", -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji"; - --bs-font-monospace: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace; - --bs-gradient: linear-gradient(180deg, rgba(255, 255, 255, 0.15), rgba(255, 255, 255, 0)); - --bs-body-font-family: var(--bs-font-sans-serif); - --bs-body-font-size: 1rem; - --bs-body-font-weight: 400; - --bs-body-line-height: 1.5; - --bs-body-color: #212529; - --bs-body-bg: #fff; - --bs-border-width: 0.125rem; - --bs-border-style: solid; - --bs-border-color: #dee2e6; - --bs-border-color-translucent: rgba(0, 0, 0, 0.175); - --bs-border-radius: 0.5rem; - --bs-border-radius-sm: 0.25rem; - --bs-border-radius-lg: 0.75rem; - --bs-border-radius-xl: 1rem; - --bs-border-radius-2xl: 2rem; - --bs-border-radius-pill: 50rem; - --bs-link-color: #1abc9c; - --bs-link-hover-color: #15967d; - --bs-code-color: #d63384; - --bs-highlight-bg: #fff3cd; -} - -*, -*::before, -*::after { - box-sizing: border-box; -} - -@media (prefers-reduced-motion: no-preference) { - :root { - scroll-behavior: smooth; - } -} - -body { - margin: 0; - font-family: var(--bs-body-font-family); - font-size: var(--bs-body-font-size); - font-weight: var(--bs-body-font-weight); - line-height: var(--bs-body-line-height); - color: var(--bs-body-color); - text-align: var(--bs-body-text-align); - background-color: var(--bs-body-bg); - -webkit-text-size-adjust: 100%; - -webkit-tap-highlight-color: rgba(0, 0, 0, 0); -} - -hr { - margin: 1rem 0; - color: inherit; - border: 0; - border-top: 0.125rem solid; - opacity: 0.25; -} - -h6, .h6, h5, .h5, h4, .h4, h3, .h3, h2, .h2, h1, .h1 { - margin-top: 0; - margin-bottom: 0.5rem; - font-family: "Montserrat", -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji"; - font-weight: 700; - line-height: 1.2; -} - -h1, .h1 { - font-size: calc(1.375rem + 1.5vw); -} -@media (min-width: 1200px) { - h1, .h1 { - font-size: 2.5rem; - } -} - -h2, .h2 { - font-size: calc(1.325rem + 0.9vw); -} -@media (min-width: 1200px) { - h2, .h2 { - font-size: 2rem; - } -} - -h3, .h3 
{ - font-size: calc(1.3rem + 0.6vw); -} -@media (min-width: 1200px) { - h3, .h3 { - font-size: 1.75rem; - } -} - -h4, .h4 { - font-size: calc(1.275rem + 0.3vw); -} -@media (min-width: 1200px) { - h4, .h4 { - font-size: 1.5rem; - } -} - -h5, .h5 { - font-size: 1.25rem; -} - -h6, .h6 { - font-size: 1rem; -} - -p { - margin-top: 0; - margin-bottom: 1rem; -} - -abbr[title] { - -webkit-text-decoration: underline dotted; - text-decoration: underline dotted; - cursor: help; - -webkit-text-decoration-skip-ink: none; - text-decoration-skip-ink: none; -} - -address { - margin-bottom: 1rem; - font-style: normal; - line-height: inherit; -} - -ol, -ul { - padding-left: 2rem; -} - -ol, -ul, -dl { - margin-top: 0; - margin-bottom: 1rem; -} - -ol ol, -ul ul, -ol ul, -ul ol { - margin-bottom: 0; -} - -dt { - font-weight: 700; -} - -dd { - margin-bottom: 0.5rem; - margin-left: 0; -} - -blockquote { - margin: 0 0 1rem; -} - -b, -strong { - font-weight: bolder; -} - -small, .small { - font-size: 0.875em; -} - -mark, .mark { - padding: 0.1875em; - background-color: var(--bs-highlight-bg); -} - -sub, -sup { - position: relative; - font-size: 0.75em; - line-height: 0; - vertical-align: baseline; -} - -sub { - bottom: -0.25em; -} - -sup { - top: -0.5em; -} - -a { - color: var(--bs-link-color); - text-decoration: underline; -} -a:hover { - color: var(--bs-link-hover-color); -} - -a:not([href]):not([class]), a:not([href]):not([class]):hover { - color: inherit; - text-decoration: none; -} - -pre, -code, -kbd, -samp { - font-family: var(--bs-font-monospace); - font-size: 1em; -} - -pre { - display: block; - margin-top: 0; - margin-bottom: 1rem; - overflow: auto; - font-size: 0.875em; -} -pre code { - font-size: inherit; - color: inherit; - word-break: normal; -} - -code { - font-size: 0.875em; - color: var(--bs-code-color); - word-wrap: break-word; -} -a > code { - color: inherit; -} - -kbd { - padding: 0.1875rem 0.375rem; - font-size: 0.875em; - color: var(--bs-body-bg); - background-color: var(--bs-body-color); - border-radius: 0.25rem; -} -kbd kbd { - padding: 0; - font-size: 1em; -} - -figure { - margin: 0 0 1rem; -} - -img, -svg { - vertical-align: middle; -} - -table { - caption-side: bottom; - border-collapse: collapse; -} - -caption { - padding-top: 0.5rem; - padding-bottom: 0.5rem; - color: #6c757d; - text-align: left; -} - -th { - text-align: inherit; - text-align: -webkit-match-parent; -} - -thead, -tbody, -tfoot, -tr, -td, -th { - border-color: inherit; - border-style: solid; - border-width: 0; -} - -label { - display: inline-block; -} - -button { - border-radius: 0; -} - -button:focus:not(:focus-visible) { - outline: 0; -} - -input, -button, -select, -optgroup, -textarea { - margin: 0; - font-family: inherit; - font-size: inherit; - line-height: inherit; -} - -button, -select { - text-transform: none; -} - -[role=button] { - cursor: pointer; -} - -select { - word-wrap: normal; -} -select:disabled { - opacity: 1; -} - -[list]:not([type=date]):not([type=datetime-local]):not([type=month]):not([type=week]):not([type=time])::-webkit-calendar-picker-indicator { - display: none !important; -} - -button, -[type=button], -[type=reset], -[type=submit] { - -webkit-appearance: button; -} -button:not(:disabled), -[type=button]:not(:disabled), -[type=reset]:not(:disabled), -[type=submit]:not(:disabled) { - cursor: pointer; -} - -::-moz-focus-inner { - padding: 0; - border-style: none; -} - -textarea { - resize: vertical; -} - -fieldset { - min-width: 0; - padding: 0; - margin: 0; - border: 0; -} - -legend { - float: 
left; - width: 100%; - padding: 0; - margin-bottom: 0.5rem; - font-size: calc(1.275rem + 0.3vw); - line-height: inherit; -} -@media (min-width: 1200px) { - legend { - font-size: 1.5rem; - } -} -legend + * { - clear: left; -} - -::-webkit-datetime-edit-fields-wrapper, -::-webkit-datetime-edit-text, -::-webkit-datetime-edit-minute, -::-webkit-datetime-edit-hour-field, -::-webkit-datetime-edit-day-field, -::-webkit-datetime-edit-month-field, -::-webkit-datetime-edit-year-field { - padding: 0; -} - -::-webkit-inner-spin-button { - height: auto; -} - -[type=search] { - outline-offset: -2px; - -webkit-appearance: textfield; -} - -/* rtl:raw: -[type="tel"], -[type="url"], -[type="email"], -[type="number"] { - direction: ltr; -} -*/ -::-webkit-search-decoration { - -webkit-appearance: none; -} - -::-webkit-color-swatch-wrapper { - padding: 0; -} - -::file-selector-button { - font: inherit; - -webkit-appearance: button; -} - -output { - display: inline-block; -} - -iframe { - border: 0; -} - -summary { - display: list-item; - cursor: pointer; -} - -progress { - vertical-align: baseline; -} - -[hidden] { - display: none !important; -} - -.lead { - font-size: 1.25rem; - font-weight: 300; -} - -.display-1 { - font-size: calc(1.625rem + 4.5vw); - font-weight: 300; - line-height: 1.2; -} -@media (min-width: 1200px) { - .display-1 { - font-size: 5rem; - } -} - -.display-2 { - font-size: calc(1.575rem + 3.9vw); - font-weight: 300; - line-height: 1.2; -} -@media (min-width: 1200px) { - .display-2 { - font-size: 4.5rem; - } -} - -.display-3 { - font-size: calc(1.525rem + 3.3vw); - font-weight: 300; - line-height: 1.2; -} -@media (min-width: 1200px) { - .display-3 { - font-size: 4rem; - } -} - -.display-4 { - font-size: calc(1.475rem + 2.7vw); - font-weight: 300; - line-height: 1.2; -} -@media (min-width: 1200px) { - .display-4 { - font-size: 3.5rem; - } -} - -.display-5 { - font-size: calc(1.425rem + 2.1vw); - font-weight: 300; - line-height: 1.2; -} -@media (min-width: 1200px) { - .display-5 { - font-size: 3rem; - } -} - -.display-6 { - font-size: calc(1.375rem + 1.5vw); - font-weight: 300; - line-height: 1.2; -} -@media (min-width: 1200px) { - .display-6 { - font-size: 2.5rem; - } -} - -.list-unstyled { - padding-left: 0; - list-style: none; -} - -.list-inline { - padding-left: 0; - list-style: none; -} - -.list-inline-item { - display: inline-block; -} -.list-inline-item:not(:last-child) { - margin-right: 0.5rem; -} - -.initialism { - font-size: 0.875em; - text-transform: uppercase; -} - -.blockquote { - margin-bottom: 1rem; - font-size: 1.25rem; -} -.blockquote > :last-child { - margin-bottom: 0; -} - -.blockquote-footer { - margin-top: -1rem; - margin-bottom: 1rem; - font-size: 0.875em; - color: #6c757d; -} -.blockquote-footer::before { - content: "— "; -} - -.img-fluid { - max-width: 100%; - height: auto; -} - -.img-thumbnail { - padding: 0.25rem; - background-color: #fff; - border: 0.125rem solid var(--bs-border-color); - border-radius: 0.5rem; - max-width: 100%; - height: auto; -} - -.figure { - display: inline-block; -} - -.figure-img { - margin-bottom: 0.5rem; - line-height: 1; -} - -.figure-caption { - font-size: 0.875em; - color: #6c757d; -} - -.container, -.container-fluid, -.container-xxl, -.container-xl, -.container-lg, -.container-md, -.container-sm { - --bs-gutter-x: 1.5rem; - --bs-gutter-y: 0; - width: 100%; - padding-right: calc(var(--bs-gutter-x) * 0.5); - padding-left: calc(var(--bs-gutter-x) * 0.5); - margin-right: auto; - margin-left: auto; -} - -@media (min-width: 576px) { - 
.container-sm, .container { - max-width: 540px; - } -} -@media (min-width: 768px) { - .container-md, .container-sm, .container { - max-width: 720px; - } -} -@media (min-width: 992px) { - .container-lg, .container-md, .container-sm, .container { - max-width: 960px; - } -} -@media (min-width: 1200px) { - .container-xl, .container-lg, .container-md, .container-sm, .container { - max-width: 1140px; - } -} -@media (min-width: 1400px) { - .container-xxl, .container-xl, .container-lg, .container-md, .container-sm, .container { - max-width: 1320px; - } -} -.row { - --bs-gutter-x: 1.5rem; - --bs-gutter-y: 0; - display: flex; - flex-wrap: wrap; - margin-top: calc(-1 * var(--bs-gutter-y)); - margin-right: calc(-0.5 * var(--bs-gutter-x)); - margin-left: calc(-0.5 * var(--bs-gutter-x)); -} -.row > * { - flex-shrink: 0; - width: 100%; - max-width: 100%; - padding-right: calc(var(--bs-gutter-x) * 0.5); - padding-left: calc(var(--bs-gutter-x) * 0.5); - margin-top: var(--bs-gutter-y); -} - -.col { - flex: 1 0 0%; -} - -.row-cols-auto > * { - flex: 0 0 auto; - width: auto; -} - -.row-cols-1 > * { - flex: 0 0 auto; - width: 100%; -} - -.row-cols-2 > * { - flex: 0 0 auto; - width: 50%; -} - -.row-cols-3 > * { - flex: 0 0 auto; - width: 33.3333333333%; -} - -.row-cols-4 > * { - flex: 0 0 auto; - width: 25%; -} - -.row-cols-5 > * { - flex: 0 0 auto; - width: 20%; -} - -.row-cols-6 > * { - flex: 0 0 auto; - width: 16.6666666667%; -} - -.col-auto { - flex: 0 0 auto; - width: auto; -} - -.col-1 { - flex: 0 0 auto; - width: 8.33333333%; -} - -.col-2 { - flex: 0 0 auto; - width: 16.66666667%; -} - -.col-3 { - flex: 0 0 auto; - width: 25%; -} - -.col-4 { - flex: 0 0 auto; - width: 33.33333333%; -} - -.col-5 { - flex: 0 0 auto; - width: 41.66666667%; -} - -.col-6 { - flex: 0 0 auto; - width: 50%; -} - -.col-7 { - flex: 0 0 auto; - width: 58.33333333%; -} - -.col-8 { - flex: 0 0 auto; - width: 66.66666667%; -} - -.col-9 { - flex: 0 0 auto; - width: 75%; -} - -.col-10 { - flex: 0 0 auto; - width: 83.33333333%; -} - -.col-11 { - flex: 0 0 auto; - width: 91.66666667%; -} - -.col-12 { - flex: 0 0 auto; - width: 100%; -} - -.offset-1 { - margin-left: 8.33333333%; -} - -.offset-2 { - margin-left: 16.66666667%; -} - -.offset-3 { - margin-left: 25%; -} - -.offset-4 { - margin-left: 33.33333333%; -} - -.offset-5 { - margin-left: 41.66666667%; -} - -.offset-6 { - margin-left: 50%; -} - -.offset-7 { - margin-left: 58.33333333%; -} - -.offset-8 { - margin-left: 66.66666667%; -} - -.offset-9 { - margin-left: 75%; -} - -.offset-10 { - margin-left: 83.33333333%; -} - -.offset-11 { - margin-left: 91.66666667%; -} - -.g-0, -.gx-0 { - --bs-gutter-x: 0; -} - -.g-0, -.gy-0 { - --bs-gutter-y: 0; -} - -.g-1, -.gx-1 { - --bs-gutter-x: 0.25rem; -} - -.g-1, -.gy-1 { - --bs-gutter-y: 0.25rem; -} - -.g-2, -.gx-2 { - --bs-gutter-x: 0.5rem; -} - -.g-2, -.gy-2 { - --bs-gutter-y: 0.5rem; -} - -.g-3, -.gx-3 { - --bs-gutter-x: 1rem; -} - -.g-3, -.gy-3 { - --bs-gutter-y: 1rem; -} - -.g-4, -.gx-4 { - --bs-gutter-x: 1.5rem; -} - -.g-4, -.gy-4 { - --bs-gutter-y: 1.5rem; -} - -.g-5, -.gx-5 { - --bs-gutter-x: 3rem; -} - -.g-5, -.gy-5 { - --bs-gutter-y: 3rem; -} - -@media (min-width: 576px) { - .col-sm { - flex: 1 0 0%; - } - .row-cols-sm-auto > * { - flex: 0 0 auto; - width: auto; - } - .row-cols-sm-1 > * { - flex: 0 0 auto; - width: 100%; - } - .row-cols-sm-2 > * { - flex: 0 0 auto; - width: 50%; - } - .row-cols-sm-3 > * { - flex: 0 0 auto; - width: 33.3333333333%; - } - .row-cols-sm-4 > * { - flex: 0 0 auto; - width: 25%; - } - .row-cols-sm-5 > * { - 
flex: 0 0 auto; - width: 20%; - } - .row-cols-sm-6 > * { - flex: 0 0 auto; - width: 16.6666666667%; - } - .col-sm-auto { - flex: 0 0 auto; - width: auto; - } - .col-sm-1 { - flex: 0 0 auto; - width: 8.33333333%; - } - .col-sm-2 { - flex: 0 0 auto; - width: 16.66666667%; - } - .col-sm-3 { - flex: 0 0 auto; - width: 25%; - } - .col-sm-4 { - flex: 0 0 auto; - width: 33.33333333%; - } - .col-sm-5 { - flex: 0 0 auto; - width: 41.66666667%; - } - .col-sm-6 { - flex: 0 0 auto; - width: 50%; - } - .col-sm-7 { - flex: 0 0 auto; - width: 58.33333333%; - } - .col-sm-8 { - flex: 0 0 auto; - width: 66.66666667%; - } - .col-sm-9 { - flex: 0 0 auto; - width: 75%; - } - .col-sm-10 { - flex: 0 0 auto; - width: 83.33333333%; - } - .col-sm-11 { - flex: 0 0 auto; - width: 91.66666667%; - } - .col-sm-12 { - flex: 0 0 auto; - width: 100%; - } - .offset-sm-0 { - margin-left: 0; - } - .offset-sm-1 { - margin-left: 8.33333333%; - } - .offset-sm-2 { - margin-left: 16.66666667%; - } - .offset-sm-3 { - margin-left: 25%; - } - .offset-sm-4 { - margin-left: 33.33333333%; - } - .offset-sm-5 { - margin-left: 41.66666667%; - } - .offset-sm-6 { - margin-left: 50%; - } - .offset-sm-7 { - margin-left: 58.33333333%; - } - .offset-sm-8 { - margin-left: 66.66666667%; - } - .offset-sm-9 { - margin-left: 75%; - } - .offset-sm-10 { - margin-left: 83.33333333%; - } - .offset-sm-11 { - margin-left: 91.66666667%; - } - .g-sm-0, - .gx-sm-0 { - --bs-gutter-x: 0; - } - .g-sm-0, - .gy-sm-0 { - --bs-gutter-y: 0; - } - .g-sm-1, - .gx-sm-1 { - --bs-gutter-x: 0.25rem; - } - .g-sm-1, - .gy-sm-1 { - --bs-gutter-y: 0.25rem; - } - .g-sm-2, - .gx-sm-2 { - --bs-gutter-x: 0.5rem; - } - .g-sm-2, - .gy-sm-2 { - --bs-gutter-y: 0.5rem; - } - .g-sm-3, - .gx-sm-3 { - --bs-gutter-x: 1rem; - } - .g-sm-3, - .gy-sm-3 { - --bs-gutter-y: 1rem; - } - .g-sm-4, - .gx-sm-4 { - --bs-gutter-x: 1.5rem; - } - .g-sm-4, - .gy-sm-4 { - --bs-gutter-y: 1.5rem; - } - .g-sm-5, - .gx-sm-5 { - --bs-gutter-x: 3rem; - } - .g-sm-5, - .gy-sm-5 { - --bs-gutter-y: 3rem; - } -} -@media (min-width: 768px) { - .col-md { - flex: 1 0 0%; - } - .row-cols-md-auto > * { - flex: 0 0 auto; - width: auto; - } - .row-cols-md-1 > * { - flex: 0 0 auto; - width: 100%; - } - .row-cols-md-2 > * { - flex: 0 0 auto; - width: 50%; - } - .row-cols-md-3 > * { - flex: 0 0 auto; - width: 33.3333333333%; - } - .row-cols-md-4 > * { - flex: 0 0 auto; - width: 25%; - } - .row-cols-md-5 > * { - flex: 0 0 auto; - width: 20%; - } - .row-cols-md-6 > * { - flex: 0 0 auto; - width: 16.6666666667%; - } - .col-md-auto { - flex: 0 0 auto; - width: auto; - } - .col-md-1 { - flex: 0 0 auto; - width: 8.33333333%; - } - .col-md-2 { - flex: 0 0 auto; - width: 16.66666667%; - } - .col-md-3 { - flex: 0 0 auto; - width: 25%; - } - .col-md-4 { - flex: 0 0 auto; - width: 33.33333333%; - } - .col-md-5 { - flex: 0 0 auto; - width: 41.66666667%; - } - .col-md-6 { - flex: 0 0 auto; - width: 50%; - } - .col-md-7 { - flex: 0 0 auto; - width: 58.33333333%; - } - .col-md-8 { - flex: 0 0 auto; - width: 66.66666667%; - } - .col-md-9 { - flex: 0 0 auto; - width: 75%; - } - .col-md-10 { - flex: 0 0 auto; - width: 83.33333333%; - } - .col-md-11 { - flex: 0 0 auto; - width: 91.66666667%; - } - .col-md-12 { - flex: 0 0 auto; - width: 100%; - } - .offset-md-0 { - margin-left: 0; - } - .offset-md-1 { - margin-left: 8.33333333%; - } - .offset-md-2 { - margin-left: 16.66666667%; - } - .offset-md-3 { - margin-left: 25%; - } - .offset-md-4 { - margin-left: 33.33333333%; - } - .offset-md-5 { - margin-left: 41.66666667%; - } - .offset-md-6 { - 
margin-left: 50%; - } - .offset-md-7 { - margin-left: 58.33333333%; - } - .offset-md-8 { - margin-left: 66.66666667%; - } - .offset-md-9 { - margin-left: 75%; - } - .offset-md-10 { - margin-left: 83.33333333%; - } - .offset-md-11 { - margin-left: 91.66666667%; - } - .g-md-0, - .gx-md-0 { - --bs-gutter-x: 0; - } - .g-md-0, - .gy-md-0 { - --bs-gutter-y: 0; - } - .g-md-1, - .gx-md-1 { - --bs-gutter-x: 0.25rem; - } - .g-md-1, - .gy-md-1 { - --bs-gutter-y: 0.25rem; - } - .g-md-2, - .gx-md-2 { - --bs-gutter-x: 0.5rem; - } - .g-md-2, - .gy-md-2 { - --bs-gutter-y: 0.5rem; - } - .g-md-3, - .gx-md-3 { - --bs-gutter-x: 1rem; - } - .g-md-3, - .gy-md-3 { - --bs-gutter-y: 1rem; - } - .g-md-4, - .gx-md-4 { - --bs-gutter-x: 1.5rem; - } - .g-md-4, - .gy-md-4 { - --bs-gutter-y: 1.5rem; - } - .g-md-5, - .gx-md-5 { - --bs-gutter-x: 3rem; - } - .g-md-5, - .gy-md-5 { - --bs-gutter-y: 3rem; - } -} -@media (min-width: 992px) { - .col-lg { - flex: 1 0 0%; - } - .row-cols-lg-auto > * { - flex: 0 0 auto; - width: auto; - } - .row-cols-lg-1 > * { - flex: 0 0 auto; - width: 100%; - } - .row-cols-lg-2 > * { - flex: 0 0 auto; - width: 50%; - } - .row-cols-lg-3 > * { - flex: 0 0 auto; - width: 33.3333333333%; - } - .row-cols-lg-4 > * { - flex: 0 0 auto; - width: 25%; - } - .row-cols-lg-5 > * { - flex: 0 0 auto; - width: 20%; - } - .row-cols-lg-6 > * { - flex: 0 0 auto; - width: 16.6666666667%; - } - .col-lg-auto { - flex: 0 0 auto; - width: auto; - } - .col-lg-1 { - flex: 0 0 auto; - width: 8.33333333%; - } - .col-lg-2 { - flex: 0 0 auto; - width: 16.66666667%; - } - .col-lg-3 { - flex: 0 0 auto; - width: 25%; - } - .col-lg-4 { - flex: 0 0 auto; - width: 33.33333333%; - } - .col-lg-5 { - flex: 0 0 auto; - width: 41.66666667%; - } - .col-lg-6 { - flex: 0 0 auto; - width: 50%; - } - .col-lg-7 { - flex: 0 0 auto; - width: 58.33333333%; - } - .col-lg-8 { - flex: 0 0 auto; - width: 66.66666667%; - } - .col-lg-9 { - flex: 0 0 auto; - width: 75%; - } - .col-lg-10 { - flex: 0 0 auto; - width: 83.33333333%; - } - .col-lg-11 { - flex: 0 0 auto; - width: 91.66666667%; - } - .col-lg-12 { - flex: 0 0 auto; - width: 100%; - } - .offset-lg-0 { - margin-left: 0; - } - .offset-lg-1 { - margin-left: 8.33333333%; - } - .offset-lg-2 { - margin-left: 16.66666667%; - } - .offset-lg-3 { - margin-left: 25%; - } - .offset-lg-4 { - margin-left: 33.33333333%; - } - .offset-lg-5 { - margin-left: 41.66666667%; - } - .offset-lg-6 { - margin-left: 50%; - } - .offset-lg-7 { - margin-left: 58.33333333%; - } - .offset-lg-8 { - margin-left: 66.66666667%; - } - .offset-lg-9 { - margin-left: 75%; - } - .offset-lg-10 { - margin-left: 83.33333333%; - } - .offset-lg-11 { - margin-left: 91.66666667%; - } - .g-lg-0, - .gx-lg-0 { - --bs-gutter-x: 0; - } - .g-lg-0, - .gy-lg-0 { - --bs-gutter-y: 0; - } - .g-lg-1, - .gx-lg-1 { - --bs-gutter-x: 0.25rem; - } - .g-lg-1, - .gy-lg-1 { - --bs-gutter-y: 0.25rem; - } - .g-lg-2, - .gx-lg-2 { - --bs-gutter-x: 0.5rem; - } - .g-lg-2, - .gy-lg-2 { - --bs-gutter-y: 0.5rem; - } - .g-lg-3, - .gx-lg-3 { - --bs-gutter-x: 1rem; - } - .g-lg-3, - .gy-lg-3 { - --bs-gutter-y: 1rem; - } - .g-lg-4, - .gx-lg-4 { - --bs-gutter-x: 1.5rem; - } - .g-lg-4, - .gy-lg-4 { - --bs-gutter-y: 1.5rem; - } - .g-lg-5, - .gx-lg-5 { - --bs-gutter-x: 3rem; - } - .g-lg-5, - .gy-lg-5 { - --bs-gutter-y: 3rem; - } -} -@media (min-width: 1200px) { - .col-xl { - flex: 1 0 0%; - } - .row-cols-xl-auto > * { - flex: 0 0 auto; - width: auto; - } - .row-cols-xl-1 > * { - flex: 0 0 auto; - width: 100%; - } - .row-cols-xl-2 > * { - flex: 0 0 auto; - width: 50%; - } - 
.row-cols-xl-3 > * { - flex: 0 0 auto; - width: 33.3333333333%; - } - .row-cols-xl-4 > * { - flex: 0 0 auto; - width: 25%; - } - .row-cols-xl-5 > * { - flex: 0 0 auto; - width: 20%; - } - .row-cols-xl-6 > * { - flex: 0 0 auto; - width: 16.6666666667%; - } - .col-xl-auto { - flex: 0 0 auto; - width: auto; - } - .col-xl-1 { - flex: 0 0 auto; - width: 8.33333333%; - } - .col-xl-2 { - flex: 0 0 auto; - width: 16.66666667%; - } - .col-xl-3 { - flex: 0 0 auto; - width: 25%; - } - .col-xl-4 { - flex: 0 0 auto; - width: 33.33333333%; - } - .col-xl-5 { - flex: 0 0 auto; - width: 41.66666667%; - } - .col-xl-6 { - flex: 0 0 auto; - width: 50%; - } - .col-xl-7 { - flex: 0 0 auto; - width: 58.33333333%; - } - .col-xl-8 { - flex: 0 0 auto; - width: 66.66666667%; - } - .col-xl-9 { - flex: 0 0 auto; - width: 75%; - } - .col-xl-10 { - flex: 0 0 auto; - width: 83.33333333%; - } - .col-xl-11 { - flex: 0 0 auto; - width: 91.66666667%; - } - .col-xl-12 { - flex: 0 0 auto; - width: 100%; - } - .offset-xl-0 { - margin-left: 0; - } - .offset-xl-1 { - margin-left: 8.33333333%; - } - .offset-xl-2 { - margin-left: 16.66666667%; - } - .offset-xl-3 { - margin-left: 25%; - } - .offset-xl-4 { - margin-left: 33.33333333%; - } - .offset-xl-5 { - margin-left: 41.66666667%; - } - .offset-xl-6 { - margin-left: 50%; - } - .offset-xl-7 { - margin-left: 58.33333333%; - } - .offset-xl-8 { - margin-left: 66.66666667%; - } - .offset-xl-9 { - margin-left: 75%; - } - .offset-xl-10 { - margin-left: 83.33333333%; - } - .offset-xl-11 { - margin-left: 91.66666667%; - } - .g-xl-0, - .gx-xl-0 { - --bs-gutter-x: 0; - } - .g-xl-0, - .gy-xl-0 { - --bs-gutter-y: 0; - } - .g-xl-1, - .gx-xl-1 { - --bs-gutter-x: 0.25rem; - } - .g-xl-1, - .gy-xl-1 { - --bs-gutter-y: 0.25rem; - } - .g-xl-2, - .gx-xl-2 { - --bs-gutter-x: 0.5rem; - } - .g-xl-2, - .gy-xl-2 { - --bs-gutter-y: 0.5rem; - } - .g-xl-3, - .gx-xl-3 { - --bs-gutter-x: 1rem; - } - .g-xl-3, - .gy-xl-3 { - --bs-gutter-y: 1rem; - } - .g-xl-4, - .gx-xl-4 { - --bs-gutter-x: 1.5rem; - } - .g-xl-4, - .gy-xl-4 { - --bs-gutter-y: 1.5rem; - } - .g-xl-5, - .gx-xl-5 { - --bs-gutter-x: 3rem; - } - .g-xl-5, - .gy-xl-5 { - --bs-gutter-y: 3rem; - } -} -@media (min-width: 1400px) { - .col-xxl { - flex: 1 0 0%; - } - .row-cols-xxl-auto > * { - flex: 0 0 auto; - width: auto; - } - .row-cols-xxl-1 > * { - flex: 0 0 auto; - width: 100%; - } - .row-cols-xxl-2 > * { - flex: 0 0 auto; - width: 50%; - } - .row-cols-xxl-3 > * { - flex: 0 0 auto; - width: 33.3333333333%; - } - .row-cols-xxl-4 > * { - flex: 0 0 auto; - width: 25%; - } - .row-cols-xxl-5 > * { - flex: 0 0 auto; - width: 20%; - } - .row-cols-xxl-6 > * { - flex: 0 0 auto; - width: 16.6666666667%; - } - .col-xxl-auto { - flex: 0 0 auto; - width: auto; - } - .col-xxl-1 { - flex: 0 0 auto; - width: 8.33333333%; - } - .col-xxl-2 { - flex: 0 0 auto; - width: 16.66666667%; - } - .col-xxl-3 { - flex: 0 0 auto; - width: 25%; - } - .col-xxl-4 { - flex: 0 0 auto; - width: 33.33333333%; - } - .col-xxl-5 { - flex: 0 0 auto; - width: 41.66666667%; - } - .col-xxl-6 { - flex: 0 0 auto; - width: 50%; - } - .col-xxl-7 { - flex: 0 0 auto; - width: 58.33333333%; - } - .col-xxl-8 { - flex: 0 0 auto; - width: 66.66666667%; - } - .col-xxl-9 { - flex: 0 0 auto; - width: 75%; - } - .col-xxl-10 { - flex: 0 0 auto; - width: 83.33333333%; - } - .col-xxl-11 { - flex: 0 0 auto; - width: 91.66666667%; - } - .col-xxl-12 { - flex: 0 0 auto; - width: 100%; - } - .offset-xxl-0 { - margin-left: 0; - } - .offset-xxl-1 { - margin-left: 8.33333333%; - } - .offset-xxl-2 { - margin-left: 
16.66666667%; - } - .offset-xxl-3 { - margin-left: 25%; - } - .offset-xxl-4 { - margin-left: 33.33333333%; - } - .offset-xxl-5 { - margin-left: 41.66666667%; - } - .offset-xxl-6 { - margin-left: 50%; - } - .offset-xxl-7 { - margin-left: 58.33333333%; - } - .offset-xxl-8 { - margin-left: 66.66666667%; - } - .offset-xxl-9 { - margin-left: 75%; - } - .offset-xxl-10 { - margin-left: 83.33333333%; - } - .offset-xxl-11 { - margin-left: 91.66666667%; - } - .g-xxl-0, - .gx-xxl-0 { - --bs-gutter-x: 0; - } - .g-xxl-0, - .gy-xxl-0 { - --bs-gutter-y: 0; - } - .g-xxl-1, - .gx-xxl-1 { - --bs-gutter-x: 0.25rem; - } - .g-xxl-1, - .gy-xxl-1 { - --bs-gutter-y: 0.25rem; - } - .g-xxl-2, - .gx-xxl-2 { - --bs-gutter-x: 0.5rem; - } - .g-xxl-2, - .gy-xxl-2 { - --bs-gutter-y: 0.5rem; - } - .g-xxl-3, - .gx-xxl-3 { - --bs-gutter-x: 1rem; - } - .g-xxl-3, - .gy-xxl-3 { - --bs-gutter-y: 1rem; - } - .g-xxl-4, - .gx-xxl-4 { - --bs-gutter-x: 1.5rem; - } - .g-xxl-4, - .gy-xxl-4 { - --bs-gutter-y: 1.5rem; - } - .g-xxl-5, - .gx-xxl-5 { - --bs-gutter-x: 3rem; - } - .g-xxl-5, - .gy-xxl-5 { - --bs-gutter-y: 3rem; - } -} -.table { - --bs-table-color: var(--bs-body-color); - --bs-table-bg: transparent; - --bs-table-border-color: var(--bs-border-color); - --bs-table-accent-bg: transparent; - --bs-table-striped-color: var(--bs-body-color); - --bs-table-striped-bg: rgba(0, 0, 0, 0.05); - --bs-table-active-color: var(--bs-body-color); - --bs-table-active-bg: rgba(0, 0, 0, 0.1); - --bs-table-hover-color: var(--bs-body-color); - --bs-table-hover-bg: rgba(0, 0, 0, 0.075); - width: 100%; - margin-bottom: 1rem; - color: var(--bs-table-color); - vertical-align: top; - border-color: var(--bs-table-border-color); -} -.table > :not(caption) > * > * { - padding: 0.5rem 0.5rem; - background-color: var(--bs-table-bg); - border-bottom-width: 0.125rem; - box-shadow: inset 0 0 0 9999px var(--bs-table-accent-bg); -} -.table > tbody { - vertical-align: inherit; -} -.table > thead { - vertical-align: bottom; -} - -.table-group-divider { - border-top: 0.25rem solid currentcolor; -} - -.caption-top { - caption-side: top; -} - -.table-sm > :not(caption) > * > * { - padding: 0.25rem 0.25rem; -} - -.table-bordered > :not(caption) > * { - border-width: 0.125rem 0; -} -.table-bordered > :not(caption) > * > * { - border-width: 0 0.125rem; -} - -.table-borderless > :not(caption) > * > * { - border-bottom-width: 0; -} -.table-borderless > :not(:first-child) { - border-top-width: 0; -} - -.table-striped > tbody > tr:nth-of-type(odd) > * { - --bs-table-accent-bg: var(--bs-table-striped-bg); - color: var(--bs-table-striped-color); -} - -.table-striped-columns > :not(caption) > tr > :nth-child(even) { - --bs-table-accent-bg: var(--bs-table-striped-bg); - color: var(--bs-table-striped-color); -} - -.table-active { - --bs-table-accent-bg: var(--bs-table-active-bg); - color: var(--bs-table-active-color); -} - -.table-hover > tbody > tr:hover > * { - --bs-table-accent-bg: var(--bs-table-hover-bg); - color: var(--bs-table-hover-color); -} - -.table-primary { - --bs-table-color: #000; - --bs-table-bg: #d1f2eb; - --bs-table-border-color: #bcdad4; - --bs-table-striped-bg: #c7e6df; - --bs-table-striped-color: #000; - --bs-table-active-bg: #bcdad4; - --bs-table-active-color: #000; - --bs-table-hover-bg: #c1e0d9; - --bs-table-hover-color: #000; - color: var(--bs-table-color); - border-color: var(--bs-table-border-color); -} - -.table-secondary { - --bs-table-color: #000; - --bs-table-bg: #d5d8dc; - --bs-table-border-color: #c0c2c6; - --bs-table-striped-bg: #cacdd1; - 
--bs-table-striped-color: #000; - --bs-table-active-bg: #c0c2c6; - --bs-table-active-color: #000; - --bs-table-hover-bg: #c5c8cc; - --bs-table-hover-color: #000; - color: var(--bs-table-color); - border-color: var(--bs-table-border-color); -} - -.table-success { - --bs-table-color: #000; - --bs-table-bg: #d1e7dd; - --bs-table-border-color: #bcd0c7; - --bs-table-striped-bg: #c7dbd2; - --bs-table-striped-color: #000; - --bs-table-active-bg: #bcd0c7; - --bs-table-active-color: #000; - --bs-table-hover-bg: #c1d6cc; - --bs-table-hover-color: #000; - color: var(--bs-table-color); - border-color: var(--bs-table-border-color); -} - -.table-info { - --bs-table-color: #000; - --bs-table-bg: #cff4fc; - --bs-table-border-color: #badce3; - --bs-table-striped-bg: #c5e8ef; - --bs-table-striped-color: #000; - --bs-table-active-bg: #badce3; - --bs-table-active-color: #000; - --bs-table-hover-bg: #bfe2e9; - --bs-table-hover-color: #000; - color: var(--bs-table-color); - border-color: var(--bs-table-border-color); -} - -.table-warning { - --bs-table-color: #000; - --bs-table-bg: #fff3cd; - --bs-table-border-color: #e6dbb9; - --bs-table-striped-bg: #f2e7c3; - --bs-table-striped-color: #000; - --bs-table-active-bg: #e6dbb9; - --bs-table-active-color: #000; - --bs-table-hover-bg: #ece1be; - --bs-table-hover-color: #000; - color: var(--bs-table-color); - border-color: var(--bs-table-border-color); -} - -.table-danger { - --bs-table-color: #000; - --bs-table-bg: #f8d7da; - --bs-table-border-color: #dfc2c4; - --bs-table-striped-bg: #eccccf; - --bs-table-striped-color: #000; - --bs-table-active-bg: #dfc2c4; - --bs-table-active-color: #000; - --bs-table-hover-bg: #e5c7ca; - --bs-table-hover-color: #000; - color: var(--bs-table-color); - border-color: var(--bs-table-border-color); -} - -.table-light { - --bs-table-color: #000; - --bs-table-bg: #f8f9fa; - --bs-table-border-color: #dfe0e1; - --bs-table-striped-bg: #ecedee; - --bs-table-striped-color: #000; - --bs-table-active-bg: #dfe0e1; - --bs-table-active-color: #000; - --bs-table-hover-bg: #e5e6e7; - --bs-table-hover-color: #000; - color: var(--bs-table-color); - border-color: var(--bs-table-border-color); -} - -.table-dark { - --bs-table-color: #fff; - --bs-table-bg: #212529; - --bs-table-border-color: #373b3e; - --bs-table-striped-bg: #2c3034; - --bs-table-striped-color: #fff; - --bs-table-active-bg: #373b3e; - --bs-table-active-color: #fff; - --bs-table-hover-bg: #323539; - --bs-table-hover-color: #fff; - color: var(--bs-table-color); - border-color: var(--bs-table-border-color); -} - -.table-responsive { - overflow-x: auto; - -webkit-overflow-scrolling: touch; -} - -@media (max-width: 575.98px) { - .table-responsive-sm { - overflow-x: auto; - -webkit-overflow-scrolling: touch; - } -} -@media (max-width: 767.98px) { - .table-responsive-md { - overflow-x: auto; - -webkit-overflow-scrolling: touch; - } -} -@media (max-width: 991.98px) { - .table-responsive-lg { - overflow-x: auto; - -webkit-overflow-scrolling: touch; - } -} -@media (max-width: 1199.98px) { - .table-responsive-xl { - overflow-x: auto; - -webkit-overflow-scrolling: touch; - } -} -@media (max-width: 1399.98px) { - .table-responsive-xxl { - overflow-x: auto; - -webkit-overflow-scrolling: touch; - } -} -.form-label { - margin-bottom: 0.5rem; -} - -.col-form-label { - padding-top: 0.5rem; - padding-bottom: 0.5rem; - margin-bottom: 0; - font-size: inherit; - line-height: 1.5; -} - -.col-form-label-lg { - padding-top: 0.625rem; - padding-bottom: 0.625rem; - font-size: 1.25rem; -} - -.col-form-label-sm { - 
padding-top: 0.375rem; - padding-bottom: 0.375rem; - font-size: 0.875rem; -} - -.form-text { - margin-top: 0.25rem; - font-size: 0.875em; - color: #6c757d; -} - -.form-control { - display: block; - width: 100%; - padding: 0.375rem 0.75rem; - font-size: 1rem; - font-weight: 400; - line-height: 1.5; - color: #212529; - background-color: #fff; - background-clip: padding-box; - border: 0.125rem solid #ced4da; - -webkit-appearance: none; - -moz-appearance: none; - appearance: none; - border-radius: 0.5rem; - transition: border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out; -} -@media (prefers-reduced-motion: reduce) { - .form-control { - transition: none; - } -} -.form-control[type=file] { - overflow: hidden; -} -.form-control[type=file]:not(:disabled):not([readonly]) { - cursor: pointer; -} -.form-control:focus { - color: #212529; - background-color: #fff; - border-color: #8ddece; - outline: 0; - box-shadow: 0 0 0 0.25rem rgba(26, 188, 156, 0.25); -} -.form-control::-webkit-date-and-time-value { - height: 1.5em; -} -.form-control::-moz-placeholder { - color: #6c757d; - opacity: 1; -} -.form-control::placeholder { - color: #6c757d; - opacity: 1; -} -.form-control:disabled { - background-color: #e9ecef; - opacity: 1; -} -.form-control::file-selector-button { - padding: 0.375rem 0.75rem; - margin: -0.375rem -0.75rem; - -webkit-margin-end: 0.75rem; - margin-inline-end: 0.75rem; - color: #212529; - background-color: #e9ecef; - pointer-events: none; - border-color: inherit; - border-style: solid; - border-width: 0; - border-inline-end-width: 0.125rem; - border-radius: 0; - transition: color 0.15s ease-in-out, background-color 0.15s ease-in-out, border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out; -} -@media (prefers-reduced-motion: reduce) { - .form-control::file-selector-button { - transition: none; - } -} -.form-control:hover:not(:disabled):not([readonly])::file-selector-button { - background-color: #dde0e3; -} - -.form-control-plaintext { - display: block; - width: 100%; - padding: 0.375rem 0; - margin-bottom: 0; - line-height: 1.5; - color: #212529; - background-color: transparent; - border: solid transparent; - border-width: 0.125rem 0; -} -.form-control-plaintext:focus { - outline: 0; -} -.form-control-plaintext.form-control-sm, .form-control-plaintext.form-control-lg { - padding-right: 0; - padding-left: 0; -} - -.form-control-sm { - min-height: calc(1.5em + 0.75rem); - padding: 0.25rem 0.5rem; - font-size: 0.875rem; - border-radius: 0.25rem; -} -.form-control-sm::file-selector-button { - padding: 0.25rem 0.5rem; - margin: -0.25rem -0.5rem; - -webkit-margin-end: 0.5rem; - margin-inline-end: 0.5rem; -} - -.form-control-lg { - min-height: calc(1.5em + 1.25rem); - padding: 0.5rem 1rem; - font-size: 1.25rem; - border-radius: 0.75rem; -} -.form-control-lg::file-selector-button { - padding: 0.5rem 1rem; - margin: -0.5rem -1rem; - -webkit-margin-end: 1rem; - margin-inline-end: 1rem; -} - -textarea.form-control { - min-height: calc(1.5em + 1rem); -} -textarea.form-control-sm { - min-height: calc(1.5em + 0.75rem); -} -textarea.form-control-lg { - min-height: calc(1.5em + 1.25rem); -} - -.form-control-color { - width: 3rem; - height: calc(1.5em + 1rem); - padding: 0.375rem; -} -.form-control-color:not(:disabled):not([readonly]) { - cursor: pointer; -} -.form-control-color::-moz-color-swatch { - border: 0 !important; - border-radius: 0.5rem; -} -.form-control-color::-webkit-color-swatch { - border-radius: 0.5rem; -} -.form-control-color.form-control-sm { - height: calc(1.5em + 0.75rem); 
-} -.form-control-color.form-control-lg { - height: calc(1.5em + 1.25rem); -} - -.form-select { - display: block; - width: 100%; - padding: 0.375rem 2.25rem 0.375rem 0.75rem; - -moz-padding-start: calc(0.75rem - 3px); - font-size: 1rem; - font-weight: 400; - line-height: 1.5; - color: #212529; - background-color: #fff; - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23343a40' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='m2 5 6 6 6-6'/%3e%3c/svg%3e"); - background-repeat: no-repeat; - background-position: right 0.75rem center; - background-size: 16px 12px; - border: 0.125rem solid #ced4da; - border-radius: 0.5rem; - transition: border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out; - -webkit-appearance: none; - -moz-appearance: none; - appearance: none; -} -@media (prefers-reduced-motion: reduce) { - .form-select { - transition: none; - } -} -.form-select:focus { - border-color: #8ddece; - outline: 0; - box-shadow: 0 0 0 0.25rem rgba(26, 188, 156, 0.25); -} -.form-select[multiple], .form-select[size]:not([size="1"]) { - padding-right: 0.75rem; - background-image: none; -} -.form-select:disabled { - background-color: #e9ecef; -} -.form-select:-moz-focusring { - color: transparent; - text-shadow: 0 0 0 #212529; -} - -.form-select-sm { - padding-top: 0.25rem; - padding-bottom: 0.25rem; - padding-left: 0.5rem; - font-size: 0.875rem; - border-radius: 0.25rem; -} - -.form-select-lg { - padding-top: 0.5rem; - padding-bottom: 0.5rem; - padding-left: 1rem; - font-size: 1.25rem; - border-radius: 0.75rem; -} - -.form-check { - display: block; - min-height: 1.5rem; - padding-left: 1.5em; - margin-bottom: 0.125rem; -} -.form-check .form-check-input { - float: left; - margin-left: -1.5em; -} - -.form-check-reverse { - padding-right: 1.5em; - padding-left: 0; - text-align: right; -} -.form-check-reverse .form-check-input { - float: right; - margin-right: -1.5em; - margin-left: 0; -} - -.form-check-input { - width: 1em; - height: 1em; - margin-top: 0.25em; - vertical-align: top; - background-color: #fff; - background-repeat: no-repeat; - background-position: center; - background-size: contain; - border: 1px solid rgba(0, 0, 0, 0.25); - -webkit-appearance: none; - -moz-appearance: none; - appearance: none; - -webkit-print-color-adjust: exact; - print-color-adjust: exact; -} -.form-check-input[type=checkbox] { - border-radius: 0.25em; -} -.form-check-input[type=radio] { - border-radius: 50%; -} -.form-check-input:active { - filter: brightness(90%); -} -.form-check-input:focus { - border-color: #8ddece; - outline: 0; - box-shadow: 0 0 0 0.25rem rgba(26, 188, 156, 0.25); -} -.form-check-input:checked { - background-color: #1abc9c; - border-color: #1abc9c; -} -.form-check-input:checked[type=checkbox] { - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3e%3cpath fill='none' stroke='%23fff' stroke-linecap='round' stroke-linejoin='round' stroke-width='3' d='m6 10 3 3 6-6'/%3e%3c/svg%3e"); -} -.form-check-input:checked[type=radio] { - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='2' fill='%23fff'/%3e%3c/svg%3e"); -} -.form-check-input[type=checkbox]:indeterminate { - background-color: #1abc9c; - border-color: #1abc9c; - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3e%3cpath fill='none' stroke='%23fff' 
stroke-linecap='round' stroke-linejoin='round' stroke-width='3' d='M6 10h8'/%3e%3c/svg%3e"); -} -.form-check-input:disabled { - pointer-events: none; - filter: none; - opacity: 0.5; -} -.form-check-input[disabled] ~ .form-check-label, .form-check-input:disabled ~ .form-check-label { - cursor: default; - opacity: 0.5; -} - -.form-switch { - padding-left: 2.5em; -} -.form-switch .form-check-input { - width: 2em; - margin-left: -2.5em; - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='rgba%280, 0, 0, 0.25%29'/%3e%3c/svg%3e"); - background-position: left center; - border-radius: 2em; - transition: background-position 0.15s ease-in-out; -} -@media (prefers-reduced-motion: reduce) { - .form-switch .form-check-input { - transition: none; - } -} -.form-switch .form-check-input:focus { - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%238ddece'/%3e%3c/svg%3e"); -} -.form-switch .form-check-input:checked { - background-position: right center; - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%23fff'/%3e%3c/svg%3e"); -} -.form-switch.form-check-reverse { - padding-right: 2.5em; - padding-left: 0; -} -.form-switch.form-check-reverse .form-check-input { - margin-right: -2.5em; - margin-left: 0; -} - -.form-check-inline { - display: inline-block; - margin-right: 1rem; -} - -.btn-check { - position: absolute; - clip: rect(0, 0, 0, 0); - pointer-events: none; -} -.btn-check[disabled] + .btn, .btn-check:disabled + .btn { - pointer-events: none; - filter: none; - opacity: 0.65; -} - -.form-range { - width: 100%; - height: 1.5rem; - padding: 0; - background-color: transparent; - -webkit-appearance: none; - -moz-appearance: none; - appearance: none; -} -.form-range:focus { - outline: 0; -} -.form-range:focus::-webkit-slider-thumb { - box-shadow: 0 0 0 1px #fff, 0 0 0 0.25rem rgba(26, 188, 156, 0.25); -} -.form-range:focus::-moz-range-thumb { - box-shadow: 0 0 0 1px #fff, 0 0 0 0.25rem rgba(26, 188, 156, 0.25); -} -.form-range::-moz-focus-outer { - border: 0; -} -.form-range::-webkit-slider-thumb { - width: 1rem; - height: 1rem; - margin-top: -0.25rem; - background-color: #1abc9c; - border: 0; - border-radius: 1rem; - -webkit-transition: background-color 0.15s ease-in-out, border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out; - transition: background-color 0.15s ease-in-out, border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out; - -webkit-appearance: none; - appearance: none; -} -@media (prefers-reduced-motion: reduce) { - .form-range::-webkit-slider-thumb { - -webkit-transition: none; - transition: none; - } -} -.form-range::-webkit-slider-thumb:active { - background-color: #baebe1; -} -.form-range::-webkit-slider-runnable-track { - width: 100%; - height: 0.5rem; - color: transparent; - cursor: pointer; - background-color: #dee2e6; - border-color: transparent; - border-radius: 1rem; -} -.form-range::-moz-range-thumb { - width: 1rem; - height: 1rem; - background-color: #1abc9c; - border: 0; - border-radius: 1rem; - -moz-transition: background-color 0.15s ease-in-out, border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out; - transition: background-color 0.15s ease-in-out, border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out; - -moz-appearance: none; - appearance: none; -} -@media (prefers-reduced-motion: reduce) { - .form-range::-moz-range-thumb 
{ - -moz-transition: none; - transition: none; - } -} -.form-range::-moz-range-thumb:active { - background-color: #baebe1; -} -.form-range::-moz-range-track { - width: 100%; - height: 0.5rem; - color: transparent; - cursor: pointer; - background-color: #dee2e6; - border-color: transparent; - border-radius: 1rem; -} -.form-range:disabled { - pointer-events: none; -} -.form-range:disabled::-webkit-slider-thumb { - background-color: #adb5bd; -} -.form-range:disabled::-moz-range-thumb { - background-color: #adb5bd; -} - -.form-floating { - position: relative; -} -.form-floating > .form-control, -.form-floating > .form-control-plaintext, -.form-floating > .form-select { - height: 5.5rem; - line-height: 1.25; -} -.form-floating > label { - position: absolute; - top: 0; - left: 0; - width: 100%; - height: 100%; - padding: 1.5rem 0; - overflow: hidden; - text-align: start; - text-overflow: ellipsis; - white-space: nowrap; - pointer-events: none; - border: 0.125rem solid transparent; - transform-origin: 0 0; - transition: opacity 0.1s ease-in-out, transform 0.1s ease-in-out; -} -@media (prefers-reduced-motion: reduce) { - .form-floating > label { - transition: none; - } -} -.form-floating > .form-control, -.form-floating > .form-control-plaintext { - padding: 1.5rem 0; -} -.form-floating > .form-control::-moz-placeholder, .form-floating > .form-control-plaintext::-moz-placeholder { - color: transparent; -} -.form-floating > .form-control::placeholder, -.form-floating > .form-control-plaintext::placeholder { - color: transparent; -} -.form-floating > .form-control:not(:-moz-placeholder-shown), .form-floating > .form-control-plaintext:not(:-moz-placeholder-shown) { - padding-top: 2.5rem; - padding-bottom: 1.5rem; -} -.form-floating > .form-control:focus, .form-floating > .form-control:not(:placeholder-shown), -.form-floating > .form-control-plaintext:focus, -.form-floating > .form-control-plaintext:not(:placeholder-shown) { - padding-top: 2.5rem; - padding-bottom: 1.5rem; -} -.form-floating > .form-control:-webkit-autofill, -.form-floating > .form-control-plaintext:-webkit-autofill { - padding-top: 2.5rem; - padding-bottom: 1.5rem; -} -.form-floating > .form-select { - padding-top: 2.5rem; - padding-bottom: 1.5rem; -} -.form-floating > .form-control:not(:-moz-placeholder-shown) ~ label { - opacity: 0.65; - transform: scale(0.65) translateY(-0.5rem) translateX(0rem); -} -.form-floating > .form-control:focus ~ label, -.form-floating > .form-control:not(:placeholder-shown) ~ label, -.form-floating > .form-control-plaintext ~ label, -.form-floating > .form-select ~ label { - opacity: 0.65; - transform: scale(0.65) translateY(-0.5rem) translateX(0rem); -} -.form-floating > .form-control:-webkit-autofill ~ label { - opacity: 0.65; - transform: scale(0.65) translateY(-0.5rem) translateX(0rem); -} -.form-floating > .form-control-plaintext ~ label { - border-width: 0.125rem 0; -} - -.input-group { - position: relative; - display: flex; - flex-wrap: wrap; - align-items: stretch; - width: 100%; -} -.input-group > .form-control, -.input-group > .form-select, -.input-group > .form-floating { - position: relative; - flex: 1 1 auto; - width: 1%; - min-width: 0; -} -.input-group > .form-control:focus, -.input-group > .form-select:focus, -.input-group > .form-floating:focus-within { - z-index: 5; -} -.input-group .btn { - position: relative; - z-index: 2; -} -.input-group .btn:focus { - z-index: 5; -} - -.input-group-text { - display: flex; - align-items: center; - padding: 0.375rem 0.75rem; - font-size: 1rem; - 
font-weight: 400; - line-height: 1.5; - color: #212529; - text-align: center; - white-space: nowrap; - background-color: #e9ecef; - border: 0.125rem solid #ced4da; - border-radius: 0.5rem; -} - -.input-group-lg > .form-control, -.input-group-lg > .form-select, -.input-group-lg > .input-group-text, -.input-group-lg > .btn { - padding: 0.5rem 1rem; - font-size: 1.25rem; - border-radius: 0.75rem; -} - -.input-group-sm > .form-control, -.input-group-sm > .form-select, -.input-group-sm > .input-group-text, -.input-group-sm > .btn { - padding: 0.25rem 0.5rem; - font-size: 0.875rem; - border-radius: 0.25rem; -} - -.input-group-lg > .form-select, -.input-group-sm > .form-select { - padding-right: 3rem; -} - -.input-group:not(.has-validation) > :not(:last-child):not(.dropdown-toggle):not(.dropdown-menu):not(.form-floating), -.input-group:not(.has-validation) > .dropdown-toggle:nth-last-child(n+3), -.input-group:not(.has-validation) > .form-floating:not(:last-child) > .form-control, -.input-group:not(.has-validation) > .form-floating:not(:last-child) > .form-select { - border-top-right-radius: 0; - border-bottom-right-radius: 0; -} -.input-group.has-validation > :nth-last-child(n+3):not(.dropdown-toggle):not(.dropdown-menu):not(.form-floating), -.input-group.has-validation > .dropdown-toggle:nth-last-child(n+4), -.input-group.has-validation > .form-floating:nth-last-child(n+3) > .form-control, -.input-group.has-validation > .form-floating:nth-last-child(n+3) > .form-select { - border-top-right-radius: 0; - border-bottom-right-radius: 0; -} -.input-group > :not(:first-child):not(.dropdown-menu):not(.valid-tooltip):not(.valid-feedback):not(.invalid-tooltip):not(.invalid-feedback) { - margin-left: -0.125rem; - border-top-left-radius: 0; - border-bottom-left-radius: 0; -} -.input-group > .form-floating:not(:first-child) > .form-control, -.input-group > .form-floating:not(:first-child) > .form-select { - border-top-left-radius: 0; - border-bottom-left-radius: 0; -} - -.valid-feedback { - display: none; - width: 100%; - margin-top: 0.25rem; - font-size: 0.875em; - color: #198754; -} - -.valid-tooltip { - position: absolute; - top: 100%; - z-index: 5; - display: none; - max-width: 100%; - padding: 0.25rem 0.5rem; - margin-top: 0.1rem; - font-size: 0.875rem; - color: #fff; - background-color: rgba(25, 135, 84, 0.9); - border-radius: 0.5rem; -} - -.was-validated :valid ~ .valid-feedback, -.was-validated :valid ~ .valid-tooltip, -.is-valid ~ .valid-feedback, -.is-valid ~ .valid-tooltip { - display: block; -} - -.was-validated .form-control:valid, .form-control.is-valid { - border-color: #198754; - padding-right: calc(1.5em + 0.75rem); - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%23198754' d='M2.3 6.73.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e"); - background-repeat: no-repeat; - background-position: right calc(0.375em + 0.1875rem) center; - background-size: calc(0.75em + 0.375rem) calc(0.75em + 0.375rem); -} -.was-validated .form-control:valid:focus, .form-control.is-valid:focus { - border-color: #198754; - box-shadow: 0 0 0 0.25rem rgba(25, 135, 84, 0.25); -} - -.was-validated textarea.form-control:valid, textarea.form-control.is-valid { - padding-right: calc(1.5em + 0.75rem); - background-position: top calc(0.375em + 0.1875rem) right calc(0.375em + 0.1875rem); -} - -.was-validated .form-select:valid, .form-select.is-valid { - border-color: #198754; -} -.was-validated 
.form-select:valid:not([multiple]):not([size]), .was-validated .form-select:valid:not([multiple])[size="1"], .form-select.is-valid:not([multiple]):not([size]), .form-select.is-valid:not([multiple])[size="1"] { - padding-right: 4.125rem; - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23343a40' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='m2 5 6 6 6-6'/%3e%3c/svg%3e"), url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%23198754' d='M2.3 6.73.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e"); - background-position: right 0.75rem center, center right 2.25rem; - background-size: 16px 12px, calc(0.75em + 0.375rem) calc(0.75em + 0.375rem); -} -.was-validated .form-select:valid:focus, .form-select.is-valid:focus { - border-color: #198754; - box-shadow: 0 0 0 0.25rem rgba(25, 135, 84, 0.25); -} - -.was-validated .form-control-color:valid, .form-control-color.is-valid { - width: calc(3rem + calc(1.5em + 0.75rem)); -} - -.was-validated .form-check-input:valid, .form-check-input.is-valid { - border-color: #198754; -} -.was-validated .form-check-input:valid:checked, .form-check-input.is-valid:checked { - background-color: #198754; -} -.was-validated .form-check-input:valid:focus, .form-check-input.is-valid:focus { - box-shadow: 0 0 0 0.25rem rgba(25, 135, 84, 0.25); -} -.was-validated .form-check-input:valid ~ .form-check-label, .form-check-input.is-valid ~ .form-check-label { - color: #198754; -} - -.form-check-inline .form-check-input ~ .valid-feedback { - margin-left: 0.5em; -} - -.was-validated .input-group > .form-control:not(:focus):valid, .input-group > .form-control:not(:focus).is-valid, -.was-validated .input-group > .form-select:not(:focus):valid, -.input-group > .form-select:not(:focus).is-valid, -.was-validated .input-group > .form-floating:not(:focus-within):valid, -.input-group > .form-floating:not(:focus-within).is-valid { - z-index: 3; -} - -.invalid-feedback { - display: none; - width: 100%; - margin-top: 0.25rem; - font-size: 0.875em; - color: #dc3545; -} - -.invalid-tooltip { - position: absolute; - top: 100%; - z-index: 5; - display: none; - max-width: 100%; - padding: 0.25rem 0.5rem; - margin-top: 0.1rem; - font-size: 0.875rem; - color: #fff; - background-color: rgba(220, 53, 69, 0.9); - border-radius: 0.5rem; -} - -.was-validated :invalid ~ .invalid-feedback, -.was-validated :invalid ~ .invalid-tooltip, -.is-invalid ~ .invalid-feedback, -.is-invalid ~ .invalid-tooltip { - display: block; -} - -.was-validated .form-control:invalid, .form-control.is-invalid { - border-color: #dc3545; - padding-right: calc(1.5em + 0.75rem); - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 12 12' width='12' height='12' fill='none' stroke='%23dc3545'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' stroke='none'/%3e%3c/svg%3e"); - background-repeat: no-repeat; - background-position: right calc(0.375em + 0.1875rem) center; - background-size: calc(0.75em + 0.375rem) calc(0.75em + 0.375rem); -} -.was-validated .form-control:invalid:focus, .form-control.is-invalid:focus { - border-color: #dc3545; - box-shadow: 0 0 0 0.25rem rgba(220, 53, 69, 0.25); -} - -.was-validated textarea.form-control:invalid, textarea.form-control.is-invalid { - 
padding-right: calc(1.5em + 0.75rem); - background-position: top calc(0.375em + 0.1875rem) right calc(0.375em + 0.1875rem); -} - -.was-validated .form-select:invalid, .form-select.is-invalid { - border-color: #dc3545; -} -.was-validated .form-select:invalid:not([multiple]):not([size]), .was-validated .form-select:invalid:not([multiple])[size="1"], .form-select.is-invalid:not([multiple]):not([size]), .form-select.is-invalid:not([multiple])[size="1"] { - padding-right: 4.125rem; - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23343a40' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='m2 5 6 6 6-6'/%3e%3c/svg%3e"), url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 12 12' width='12' height='12' fill='none' stroke='%23dc3545'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' stroke='none'/%3e%3c/svg%3e"); - background-position: right 0.75rem center, center right 2.25rem; - background-size: 16px 12px, calc(0.75em + 0.375rem) calc(0.75em + 0.375rem); -} -.was-validated .form-select:invalid:focus, .form-select.is-invalid:focus { - border-color: #dc3545; - box-shadow: 0 0 0 0.25rem rgba(220, 53, 69, 0.25); -} - -.was-validated .form-control-color:invalid, .form-control-color.is-invalid { - width: calc(3rem + calc(1.5em + 0.75rem)); -} - -.was-validated .form-check-input:invalid, .form-check-input.is-invalid { - border-color: #dc3545; -} -.was-validated .form-check-input:invalid:checked, .form-check-input.is-invalid:checked { - background-color: #dc3545; -} -.was-validated .form-check-input:invalid:focus, .form-check-input.is-invalid:focus { - box-shadow: 0 0 0 0.25rem rgba(220, 53, 69, 0.25); -} -.was-validated .form-check-input:invalid ~ .form-check-label, .form-check-input.is-invalid ~ .form-check-label { - color: #dc3545; -} - -.form-check-inline .form-check-input ~ .invalid-feedback { - margin-left: 0.5em; -} - -.was-validated .input-group > .form-control:not(:focus):invalid, .input-group > .form-control:not(:focus).is-invalid, -.was-validated .input-group > .form-select:not(:focus):invalid, -.input-group > .form-select:not(:focus).is-invalid, -.was-validated .input-group > .form-floating:not(:focus-within):invalid, -.input-group > .form-floating:not(:focus-within).is-invalid { - z-index: 4; -} - -.btn { - --bs-btn-padding-x: 0.75rem; - --bs-btn-padding-y: 0.375rem; - --bs-btn-font-family: ; - --bs-btn-font-size: 1rem; - --bs-btn-font-weight: 400; - --bs-btn-line-height: 1.5; - --bs-btn-color: #212529; - --bs-btn-bg: transparent; - --bs-btn-border-width: 0.125rem; - --bs-btn-border-color: transparent; - --bs-btn-border-radius: 0.5rem; - --bs-btn-hover-border-color: transparent; - --bs-btn-box-shadow: inset 0 1px 0 rgba(255, 255, 255, 0.15), 0 1px 1px rgba(0, 0, 0, 0.075); - --bs-btn-disabled-opacity: 0.65; - --bs-btn-focus-box-shadow: 0 0 0 0.25rem rgba(var(--bs-btn-focus-shadow-rgb), .5); - display: inline-block; - padding: var(--bs-btn-padding-y) var(--bs-btn-padding-x); - font-family: var(--bs-btn-font-family); - font-size: var(--bs-btn-font-size); - font-weight: var(--bs-btn-font-weight); - line-height: var(--bs-btn-line-height); - color: var(--bs-btn-color); - text-align: center; - text-decoration: none; - vertical-align: middle; - cursor: pointer; - -webkit-user-select: none; - -moz-user-select: none; - user-select: none; - border: var(--bs-btn-border-width) 
solid var(--bs-btn-border-color); - border-radius: var(--bs-btn-border-radius); - background-color: var(--bs-btn-bg); - transition: color 0.15s ease-in-out, background-color 0.15s ease-in-out, border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out; -} -@media (prefers-reduced-motion: reduce) { - .btn { - transition: none; - } -} -.btn:hover { - color: var(--bs-btn-hover-color); - background-color: var(--bs-btn-hover-bg); - border-color: var(--bs-btn-hover-border-color); -} -.btn-check + .btn:hover { - color: var(--bs-btn-color); - background-color: var(--bs-btn-bg); - border-color: var(--bs-btn-border-color); -} -.btn:focus-visible { - color: var(--bs-btn-hover-color); - background-color: var(--bs-btn-hover-bg); - border-color: var(--bs-btn-hover-border-color); - outline: 0; - box-shadow: var(--bs-btn-focus-box-shadow); -} -.btn-check:focus-visible + .btn { - border-color: var(--bs-btn-hover-border-color); - outline: 0; - box-shadow: var(--bs-btn-focus-box-shadow); -} -.btn-check:checked + .btn, :not(.btn-check) + .btn:active, .btn:first-child:active, .btn.active, .btn.show { - color: var(--bs-btn-active-color); - background-color: var(--bs-btn-active-bg); - border-color: var(--bs-btn-active-border-color); -} -.btn-check:checked + .btn:focus-visible, :not(.btn-check) + .btn:active:focus-visible, .btn:first-child:active:focus-visible, .btn.active:focus-visible, .btn.show:focus-visible { - box-shadow: var(--bs-btn-focus-box-shadow); -} -.btn:disabled, .btn.disabled, fieldset:disabled .btn { - color: var(--bs-btn-disabled-color); - pointer-events: none; - background-color: var(--bs-btn-disabled-bg); - border-color: var(--bs-btn-disabled-border-color); - opacity: var(--bs-btn-disabled-opacity); -} - -.btn-primary { - --bs-btn-color: #fff; - --bs-btn-bg: #1abc9c; - --bs-btn-border-color: #1abc9c; - --bs-btn-hover-color: #fff; - --bs-btn-hover-bg: #16a085; - --bs-btn-hover-border-color: #15967d; - --bs-btn-focus-shadow-rgb: 60, 198, 171; - --bs-btn-active-color: #fff; - --bs-btn-active-bg: #15967d; - --bs-btn-active-border-color: #148d75; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #fff; - --bs-btn-disabled-bg: #1abc9c; - --bs-btn-disabled-border-color: #1abc9c; -} - -.btn-secondary { - --bs-btn-color: #fff; - --bs-btn-bg: #2c3e50; - --bs-btn-border-color: #2c3e50; - --bs-btn-hover-color: #fff; - --bs-btn-hover-bg: #253544; - --bs-btn-hover-border-color: #233240; - --bs-btn-focus-shadow-rgb: 76, 91, 106; - --bs-btn-active-color: #fff; - --bs-btn-active-bg: #233240; - --bs-btn-active-border-color: #212f3c; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #fff; - --bs-btn-disabled-bg: #2c3e50; - --bs-btn-disabled-border-color: #2c3e50; -} - -.btn-success { - --bs-btn-color: #fff; - --bs-btn-bg: #198754; - --bs-btn-border-color: #198754; - --bs-btn-hover-color: #fff; - --bs-btn-hover-bg: #157347; - --bs-btn-hover-border-color: #146c43; - --bs-btn-focus-shadow-rgb: 60, 153, 110; - --bs-btn-active-color: #fff; - --bs-btn-active-bg: #146c43; - --bs-btn-active-border-color: #13653f; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #fff; - --bs-btn-disabled-bg: #198754; - --bs-btn-disabled-border-color: #198754; -} - -.btn-info { - --bs-btn-color: #000; - --bs-btn-bg: #0dcaf0; - --bs-btn-border-color: #0dcaf0; - --bs-btn-hover-color: #000; - --bs-btn-hover-bg: #31d2f2; - --bs-btn-hover-border-color: #25cff2; - --bs-btn-focus-shadow-rgb: 11, 172, 204; - 
--bs-btn-active-color: #000; - --bs-btn-active-bg: #3dd5f3; - --bs-btn-active-border-color: #25cff2; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #000; - --bs-btn-disabled-bg: #0dcaf0; - --bs-btn-disabled-border-color: #0dcaf0; -} - -.btn-warning { - --bs-btn-color: #000; - --bs-btn-bg: #ffc107; - --bs-btn-border-color: #ffc107; - --bs-btn-hover-color: #000; - --bs-btn-hover-bg: #ffca2c; - --bs-btn-hover-border-color: #ffc720; - --bs-btn-focus-shadow-rgb: 217, 164, 6; - --bs-btn-active-color: #000; - --bs-btn-active-bg: #ffcd39; - --bs-btn-active-border-color: #ffc720; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #000; - --bs-btn-disabled-bg: #ffc107; - --bs-btn-disabled-border-color: #ffc107; -} - -.btn-danger { - --bs-btn-color: #fff; - --bs-btn-bg: #dc3545; - --bs-btn-border-color: #dc3545; - --bs-btn-hover-color: #fff; - --bs-btn-hover-bg: #bb2d3b; - --bs-btn-hover-border-color: #b02a37; - --bs-btn-focus-shadow-rgb: 225, 83, 97; - --bs-btn-active-color: #fff; - --bs-btn-active-bg: #b02a37; - --bs-btn-active-border-color: #a52834; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #fff; - --bs-btn-disabled-bg: #dc3545; - --bs-btn-disabled-border-color: #dc3545; -} - -.btn-light { - --bs-btn-color: #000; - --bs-btn-bg: #f8f9fa; - --bs-btn-border-color: #f8f9fa; - --bs-btn-hover-color: #000; - --bs-btn-hover-bg: #d3d4d5; - --bs-btn-hover-border-color: #c6c7c8; - --bs-btn-focus-shadow-rgb: 211, 212, 213; - --bs-btn-active-color: #000; - --bs-btn-active-bg: #c6c7c8; - --bs-btn-active-border-color: #babbbc; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #000; - --bs-btn-disabled-bg: #f8f9fa; - --bs-btn-disabled-border-color: #f8f9fa; -} - -.btn-dark { - --bs-btn-color: #fff; - --bs-btn-bg: #212529; - --bs-btn-border-color: #212529; - --bs-btn-hover-color: #fff; - --bs-btn-hover-bg: #424649; - --bs-btn-hover-border-color: #373b3e; - --bs-btn-focus-shadow-rgb: 66, 70, 73; - --bs-btn-active-color: #fff; - --bs-btn-active-bg: #4d5154; - --bs-btn-active-border-color: #373b3e; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #fff; - --bs-btn-disabled-bg: #212529; - --bs-btn-disabled-border-color: #212529; -} - -.btn-outline-primary { - --bs-btn-color: #1abc9c; - --bs-btn-border-color: #1abc9c; - --bs-btn-hover-color: #fff; - --bs-btn-hover-bg: #1abc9c; - --bs-btn-hover-border-color: #1abc9c; - --bs-btn-focus-shadow-rgb: 26, 188, 156; - --bs-btn-active-color: #fff; - --bs-btn-active-bg: #1abc9c; - --bs-btn-active-border-color: #1abc9c; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #1abc9c; - --bs-btn-disabled-bg: transparent; - --bs-btn-disabled-border-color: #1abc9c; - --bs-gradient: none; -} - -.btn-outline-secondary { - --bs-btn-color: #2c3e50; - --bs-btn-border-color: #2c3e50; - --bs-btn-hover-color: #fff; - --bs-btn-hover-bg: #2c3e50; - --bs-btn-hover-border-color: #2c3e50; - --bs-btn-focus-shadow-rgb: 44, 62, 80; - --bs-btn-active-color: #fff; - --bs-btn-active-bg: #2c3e50; - --bs-btn-active-border-color: #2c3e50; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #2c3e50; - --bs-btn-disabled-bg: transparent; - --bs-btn-disabled-border-color: #2c3e50; - --bs-gradient: none; -} - -.btn-outline-success { - --bs-btn-color: #198754; - --bs-btn-border-color: #198754; - --bs-btn-hover-color: #fff; - 
--bs-btn-hover-bg: #198754; - --bs-btn-hover-border-color: #198754; - --bs-btn-focus-shadow-rgb: 25, 135, 84; - --bs-btn-active-color: #fff; - --bs-btn-active-bg: #198754; - --bs-btn-active-border-color: #198754; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #198754; - --bs-btn-disabled-bg: transparent; - --bs-btn-disabled-border-color: #198754; - --bs-gradient: none; -} - -.btn-outline-info { - --bs-btn-color: #0dcaf0; - --bs-btn-border-color: #0dcaf0; - --bs-btn-hover-color: #000; - --bs-btn-hover-bg: #0dcaf0; - --bs-btn-hover-border-color: #0dcaf0; - --bs-btn-focus-shadow-rgb: 13, 202, 240; - --bs-btn-active-color: #000; - --bs-btn-active-bg: #0dcaf0; - --bs-btn-active-border-color: #0dcaf0; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #0dcaf0; - --bs-btn-disabled-bg: transparent; - --bs-btn-disabled-border-color: #0dcaf0; - --bs-gradient: none; -} - -.btn-outline-warning { - --bs-btn-color: #ffc107; - --bs-btn-border-color: #ffc107; - --bs-btn-hover-color: #000; - --bs-btn-hover-bg: #ffc107; - --bs-btn-hover-border-color: #ffc107; - --bs-btn-focus-shadow-rgb: 255, 193, 7; - --bs-btn-active-color: #000; - --bs-btn-active-bg: #ffc107; - --bs-btn-active-border-color: #ffc107; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #ffc107; - --bs-btn-disabled-bg: transparent; - --bs-btn-disabled-border-color: #ffc107; - --bs-gradient: none; -} - -.btn-outline-danger { - --bs-btn-color: #dc3545; - --bs-btn-border-color: #dc3545; - --bs-btn-hover-color: #fff; - --bs-btn-hover-bg: #dc3545; - --bs-btn-hover-border-color: #dc3545; - --bs-btn-focus-shadow-rgb: 220, 53, 69; - --bs-btn-active-color: #fff; - --bs-btn-active-bg: #dc3545; - --bs-btn-active-border-color: #dc3545; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #dc3545; - --bs-btn-disabled-bg: transparent; - --bs-btn-disabled-border-color: #dc3545; - --bs-gradient: none; -} - -.btn-outline-light { - --bs-btn-color: #f8f9fa; - --bs-btn-border-color: #f8f9fa; - --bs-btn-hover-color: #000; - --bs-btn-hover-bg: #f8f9fa; - --bs-btn-hover-border-color: #f8f9fa; - --bs-btn-focus-shadow-rgb: 248, 249, 250; - --bs-btn-active-color: #000; - --bs-btn-active-bg: #f8f9fa; - --bs-btn-active-border-color: #f8f9fa; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #f8f9fa; - --bs-btn-disabled-bg: transparent; - --bs-btn-disabled-border-color: #f8f9fa; - --bs-gradient: none; -} - -.btn-outline-dark { - --bs-btn-color: #212529; - --bs-btn-border-color: #212529; - --bs-btn-hover-color: #fff; - --bs-btn-hover-bg: #212529; - --bs-btn-hover-border-color: #212529; - --bs-btn-focus-shadow-rgb: 33, 37, 41; - --bs-btn-active-color: #fff; - --bs-btn-active-bg: #212529; - --bs-btn-active-border-color: #212529; - --bs-btn-active-shadow: inset 0 3px 5px rgba(0, 0, 0, 0.125); - --bs-btn-disabled-color: #212529; - --bs-btn-disabled-bg: transparent; - --bs-btn-disabled-border-color: #212529; - --bs-gradient: none; -} - -.btn-link { - --bs-btn-font-weight: 400; - --bs-btn-color: var(--bs-link-color); - --bs-btn-bg: transparent; - --bs-btn-border-color: transparent; - --bs-btn-hover-color: var(--bs-link-hover-color); - --bs-btn-hover-border-color: transparent; - --bs-btn-active-color: var(--bs-link-hover-color); - --bs-btn-active-border-color: transparent; - --bs-btn-disabled-color: #6c757d; - --bs-btn-disabled-border-color: transparent; - --bs-btn-box-shadow: 
none; - --bs-btn-focus-shadow-rgb: 60, 198, 171; - text-decoration: underline; -} -.btn-link:focus-visible { - color: var(--bs-btn-color); -} -.btn-link:hover { - color: var(--bs-btn-hover-color); -} - -.btn-lg, .btn-group-lg > .btn { - --bs-btn-padding-y: 0.5rem; - --bs-btn-padding-x: 1rem; - --bs-btn-font-size: 1.25rem; - --bs-btn-border-radius: 0.75rem; -} - -.btn-sm, .btn-group-sm > .btn { - --bs-btn-padding-y: 0.25rem; - --bs-btn-padding-x: 0.5rem; - --bs-btn-font-size: 0.875rem; - --bs-btn-border-radius: 0.25rem; -} - -.fade { - transition: opacity 0.15s linear; -} -@media (prefers-reduced-motion: reduce) { - .fade { - transition: none; - } -} -.fade:not(.show) { - opacity: 0; -} - -.collapse:not(.show) { - display: none; -} - -.collapsing { - height: 0; - overflow: hidden; - transition: height 0.35s ease; -} -@media (prefers-reduced-motion: reduce) { - .collapsing { - transition: none; - } -} -.collapsing.collapse-horizontal { - width: 0; - height: auto; - transition: width 0.35s ease; -} -@media (prefers-reduced-motion: reduce) { - .collapsing.collapse-horizontal { - transition: none; - } -} - -.dropup, -.dropend, -.dropdown, -.dropstart, -.dropup-center, -.dropdown-center { - position: relative; -} - -.dropdown-toggle { - white-space: nowrap; -} -.dropdown-toggle::after { - display: inline-block; - margin-left: 0.255em; - vertical-align: 0.255em; - content: ""; - border-top: 0.3em solid; - border-right: 0.3em solid transparent; - border-bottom: 0; - border-left: 0.3em solid transparent; -} -.dropdown-toggle:empty::after { - margin-left: 0; -} - -.dropdown-menu { - --bs-dropdown-zindex: 1000; - --bs-dropdown-min-width: 10rem; - --bs-dropdown-padding-x: 0; - --bs-dropdown-padding-y: 0.5rem; - --bs-dropdown-spacer: 0.125rem; - --bs-dropdown-font-size: 1rem; - --bs-dropdown-color: #212529; - --bs-dropdown-bg: #fff; - --bs-dropdown-border-color: var(--bs-border-color-translucent); - --bs-dropdown-border-radius: 0.5rem; - --bs-dropdown-border-width: 0.125rem; - --bs-dropdown-inner-border-radius: 0.375rem; - --bs-dropdown-divider-bg: var(--bs-border-color-translucent); - --bs-dropdown-divider-margin-y: 0.5rem; - --bs-dropdown-box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15); - --bs-dropdown-link-color: #212529; - --bs-dropdown-link-hover-color: #1e2125; - --bs-dropdown-link-hover-bg: #e9ecef; - --bs-dropdown-link-active-color: #fff; - --bs-dropdown-link-active-bg: #1abc9c; - --bs-dropdown-link-disabled-color: #adb5bd; - --bs-dropdown-item-padding-x: 1rem; - --bs-dropdown-item-padding-y: 0.25rem; - --bs-dropdown-header-color: #6c757d; - --bs-dropdown-header-padding-x: 1rem; - --bs-dropdown-header-padding-y: 0.5rem; - position: absolute; - z-index: var(--bs-dropdown-zindex); - display: none; - min-width: var(--bs-dropdown-min-width); - padding: var(--bs-dropdown-padding-y) var(--bs-dropdown-padding-x); - margin: 0; - font-size: var(--bs-dropdown-font-size); - color: var(--bs-dropdown-color); - text-align: left; - list-style: none; - background-color: var(--bs-dropdown-bg); - background-clip: padding-box; - border: var(--bs-dropdown-border-width) solid var(--bs-dropdown-border-color); - border-radius: var(--bs-dropdown-border-radius); -} -.dropdown-menu[data-bs-popper] { - top: 100%; - left: 0; - margin-top: var(--bs-dropdown-spacer); -} - -.dropdown-menu-start { - --bs-position: start; -} -.dropdown-menu-start[data-bs-popper] { - right: auto; - left: 0; -} - -.dropdown-menu-end { - --bs-position: end; -} -.dropdown-menu-end[data-bs-popper] { - right: 0; - left: auto; -} - -@media (min-width: 
576px) { - .dropdown-menu-sm-start { - --bs-position: start; - } - .dropdown-menu-sm-start[data-bs-popper] { - right: auto; - left: 0; - } - .dropdown-menu-sm-end { - --bs-position: end; - } - .dropdown-menu-sm-end[data-bs-popper] { - right: 0; - left: auto; - } -} -@media (min-width: 768px) { - .dropdown-menu-md-start { - --bs-position: start; - } - .dropdown-menu-md-start[data-bs-popper] { - right: auto; - left: 0; - } - .dropdown-menu-md-end { - --bs-position: end; - } - .dropdown-menu-md-end[data-bs-popper] { - right: 0; - left: auto; - } -} -@media (min-width: 992px) { - .dropdown-menu-lg-start { - --bs-position: start; - } - .dropdown-menu-lg-start[data-bs-popper] { - right: auto; - left: 0; - } - .dropdown-menu-lg-end { - --bs-position: end; - } - .dropdown-menu-lg-end[data-bs-popper] { - right: 0; - left: auto; - } -} -@media (min-width: 1200px) { - .dropdown-menu-xl-start { - --bs-position: start; - } - .dropdown-menu-xl-start[data-bs-popper] { - right: auto; - left: 0; - } - .dropdown-menu-xl-end { - --bs-position: end; - } - .dropdown-menu-xl-end[data-bs-popper] { - right: 0; - left: auto; - } -} -@media (min-width: 1400px) { - .dropdown-menu-xxl-start { - --bs-position: start; - } - .dropdown-menu-xxl-start[data-bs-popper] { - right: auto; - left: 0; - } - .dropdown-menu-xxl-end { - --bs-position: end; - } - .dropdown-menu-xxl-end[data-bs-popper] { - right: 0; - left: auto; - } -} -.dropup .dropdown-menu[data-bs-popper] { - top: auto; - bottom: 100%; - margin-top: 0; - margin-bottom: var(--bs-dropdown-spacer); -} -.dropup .dropdown-toggle::after { - display: inline-block; - margin-left: 0.255em; - vertical-align: 0.255em; - content: ""; - border-top: 0; - border-right: 0.3em solid transparent; - border-bottom: 0.3em solid; - border-left: 0.3em solid transparent; -} -.dropup .dropdown-toggle:empty::after { - margin-left: 0; -} - -.dropend .dropdown-menu[data-bs-popper] { - top: 0; - right: auto; - left: 100%; - margin-top: 0; - margin-left: var(--bs-dropdown-spacer); -} -.dropend .dropdown-toggle::after { - display: inline-block; - margin-left: 0.255em; - vertical-align: 0.255em; - content: ""; - border-top: 0.3em solid transparent; - border-right: 0; - border-bottom: 0.3em solid transparent; - border-left: 0.3em solid; -} -.dropend .dropdown-toggle:empty::after { - margin-left: 0; -} -.dropend .dropdown-toggle::after { - vertical-align: 0; -} - -.dropstart .dropdown-menu[data-bs-popper] { - top: 0; - right: 100%; - left: auto; - margin-top: 0; - margin-right: var(--bs-dropdown-spacer); -} -.dropstart .dropdown-toggle::after { - display: inline-block; - margin-left: 0.255em; - vertical-align: 0.255em; - content: ""; -} -.dropstart .dropdown-toggle::after { - display: none; -} -.dropstart .dropdown-toggle::before { - display: inline-block; - margin-right: 0.255em; - vertical-align: 0.255em; - content: ""; - border-top: 0.3em solid transparent; - border-right: 0.3em solid; - border-bottom: 0.3em solid transparent; -} -.dropstart .dropdown-toggle:empty::after { - margin-left: 0; -} -.dropstart .dropdown-toggle::before { - vertical-align: 0; -} - -.dropdown-divider { - height: 0; - margin: var(--bs-dropdown-divider-margin-y) 0; - overflow: hidden; - border-top: 1px solid var(--bs-dropdown-divider-bg); - opacity: 1; -} - -.dropdown-item { - display: block; - width: 100%; - padding: var(--bs-dropdown-item-padding-y) var(--bs-dropdown-item-padding-x); - clear: both; - font-weight: 400; - color: var(--bs-dropdown-link-color); - text-align: inherit; - text-decoration: none; - 
white-space: nowrap; - background-color: transparent; - border: 0; -} -.dropdown-item:hover, .dropdown-item:focus { - color: var(--bs-dropdown-link-hover-color); - background-color: var(--bs-dropdown-link-hover-bg); -} -.dropdown-item.active, .dropdown-item:active { - color: var(--bs-dropdown-link-active-color); - text-decoration: none; - background-color: var(--bs-dropdown-link-active-bg); -} -.dropdown-item.disabled, .dropdown-item:disabled { - color: var(--bs-dropdown-link-disabled-color); - pointer-events: none; - background-color: transparent; -} - -.dropdown-menu.show { - display: block; -} - -.dropdown-header { - display: block; - padding: var(--bs-dropdown-header-padding-y) var(--bs-dropdown-header-padding-x); - margin-bottom: 0; - font-size: 0.875rem; - color: var(--bs-dropdown-header-color); - white-space: nowrap; -} - -.dropdown-item-text { - display: block; - padding: var(--bs-dropdown-item-padding-y) var(--bs-dropdown-item-padding-x); - color: var(--bs-dropdown-link-color); -} - -.dropdown-menu-dark { - --bs-dropdown-color: #dee2e6; - --bs-dropdown-bg: #343a40; - --bs-dropdown-border-color: var(--bs-border-color-translucent); - --bs-dropdown-box-shadow: ; - --bs-dropdown-link-color: #dee2e6; - --bs-dropdown-link-hover-color: #fff; - --bs-dropdown-divider-bg: var(--bs-border-color-translucent); - --bs-dropdown-link-hover-bg: rgba(255, 255, 255, 0.15); - --bs-dropdown-link-active-color: #fff; - --bs-dropdown-link-active-bg: #1abc9c; - --bs-dropdown-link-disabled-color: #adb5bd; - --bs-dropdown-header-color: #adb5bd; -} - -.btn-group, -.btn-group-vertical { - position: relative; - display: inline-flex; - vertical-align: middle; -} -.btn-group > .btn, -.btn-group-vertical > .btn { - position: relative; - flex: 1 1 auto; -} -.btn-group > .btn-check:checked + .btn, -.btn-group > .btn-check:focus + .btn, -.btn-group > .btn:hover, -.btn-group > .btn:focus, -.btn-group > .btn:active, -.btn-group > .btn.active, -.btn-group-vertical > .btn-check:checked + .btn, -.btn-group-vertical > .btn-check:focus + .btn, -.btn-group-vertical > .btn:hover, -.btn-group-vertical > .btn:focus, -.btn-group-vertical > .btn:active, -.btn-group-vertical > .btn.active { - z-index: 1; -} - -.btn-toolbar { - display: flex; - flex-wrap: wrap; - justify-content: flex-start; -} -.btn-toolbar .input-group { - width: auto; -} - -.btn-group { - border-radius: 0.5rem; -} -.btn-group > :not(.btn-check:first-child) + .btn, -.btn-group > .btn-group:not(:first-child) { - margin-left: -0.125rem; -} -.btn-group > .btn:not(:last-child):not(.dropdown-toggle), -.btn-group > .btn.dropdown-toggle-split:first-child, -.btn-group > .btn-group:not(:last-child) > .btn { - border-top-right-radius: 0; - border-bottom-right-radius: 0; -} -.btn-group > .btn:nth-child(n+3), -.btn-group > :not(.btn-check) + .btn, -.btn-group > .btn-group:not(:first-child) > .btn { - border-top-left-radius: 0; - border-bottom-left-radius: 0; -} - -.dropdown-toggle-split { - padding-right: 0.5625rem; - padding-left: 0.5625rem; -} -.dropdown-toggle-split::after, .dropup .dropdown-toggle-split::after, .dropend .dropdown-toggle-split::after { - margin-left: 0; -} -.dropstart .dropdown-toggle-split::before { - margin-right: 0; -} - -.btn-sm + .dropdown-toggle-split, .btn-group-sm > .btn + .dropdown-toggle-split { - padding-right: 0.375rem; - padding-left: 0.375rem; -} - -.btn-lg + .dropdown-toggle-split, .btn-group-lg > .btn + .dropdown-toggle-split { - padding-right: 0.75rem; - padding-left: 0.75rem; -} - -.btn-group-vertical { - flex-direction: column; - 
align-items: flex-start; - justify-content: center; -} -.btn-group-vertical > .btn, -.btn-group-vertical > .btn-group { - width: 100%; -} -.btn-group-vertical > .btn:not(:first-child), -.btn-group-vertical > .btn-group:not(:first-child) { - margin-top: -0.125rem; -} -.btn-group-vertical > .btn:not(:last-child):not(.dropdown-toggle), -.btn-group-vertical > .btn-group:not(:last-child) > .btn { - border-bottom-right-radius: 0; - border-bottom-left-radius: 0; -} -.btn-group-vertical > .btn ~ .btn, -.btn-group-vertical > .btn-group:not(:first-child) > .btn { - border-top-left-radius: 0; - border-top-right-radius: 0; -} - -.nav { - --bs-nav-link-padding-x: 1rem; - --bs-nav-link-padding-y: 0.5rem; - --bs-nav-link-font-weight: ; - --bs-nav-link-color: var(--bs-link-color); - --bs-nav-link-hover-color: var(--bs-link-hover-color); - --bs-nav-link-disabled-color: #6c757d; - display: flex; - flex-wrap: wrap; - padding-left: 0; - margin-bottom: 0; - list-style: none; -} - -.nav-link { - display: block; - padding: var(--bs-nav-link-padding-y) var(--bs-nav-link-padding-x); - font-size: var(--bs-nav-link-font-size); - font-weight: var(--bs-nav-link-font-weight); - color: var(--bs-nav-link-color); - text-decoration: none; - transition: color 0.15s ease-in-out, background-color 0.15s ease-in-out, border-color 0.15s ease-in-out; -} -@media (prefers-reduced-motion: reduce) { - .nav-link { - transition: none; - } -} -.nav-link:hover, .nav-link:focus { - color: var(--bs-nav-link-hover-color); -} -.nav-link.disabled { - color: var(--bs-nav-link-disabled-color); - pointer-events: none; - cursor: default; -} - -.nav-tabs { - --bs-nav-tabs-border-width: 0.125rem; - --bs-nav-tabs-border-color: #dee2e6; - --bs-nav-tabs-border-radius: 0.5rem; - --bs-nav-tabs-link-hover-border-color: #e9ecef #e9ecef #dee2e6; - --bs-nav-tabs-link-active-color: #495057; - --bs-nav-tabs-link-active-bg: #fff; - --bs-nav-tabs-link-active-border-color: #dee2e6 #dee2e6 #fff; - border-bottom: var(--bs-nav-tabs-border-width) solid var(--bs-nav-tabs-border-color); -} -.nav-tabs .nav-link { - margin-bottom: calc(-1 * var(--bs-nav-tabs-border-width)); - background: none; - border: var(--bs-nav-tabs-border-width) solid transparent; - border-top-left-radius: var(--bs-nav-tabs-border-radius); - border-top-right-radius: var(--bs-nav-tabs-border-radius); -} -.nav-tabs .nav-link:hover, .nav-tabs .nav-link:focus { - isolation: isolate; - border-color: var(--bs-nav-tabs-link-hover-border-color); -} -.nav-tabs .nav-link.disabled, .nav-tabs .nav-link:disabled { - color: var(--bs-nav-link-disabled-color); - background-color: transparent; - border-color: transparent; -} -.nav-tabs .nav-link.active, -.nav-tabs .nav-item.show .nav-link { - color: var(--bs-nav-tabs-link-active-color); - background-color: var(--bs-nav-tabs-link-active-bg); - border-color: var(--bs-nav-tabs-link-active-border-color); -} -.nav-tabs .dropdown-menu { - margin-top: calc(-1 * var(--bs-nav-tabs-border-width)); - border-top-left-radius: 0; - border-top-right-radius: 0; -} - -.nav-pills { - --bs-nav-pills-border-radius: 0.5rem; - --bs-nav-pills-link-active-color: #fff; - --bs-nav-pills-link-active-bg: #1abc9c; -} -.nav-pills .nav-link { - background: none; - border: 0; - border-radius: var(--bs-nav-pills-border-radius); -} -.nav-pills .nav-link:disabled { - color: var(--bs-nav-link-disabled-color); - background-color: transparent; - border-color: transparent; -} -.nav-pills .nav-link.active, -.nav-pills .show > .nav-link { - color: var(--bs-nav-pills-link-active-color); - 
background-color: var(--bs-nav-pills-link-active-bg); -} - -.nav-fill > .nav-link, -.nav-fill .nav-item { - flex: 1 1 auto; - text-align: center; -} - -.nav-justified > .nav-link, -.nav-justified .nav-item { - flex-basis: 0; - flex-grow: 1; - text-align: center; -} - -.nav-fill .nav-item .nav-link, -.nav-justified .nav-item .nav-link { - width: 100%; -} - -.tab-content > .tab-pane { - display: none; -} -.tab-content > .active { - display: block; -} - -.navbar { - --bs-navbar-padding-x: 0; - --bs-navbar-padding-y: 0.5rem; - --bs-navbar-color: rgba(0, 0, 0, 0.55); - --bs-navbar-hover-color: rgba(0, 0, 0, 0.7); - --bs-navbar-disabled-color: rgba(0, 0, 0, 0.3); - --bs-navbar-active-color: rgba(0, 0, 0, 0.9); - --bs-navbar-brand-padding-y: 0.3125rem; - --bs-navbar-brand-margin-end: 1rem; - --bs-navbar-brand-font-size: 1.25rem; - --bs-navbar-brand-color: rgba(0, 0, 0, 0.9); - --bs-navbar-brand-hover-color: rgba(0, 0, 0, 0.9); - --bs-navbar-nav-link-padding-x: 0.5rem; - --bs-navbar-toggler-padding-y: 0.25rem; - --bs-navbar-toggler-padding-x: 0.75rem; - --bs-navbar-toggler-font-size: 1.25rem; - --bs-navbar-toggler-icon-bg: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%280, 0, 0, 0.55%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e"); - --bs-navbar-toggler-border-color: rgba(0, 0, 0, 0.1); - --bs-navbar-toggler-border-radius: 0.5rem; - --bs-navbar-toggler-focus-width: 0.25rem; - --bs-navbar-toggler-transition: box-shadow 0.15s ease-in-out; - position: relative; - display: flex; - flex-wrap: wrap; - align-items: center; - justify-content: space-between; - padding: var(--bs-navbar-padding-y) var(--bs-navbar-padding-x); -} -.navbar > .container, -.navbar > .container-fluid, -.navbar > .container-sm, -.navbar > .container-md, -.navbar > .container-lg, -.navbar > .container-xl, -.navbar > .container-xxl { - display: flex; - flex-wrap: inherit; - align-items: center; - justify-content: space-between; -} -.navbar-brand { - padding-top: var(--bs-navbar-brand-padding-y); - padding-bottom: var(--bs-navbar-brand-padding-y); - margin-right: var(--bs-navbar-brand-margin-end); - font-size: var(--bs-navbar-brand-font-size); - color: var(--bs-navbar-brand-color); - text-decoration: none; - white-space: nowrap; -} -.navbar-brand:hover, .navbar-brand:focus { - color: var(--bs-navbar-brand-hover-color); -} - -.navbar-nav { - --bs-nav-link-padding-x: 0; - --bs-nav-link-padding-y: 0.5rem; - --bs-nav-link-font-weight: ; - --bs-nav-link-color: var(--bs-navbar-color); - --bs-nav-link-hover-color: var(--bs-navbar-hover-color); - --bs-nav-link-disabled-color: var(--bs-navbar-disabled-color); - display: flex; - flex-direction: column; - padding-left: 0; - margin-bottom: 0; - list-style: none; -} -.navbar-nav .show > .nav-link, -.navbar-nav .nav-link.active { - color: var(--bs-navbar-active-color); -} -.navbar-nav .dropdown-menu { - position: static; -} - -.navbar-text { - padding-top: 0.5rem; - padding-bottom: 0.5rem; - color: var(--bs-navbar-color); -} -.navbar-text a, -.navbar-text a:hover, -.navbar-text a:focus { - color: var(--bs-navbar-active-color); -} - -.navbar-collapse { - flex-basis: 100%; - flex-grow: 1; - align-items: center; -} - -.navbar-toggler { - padding: var(--bs-navbar-toggler-padding-y) var(--bs-navbar-toggler-padding-x); - font-size: var(--bs-navbar-toggler-font-size); - line-height: 1; - color: var(--bs-navbar-color); - background-color: transparent; - border: 
var(--bs-border-width) solid var(--bs-navbar-toggler-border-color); - border-radius: var(--bs-navbar-toggler-border-radius); - transition: var(--bs-navbar-toggler-transition); -} -@media (prefers-reduced-motion: reduce) { - .navbar-toggler { - transition: none; - } -} -.navbar-toggler:hover { - text-decoration: none; -} -.navbar-toggler:focus { - text-decoration: none; - outline: 0; - box-shadow: 0 0 0 var(--bs-navbar-toggler-focus-width); -} - -.navbar-toggler-icon { - display: inline-block; - width: 1.5em; - height: 1.5em; - vertical-align: middle; - background-image: var(--bs-navbar-toggler-icon-bg); - background-repeat: no-repeat; - background-position: center; - background-size: 100%; -} - -.navbar-nav-scroll { - max-height: var(--bs-scroll-height, 75vh); - overflow-y: auto; -} - -@media (min-width: 576px) { - .navbar-expand-sm { - flex-wrap: nowrap; - justify-content: flex-start; - } - .navbar-expand-sm .navbar-nav { - flex-direction: row; - } - .navbar-expand-sm .navbar-nav .dropdown-menu { - position: absolute; - } - .navbar-expand-sm .navbar-nav .nav-link { - padding-right: var(--bs-navbar-nav-link-padding-x); - padding-left: var(--bs-navbar-nav-link-padding-x); - } - .navbar-expand-sm .navbar-nav-scroll { - overflow: visible; - } - .navbar-expand-sm .navbar-collapse { - display: flex !important; - flex-basis: auto; - } - .navbar-expand-sm .navbar-toggler { - display: none; - } - .navbar-expand-sm .offcanvas { - position: static; - z-index: auto; - flex-grow: 1; - width: auto !important; - height: auto !important; - visibility: visible !important; - background-color: transparent !important; - border: 0 !important; - transform: none !important; - transition: none; - } - .navbar-expand-sm .offcanvas .offcanvas-header { - display: none; - } - .navbar-expand-sm .offcanvas .offcanvas-body { - display: flex; - flex-grow: 0; - padding: 0; - overflow-y: visible; - } -} -@media (min-width: 768px) { - .navbar-expand-md { - flex-wrap: nowrap; - justify-content: flex-start; - } - .navbar-expand-md .navbar-nav { - flex-direction: row; - } - .navbar-expand-md .navbar-nav .dropdown-menu { - position: absolute; - } - .navbar-expand-md .navbar-nav .nav-link { - padding-right: var(--bs-navbar-nav-link-padding-x); - padding-left: var(--bs-navbar-nav-link-padding-x); - } - .navbar-expand-md .navbar-nav-scroll { - overflow: visible; - } - .navbar-expand-md .navbar-collapse { - display: flex !important; - flex-basis: auto; - } - .navbar-expand-md .navbar-toggler { - display: none; - } - .navbar-expand-md .offcanvas { - position: static; - z-index: auto; - flex-grow: 1; - width: auto !important; - height: auto !important; - visibility: visible !important; - background-color: transparent !important; - border: 0 !important; - transform: none !important; - transition: none; - } - .navbar-expand-md .offcanvas .offcanvas-header { - display: none; - } - .navbar-expand-md .offcanvas .offcanvas-body { - display: flex; - flex-grow: 0; - padding: 0; - overflow-y: visible; - } -} -@media (min-width: 992px) { - .navbar-expand-lg { - flex-wrap: nowrap; - justify-content: flex-start; - } - .navbar-expand-lg .navbar-nav { - flex-direction: row; - } - .navbar-expand-lg .navbar-nav .dropdown-menu { - position: absolute; - } - .navbar-expand-lg .navbar-nav .nav-link { - padding-right: var(--bs-navbar-nav-link-padding-x); - padding-left: var(--bs-navbar-nav-link-padding-x); - } - .navbar-expand-lg .navbar-nav-scroll { - overflow: visible; - } - .navbar-expand-lg .navbar-collapse { - display: flex !important; - flex-basis: 
auto; - } - .navbar-expand-lg .navbar-toggler { - display: none; - } - .navbar-expand-lg .offcanvas { - position: static; - z-index: auto; - flex-grow: 1; - width: auto !important; - height: auto !important; - visibility: visible !important; - background-color: transparent !important; - border: 0 !important; - transform: none !important; - transition: none; - } - .navbar-expand-lg .offcanvas .offcanvas-header { - display: none; - } - .navbar-expand-lg .offcanvas .offcanvas-body { - display: flex; - flex-grow: 0; - padding: 0; - overflow-y: visible; - } -} -@media (min-width: 1200px) { - .navbar-expand-xl { - flex-wrap: nowrap; - justify-content: flex-start; - } - .navbar-expand-xl .navbar-nav { - flex-direction: row; - } - .navbar-expand-xl .navbar-nav .dropdown-menu { - position: absolute; - } - .navbar-expand-xl .navbar-nav .nav-link { - padding-right: var(--bs-navbar-nav-link-padding-x); - padding-left: var(--bs-navbar-nav-link-padding-x); - } - .navbar-expand-xl .navbar-nav-scroll { - overflow: visible; - } - .navbar-expand-xl .navbar-collapse { - display: flex !important; - flex-basis: auto; - } - .navbar-expand-xl .navbar-toggler { - display: none; - } - .navbar-expand-xl .offcanvas { - position: static; - z-index: auto; - flex-grow: 1; - width: auto !important; - height: auto !important; - visibility: visible !important; - background-color: transparent !important; - border: 0 !important; - transform: none !important; - transition: none; - } - .navbar-expand-xl .offcanvas .offcanvas-header { - display: none; - } - .navbar-expand-xl .offcanvas .offcanvas-body { - display: flex; - flex-grow: 0; - padding: 0; - overflow-y: visible; - } -} -@media (min-width: 1400px) { - .navbar-expand-xxl { - flex-wrap: nowrap; - justify-content: flex-start; - } - .navbar-expand-xxl .navbar-nav { - flex-direction: row; - } - .navbar-expand-xxl .navbar-nav .dropdown-menu { - position: absolute; - } - .navbar-expand-xxl .navbar-nav .nav-link { - padding-right: var(--bs-navbar-nav-link-padding-x); - padding-left: var(--bs-navbar-nav-link-padding-x); - } - .navbar-expand-xxl .navbar-nav-scroll { - overflow: visible; - } - .navbar-expand-xxl .navbar-collapse { - display: flex !important; - flex-basis: auto; - } - .navbar-expand-xxl .navbar-toggler { - display: none; - } - .navbar-expand-xxl .offcanvas { - position: static; - z-index: auto; - flex-grow: 1; - width: auto !important; - height: auto !important; - visibility: visible !important; - background-color: transparent !important; - border: 0 !important; - transform: none !important; - transition: none; - } - .navbar-expand-xxl .offcanvas .offcanvas-header { - display: none; - } - .navbar-expand-xxl .offcanvas .offcanvas-body { - display: flex; - flex-grow: 0; - padding: 0; - overflow-y: visible; - } -} -.navbar-expand { - flex-wrap: nowrap; - justify-content: flex-start; -} -.navbar-expand .navbar-nav { - flex-direction: row; -} -.navbar-expand .navbar-nav .dropdown-menu { - position: absolute; -} -.navbar-expand .navbar-nav .nav-link { - padding-right: var(--bs-navbar-nav-link-padding-x); - padding-left: var(--bs-navbar-nav-link-padding-x); -} -.navbar-expand .navbar-nav-scroll { - overflow: visible; -} -.navbar-expand .navbar-collapse { - display: flex !important; - flex-basis: auto; -} -.navbar-expand .navbar-toggler { - display: none; -} -.navbar-expand .offcanvas { - position: static; - z-index: auto; - flex-grow: 1; - width: auto !important; - height: auto !important; - visibility: visible !important; - background-color: transparent !important; - 
border: 0 !important; - transform: none !important; - transition: none; -} -.navbar-expand .offcanvas .offcanvas-header { - display: none; -} -.navbar-expand .offcanvas .offcanvas-body { - display: flex; - flex-grow: 0; - padding: 0; - overflow-y: visible; -} - -.navbar-dark { - --bs-navbar-color: rgba(255, 255, 255, 0.55); - --bs-navbar-hover-color: rgba(255, 255, 255, 0.75); - --bs-navbar-disabled-color: rgba(255, 255, 255, 0.25); - --bs-navbar-active-color: #fff; - --bs-navbar-brand-color: #fff; - --bs-navbar-brand-hover-color: #fff; - --bs-navbar-toggler-border-color: rgba(255, 255, 255, 0.1); - --bs-navbar-toggler-icon-bg: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%28255, 255, 255, 0.55%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e"); -} - -.card { - --bs-card-spacer-y: 1rem; - --bs-card-spacer-x: 1rem; - --bs-card-title-spacer-y: 0.5rem; - --bs-card-border-width: 0.125rem; - --bs-card-border-color: var(--bs-border-color-translucent); - --bs-card-border-radius: 0.5rem; - --bs-card-box-shadow: ; - --bs-card-inner-border-radius: 0.375rem; - --bs-card-cap-padding-y: 0.5rem; - --bs-card-cap-padding-x: 1rem; - --bs-card-cap-bg: rgba(0, 0, 0, 0.03); - --bs-card-cap-color: ; - --bs-card-height: ; - --bs-card-color: ; - --bs-card-bg: #fff; - --bs-card-img-overlay-padding: 1rem; - --bs-card-group-margin: 0.75rem; - position: relative; - display: flex; - flex-direction: column; - min-width: 0; - height: var(--bs-card-height); - word-wrap: break-word; - background-color: var(--bs-card-bg); - background-clip: border-box; - border: var(--bs-card-border-width) solid var(--bs-card-border-color); - border-radius: var(--bs-card-border-radius); -} -.card > hr { - margin-right: 0; - margin-left: 0; -} -.card > .list-group { - border-top: inherit; - border-bottom: inherit; -} -.card > .list-group:first-child { - border-top-width: 0; - border-top-left-radius: var(--bs-card-inner-border-radius); - border-top-right-radius: var(--bs-card-inner-border-radius); -} -.card > .list-group:last-child { - border-bottom-width: 0; - border-bottom-right-radius: var(--bs-card-inner-border-radius); - border-bottom-left-radius: var(--bs-card-inner-border-radius); -} -.card > .card-header + .list-group, -.card > .list-group + .card-footer { - border-top: 0; -} - -.card-body { - flex: 1 1 auto; - padding: var(--bs-card-spacer-y) var(--bs-card-spacer-x); - color: var(--bs-card-color); -} - -.card-title { - margin-bottom: var(--bs-card-title-spacer-y); -} - -.card-subtitle { - margin-top: calc(-0.5 * var(--bs-card-title-spacer-y)); - margin-bottom: 0; -} - -.card-text:last-child { - margin-bottom: 0; -} - -.card-link + .card-link { - margin-left: var(--bs-card-spacer-x); -} - -.card-header { - padding: var(--bs-card-cap-padding-y) var(--bs-card-cap-padding-x); - margin-bottom: 0; - color: var(--bs-card-cap-color); - background-color: var(--bs-card-cap-bg); - border-bottom: var(--bs-card-border-width) solid var(--bs-card-border-color); -} -.card-header:first-child { - border-radius: var(--bs-card-inner-border-radius) var(--bs-card-inner-border-radius) 0 0; -} - -.card-footer { - padding: var(--bs-card-cap-padding-y) var(--bs-card-cap-padding-x); - color: var(--bs-card-cap-color); - background-color: var(--bs-card-cap-bg); - border-top: var(--bs-card-border-width) solid var(--bs-card-border-color); -} -.card-footer:last-child { - border-radius: 0 0 var(--bs-card-inner-border-radius) 
var(--bs-card-inner-border-radius); -} - -.card-header-tabs { - margin-right: calc(-0.5 * var(--bs-card-cap-padding-x)); - margin-bottom: calc(-1 * var(--bs-card-cap-padding-y)); - margin-left: calc(-0.5 * var(--bs-card-cap-padding-x)); - border-bottom: 0; -} -.card-header-tabs .nav-link.active { - background-color: var(--bs-card-bg); - border-bottom-color: var(--bs-card-bg); -} - -.card-header-pills { - margin-right: calc(-0.5 * var(--bs-card-cap-padding-x)); - margin-left: calc(-0.5 * var(--bs-card-cap-padding-x)); -} - -.card-img-overlay { - position: absolute; - top: 0; - right: 0; - bottom: 0; - left: 0; - padding: var(--bs-card-img-overlay-padding); - border-radius: var(--bs-card-inner-border-radius); -} - -.card-img, -.card-img-top, -.card-img-bottom { - width: 100%; -} - -.card-img, -.card-img-top { - border-top-left-radius: var(--bs-card-inner-border-radius); - border-top-right-radius: var(--bs-card-inner-border-radius); -} - -.card-img, -.card-img-bottom { - border-bottom-right-radius: var(--bs-card-inner-border-radius); - border-bottom-left-radius: var(--bs-card-inner-border-radius); -} - -.card-group > .card { - margin-bottom: var(--bs-card-group-margin); -} -@media (min-width: 576px) { - .card-group { - display: flex; - flex-flow: row wrap; - } - .card-group > .card { - flex: 1 0 0%; - margin-bottom: 0; - } - .card-group > .card + .card { - margin-left: 0; - border-left: 0; - } - .card-group > .card:not(:last-child) { - border-top-right-radius: 0; - border-bottom-right-radius: 0; - } - .card-group > .card:not(:last-child) .card-img-top, - .card-group > .card:not(:last-child) .card-header { - border-top-right-radius: 0; - } - .card-group > .card:not(:last-child) .card-img-bottom, - .card-group > .card:not(:last-child) .card-footer { - border-bottom-right-radius: 0; - } - .card-group > .card:not(:first-child) { - border-top-left-radius: 0; - border-bottom-left-radius: 0; - } - .card-group > .card:not(:first-child) .card-img-top, - .card-group > .card:not(:first-child) .card-header { - border-top-left-radius: 0; - } - .card-group > .card:not(:first-child) .card-img-bottom, - .card-group > .card:not(:first-child) .card-footer { - border-bottom-left-radius: 0; - } -} - -.accordion { - --bs-accordion-color: #212529; - --bs-accordion-bg: #fff; - --bs-accordion-transition: color 0.15s ease-in-out, background-color 0.15s ease-in-out, border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out, border-radius 0.15s ease; - --bs-accordion-border-color: var(--bs-border-color); - --bs-accordion-border-width: 0.125rem; - --bs-accordion-border-radius: 0.5rem; - --bs-accordion-inner-border-radius: 0.375rem; - --bs-accordion-btn-padding-x: 1.25rem; - --bs-accordion-btn-padding-y: 1rem; - --bs-accordion-btn-color: #212529; - --bs-accordion-btn-bg: var(--bs-accordion-bg); - --bs-accordion-btn-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23212529'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e"); - --bs-accordion-btn-icon-width: 1.25rem; - --bs-accordion-btn-icon-transform: rotate(-180deg); - --bs-accordion-btn-icon-transition: transform 0.2s ease-in-out; - --bs-accordion-btn-active-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%2317a98c'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 
0 0 1 0-.708z'/%3e%3c/svg%3e"); - --bs-accordion-btn-focus-border-color: #8ddece; - --bs-accordion-btn-focus-box-shadow: 0 0 0 0.25rem rgba(26, 188, 156, 0.25); - --bs-accordion-body-padding-x: 1.25rem; - --bs-accordion-body-padding-y: 1rem; - --bs-accordion-active-color: #17a98c; - --bs-accordion-active-bg: #e8f8f5; -} - -.accordion-button { - position: relative; - display: flex; - align-items: center; - width: 100%; - padding: var(--bs-accordion-btn-padding-y) var(--bs-accordion-btn-padding-x); - font-size: 1rem; - color: var(--bs-accordion-btn-color); - text-align: left; - background-color: var(--bs-accordion-btn-bg); - border: 0; - border-radius: 0; - overflow-anchor: none; - transition: var(--bs-accordion-transition); -} -@media (prefers-reduced-motion: reduce) { - .accordion-button { - transition: none; - } -} -.accordion-button:not(.collapsed) { - color: var(--bs-accordion-active-color); - background-color: var(--bs-accordion-active-bg); - box-shadow: inset 0 calc(-1 * var(--bs-accordion-border-width)) 0 var(--bs-accordion-border-color); -} -.accordion-button:not(.collapsed)::after { - background-image: var(--bs-accordion-btn-active-icon); - transform: var(--bs-accordion-btn-icon-transform); -} -.accordion-button::after { - flex-shrink: 0; - width: var(--bs-accordion-btn-icon-width); - height: var(--bs-accordion-btn-icon-width); - margin-left: auto; - content: ""; - background-image: var(--bs-accordion-btn-icon); - background-repeat: no-repeat; - background-size: var(--bs-accordion-btn-icon-width); - transition: var(--bs-accordion-btn-icon-transition); -} -@media (prefers-reduced-motion: reduce) { - .accordion-button::after { - transition: none; - } -} -.accordion-button:hover { - z-index: 2; -} -.accordion-button:focus { - z-index: 3; - border-color: var(--bs-accordion-btn-focus-border-color); - outline: 0; - box-shadow: var(--bs-accordion-btn-focus-box-shadow); -} - -.accordion-header { - margin-bottom: 0; -} - -.accordion-item { - color: var(--bs-accordion-color); - background-color: var(--bs-accordion-bg); - border: var(--bs-accordion-border-width) solid var(--bs-accordion-border-color); -} -.accordion-item:first-of-type { - border-top-left-radius: var(--bs-accordion-border-radius); - border-top-right-radius: var(--bs-accordion-border-radius); -} -.accordion-item:first-of-type .accordion-button { - border-top-left-radius: var(--bs-accordion-inner-border-radius); - border-top-right-radius: var(--bs-accordion-inner-border-radius); -} -.accordion-item:not(:first-of-type) { - border-top: 0; -} -.accordion-item:last-of-type { - border-bottom-right-radius: var(--bs-accordion-border-radius); - border-bottom-left-radius: var(--bs-accordion-border-radius); -} -.accordion-item:last-of-type .accordion-button.collapsed { - border-bottom-right-radius: var(--bs-accordion-inner-border-radius); - border-bottom-left-radius: var(--bs-accordion-inner-border-radius); -} -.accordion-item:last-of-type .accordion-collapse { - border-bottom-right-radius: var(--bs-accordion-border-radius); - border-bottom-left-radius: var(--bs-accordion-border-radius); -} - -.accordion-body { - padding: var(--bs-accordion-body-padding-y) var(--bs-accordion-body-padding-x); -} - -.accordion-flush .accordion-collapse { - border-width: 0; -} -.accordion-flush .accordion-item { - border-right: 0; - border-left: 0; - border-radius: 0; -} -.accordion-flush .accordion-item:first-child { - border-top: 0; -} -.accordion-flush .accordion-item:last-child { - border-bottom: 0; -} -.accordion-flush .accordion-item .accordion-button, 
.accordion-flush .accordion-item .accordion-button.collapsed { - border-radius: 0; -} - -.breadcrumb { - --bs-breadcrumb-padding-x: 0; - --bs-breadcrumb-padding-y: 0; - --bs-breadcrumb-margin-bottom: 1rem; - --bs-breadcrumb-bg: ; - --bs-breadcrumb-border-radius: ; - --bs-breadcrumb-divider-color: #6c757d; - --bs-breadcrumb-item-padding-x: 0.5rem; - --bs-breadcrumb-item-active-color: #6c757d; - display: flex; - flex-wrap: wrap; - padding: var(--bs-breadcrumb-padding-y) var(--bs-breadcrumb-padding-x); - margin-bottom: var(--bs-breadcrumb-margin-bottom); - font-size: var(--bs-breadcrumb-font-size); - list-style: none; - background-color: var(--bs-breadcrumb-bg); - border-radius: var(--bs-breadcrumb-border-radius); -} - -.breadcrumb-item + .breadcrumb-item { - padding-left: var(--bs-breadcrumb-item-padding-x); -} -.breadcrumb-item + .breadcrumb-item::before { - float: left; - padding-right: var(--bs-breadcrumb-item-padding-x); - color: var(--bs-breadcrumb-divider-color); - content: var(--bs-breadcrumb-divider, "/") /* rtl: var(--bs-breadcrumb-divider, "/") */; -} -.breadcrumb-item.active { - color: var(--bs-breadcrumb-item-active-color); -} - -.pagination { - --bs-pagination-padding-x: 0.75rem; - --bs-pagination-padding-y: 0.375rem; - --bs-pagination-font-size: 1rem; - --bs-pagination-color: var(--bs-link-color); - --bs-pagination-bg: #fff; - --bs-pagination-border-width: 0.125rem; - --bs-pagination-border-color: #dee2e6; - --bs-pagination-border-radius: 0.5rem; - --bs-pagination-hover-color: var(--bs-link-hover-color); - --bs-pagination-hover-bg: #e9ecef; - --bs-pagination-hover-border-color: #dee2e6; - --bs-pagination-focus-color: var(--bs-link-hover-color); - --bs-pagination-focus-bg: #e9ecef; - --bs-pagination-focus-box-shadow: 0 0 0 0.25rem rgba(26, 188, 156, 0.25); - --bs-pagination-active-color: #fff; - --bs-pagination-active-bg: #1abc9c; - --bs-pagination-active-border-color: #1abc9c; - --bs-pagination-disabled-color: #6c757d; - --bs-pagination-disabled-bg: #fff; - --bs-pagination-disabled-border-color: #dee2e6; - display: flex; - padding-left: 0; - list-style: none; -} - -.page-link { - position: relative; - display: block; - padding: var(--bs-pagination-padding-y) var(--bs-pagination-padding-x); - font-size: var(--bs-pagination-font-size); - color: var(--bs-pagination-color); - text-decoration: none; - background-color: var(--bs-pagination-bg); - border: var(--bs-pagination-border-width) solid var(--bs-pagination-border-color); - transition: color 0.15s ease-in-out, background-color 0.15s ease-in-out, border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out; -} -@media (prefers-reduced-motion: reduce) { - .page-link { - transition: none; - } -} -.page-link:hover { - z-index: 2; - color: var(--bs-pagination-hover-color); - background-color: var(--bs-pagination-hover-bg); - border-color: var(--bs-pagination-hover-border-color); -} -.page-link:focus { - z-index: 3; - color: var(--bs-pagination-focus-color); - background-color: var(--bs-pagination-focus-bg); - outline: 0; - box-shadow: var(--bs-pagination-focus-box-shadow); -} -.page-link.active, .active > .page-link { - z-index: 3; - color: var(--bs-pagination-active-color); - background-color: var(--bs-pagination-active-bg); - border-color: var(--bs-pagination-active-border-color); -} -.page-link.disabled, .disabled > .page-link { - color: var(--bs-pagination-disabled-color); - pointer-events: none; - background-color: var(--bs-pagination-disabled-bg); - border-color: var(--bs-pagination-disabled-border-color); -} - 
-.page-item:not(:first-child) .page-link { - margin-left: -0.125rem; -} -.page-item:first-child .page-link { - border-top-left-radius: var(--bs-pagination-border-radius); - border-bottom-left-radius: var(--bs-pagination-border-radius); -} -.page-item:last-child .page-link { - border-top-right-radius: var(--bs-pagination-border-radius); - border-bottom-right-radius: var(--bs-pagination-border-radius); -} - -.pagination-lg { - --bs-pagination-padding-x: 1.5rem; - --bs-pagination-padding-y: 0.75rem; - --bs-pagination-font-size: 1.25rem; - --bs-pagination-border-radius: 0.75rem; -} - -.pagination-sm { - --bs-pagination-padding-x: 0.5rem; - --bs-pagination-padding-y: 0.25rem; - --bs-pagination-font-size: 0.875rem; - --bs-pagination-border-radius: 0.25rem; -} - -.badge { - --bs-badge-padding-x: 0.65em; - --bs-badge-padding-y: 0.35em; - --bs-badge-font-size: 0.75em; - --bs-badge-font-weight: 700; - --bs-badge-color: #fff; - --bs-badge-border-radius: 0.5rem; - display: inline-block; - padding: var(--bs-badge-padding-y) var(--bs-badge-padding-x); - font-size: var(--bs-badge-font-size); - font-weight: var(--bs-badge-font-weight); - line-height: 1; - color: var(--bs-badge-color); - text-align: center; - white-space: nowrap; - vertical-align: baseline; - border-radius: var(--bs-badge-border-radius); -} -.badge:empty { - display: none; -} - -.btn .badge { - position: relative; - top: -1px; -} - -.alert { - --bs-alert-bg: transparent; - --bs-alert-padding-x: 1rem; - --bs-alert-padding-y: 1rem; - --bs-alert-margin-bottom: 1rem; - --bs-alert-color: inherit; - --bs-alert-border-color: transparent; - --bs-alert-border: 0.125rem solid var(--bs-alert-border-color); - --bs-alert-border-radius: 0.5rem; - position: relative; - padding: var(--bs-alert-padding-y) var(--bs-alert-padding-x); - margin-bottom: var(--bs-alert-margin-bottom); - color: var(--bs-alert-color); - background-color: var(--bs-alert-bg); - border: var(--bs-alert-border); - border-radius: var(--bs-alert-border-radius); -} - -.alert-heading { - color: inherit; -} - -.alert-link { - font-weight: 700; -} - -.alert-dismissible { - padding-right: 3rem; -} -.alert-dismissible .btn-close { - position: absolute; - top: 0; - right: 0; - z-index: 2; - padding: 1.25rem 1rem; -} - -.alert-primary { - --bs-alert-color: #10715e; - --bs-alert-bg: #d1f2eb; - --bs-alert-border-color: #baebe1; -} -.alert-primary .alert-link { - color: #0d5a4b; -} - -.alert-secondary { - --bs-alert-color: #1a2530; - --bs-alert-bg: #d5d8dc; - --bs-alert-border-color: #c0c5cb; -} -.alert-secondary .alert-link { - color: #151e26; -} - -.alert-success { - --bs-alert-color: #0f5132; - --bs-alert-bg: #d1e7dd; - --bs-alert-border-color: #badbcc; -} -.alert-success .alert-link { - color: #0c4128; -} - -.alert-info { - --bs-alert-color: #087990; - --bs-alert-bg: #cff4fc; - --bs-alert-border-color: #b6effb; -} -.alert-info .alert-link { - color: #066173; -} - -.alert-warning { - --bs-alert-color: #997404; - --bs-alert-bg: #fff3cd; - --bs-alert-border-color: #ffecb5; -} -.alert-warning .alert-link { - color: #7a5d03; -} - -.alert-danger { - --bs-alert-color: #842029; - --bs-alert-bg: #f8d7da; - --bs-alert-border-color: #f5c2c7; -} -.alert-danger .alert-link { - color: #6a1a21; -} - -.alert-light { - --bs-alert-color: #959596; - --bs-alert-bg: #fefefe; - --bs-alert-border-color: #fdfdfe; -} -.alert-light .alert-link { - color: #777778; -} - -.alert-dark { - --bs-alert-color: #141619; - --bs-alert-bg: #d3d3d4; - --bs-alert-border-color: #bcbebf; -} -.alert-dark .alert-link { - color: #101214; 
-} - -@keyframes progress-bar-stripes { - 0% { - background-position-x: 1rem; - } -} -.progress { - --bs-progress-height: 1rem; - --bs-progress-font-size: 0.75rem; - --bs-progress-bg: #e9ecef; - --bs-progress-border-radius: 0.5rem; - --bs-progress-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.075); - --bs-progress-bar-color: #fff; - --bs-progress-bar-bg: #1abc9c; - --bs-progress-bar-transition: width 0.6s ease; - display: flex; - height: var(--bs-progress-height); - overflow: hidden; - font-size: var(--bs-progress-font-size); - background-color: var(--bs-progress-bg); - border-radius: var(--bs-progress-border-radius); -} - -.progress-bar { - display: flex; - flex-direction: column; - justify-content: center; - overflow: hidden; - color: var(--bs-progress-bar-color); - text-align: center; - white-space: nowrap; - background-color: var(--bs-progress-bar-bg); - transition: var(--bs-progress-bar-transition); -} -@media (prefers-reduced-motion: reduce) { - .progress-bar { - transition: none; - } -} - -.progress-bar-striped { - background-image: linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); - background-size: var(--bs-progress-height) var(--bs-progress-height); -} - -.progress-bar-animated { - animation: 1s linear infinite progress-bar-stripes; -} -@media (prefers-reduced-motion: reduce) { - .progress-bar-animated { - animation: none; - } -} - -.list-group { - --bs-list-group-color: #212529; - --bs-list-group-bg: #fff; - --bs-list-group-border-color: rgba(0, 0, 0, 0.125); - --bs-list-group-border-width: 0.125rem; - --bs-list-group-border-radius: 0.5rem; - --bs-list-group-item-padding-x: 1rem; - --bs-list-group-item-padding-y: 0.5rem; - --bs-list-group-action-color: #495057; - --bs-list-group-action-hover-color: #495057; - --bs-list-group-action-hover-bg: #f8f9fa; - --bs-list-group-action-active-color: #212529; - --bs-list-group-action-active-bg: #e9ecef; - --bs-list-group-disabled-color: #6c757d; - --bs-list-group-disabled-bg: #fff; - --bs-list-group-active-color: #fff; - --bs-list-group-active-bg: #1abc9c; - --bs-list-group-active-border-color: #1abc9c; - display: flex; - flex-direction: column; - padding-left: 0; - margin-bottom: 0; - border-radius: var(--bs-list-group-border-radius); -} - -.list-group-numbered { - list-style-type: none; - counter-reset: section; -} -.list-group-numbered > .list-group-item::before { - content: counters(section, ".") ". 
"; - counter-increment: section; -} - -.list-group-item-action { - width: 100%; - color: var(--bs-list-group-action-color); - text-align: inherit; -} -.list-group-item-action:hover, .list-group-item-action:focus { - z-index: 1; - color: var(--bs-list-group-action-hover-color); - text-decoration: none; - background-color: var(--bs-list-group-action-hover-bg); -} -.list-group-item-action:active { - color: var(--bs-list-group-action-active-color); - background-color: var(--bs-list-group-action-active-bg); -} - -.list-group-item { - position: relative; - display: block; - padding: var(--bs-list-group-item-padding-y) var(--bs-list-group-item-padding-x); - color: var(--bs-list-group-color); - text-decoration: none; - background-color: var(--bs-list-group-bg); - border: var(--bs-list-group-border-width) solid var(--bs-list-group-border-color); -} -.list-group-item:first-child { - border-top-left-radius: inherit; - border-top-right-radius: inherit; -} -.list-group-item:last-child { - border-bottom-right-radius: inherit; - border-bottom-left-radius: inherit; -} -.list-group-item.disabled, .list-group-item:disabled { - color: var(--bs-list-group-disabled-color); - pointer-events: none; - background-color: var(--bs-list-group-disabled-bg); -} -.list-group-item.active { - z-index: 2; - color: var(--bs-list-group-active-color); - background-color: var(--bs-list-group-active-bg); - border-color: var(--bs-list-group-active-border-color); -} -.list-group-item + .list-group-item { - border-top-width: 0; -} -.list-group-item + .list-group-item.active { - margin-top: calc(-1 * var(--bs-list-group-border-width)); - border-top-width: var(--bs-list-group-border-width); -} - -.list-group-horizontal { - flex-direction: row; -} -.list-group-horizontal > .list-group-item:first-child:not(:last-child) { - border-bottom-left-radius: var(--bs-list-group-border-radius); - border-top-right-radius: 0; -} -.list-group-horizontal > .list-group-item:last-child:not(:first-child) { - border-top-right-radius: var(--bs-list-group-border-radius); - border-bottom-left-radius: 0; -} -.list-group-horizontal > .list-group-item.active { - margin-top: 0; -} -.list-group-horizontal > .list-group-item + .list-group-item { - border-top-width: var(--bs-list-group-border-width); - border-left-width: 0; -} -.list-group-horizontal > .list-group-item + .list-group-item.active { - margin-left: calc(-1 * var(--bs-list-group-border-width)); - border-left-width: var(--bs-list-group-border-width); -} - -@media (min-width: 576px) { - .list-group-horizontal-sm { - flex-direction: row; - } - .list-group-horizontal-sm > .list-group-item:first-child:not(:last-child) { - border-bottom-left-radius: var(--bs-list-group-border-radius); - border-top-right-radius: 0; - } - .list-group-horizontal-sm > .list-group-item:last-child:not(:first-child) { - border-top-right-radius: var(--bs-list-group-border-radius); - border-bottom-left-radius: 0; - } - .list-group-horizontal-sm > .list-group-item.active { - margin-top: 0; - } - .list-group-horizontal-sm > .list-group-item + .list-group-item { - border-top-width: var(--bs-list-group-border-width); - border-left-width: 0; - } - .list-group-horizontal-sm > .list-group-item + .list-group-item.active { - margin-left: calc(-1 * var(--bs-list-group-border-width)); - border-left-width: var(--bs-list-group-border-width); - } -} -@media (min-width: 768px) { - .list-group-horizontal-md { - flex-direction: row; - } - .list-group-horizontal-md > .list-group-item:first-child:not(:last-child) { - border-bottom-left-radius: 
var(--bs-list-group-border-radius); - border-top-right-radius: 0; - } - .list-group-horizontal-md > .list-group-item:last-child:not(:first-child) { - border-top-right-radius: var(--bs-list-group-border-radius); - border-bottom-left-radius: 0; - } - .list-group-horizontal-md > .list-group-item.active { - margin-top: 0; - } - .list-group-horizontal-md > .list-group-item + .list-group-item { - border-top-width: var(--bs-list-group-border-width); - border-left-width: 0; - } - .list-group-horizontal-md > .list-group-item + .list-group-item.active { - margin-left: calc(-1 * var(--bs-list-group-border-width)); - border-left-width: var(--bs-list-group-border-width); - } -} -@media (min-width: 992px) { - .list-group-horizontal-lg { - flex-direction: row; - } - .list-group-horizontal-lg > .list-group-item:first-child:not(:last-child) { - border-bottom-left-radius: var(--bs-list-group-border-radius); - border-top-right-radius: 0; - } - .list-group-horizontal-lg > .list-group-item:last-child:not(:first-child) { - border-top-right-radius: var(--bs-list-group-border-radius); - border-bottom-left-radius: 0; - } - .list-group-horizontal-lg > .list-group-item.active { - margin-top: 0; - } - .list-group-horizontal-lg > .list-group-item + .list-group-item { - border-top-width: var(--bs-list-group-border-width); - border-left-width: 0; - } - .list-group-horizontal-lg > .list-group-item + .list-group-item.active { - margin-left: calc(-1 * var(--bs-list-group-border-width)); - border-left-width: var(--bs-list-group-border-width); - } -} -@media (min-width: 1200px) { - .list-group-horizontal-xl { - flex-direction: row; - } - .list-group-horizontal-xl > .list-group-item:first-child:not(:last-child) { - border-bottom-left-radius: var(--bs-list-group-border-radius); - border-top-right-radius: 0; - } - .list-group-horizontal-xl > .list-group-item:last-child:not(:first-child) { - border-top-right-radius: var(--bs-list-group-border-radius); - border-bottom-left-radius: 0; - } - .list-group-horizontal-xl > .list-group-item.active { - margin-top: 0; - } - .list-group-horizontal-xl > .list-group-item + .list-group-item { - border-top-width: var(--bs-list-group-border-width); - border-left-width: 0; - } - .list-group-horizontal-xl > .list-group-item + .list-group-item.active { - margin-left: calc(-1 * var(--bs-list-group-border-width)); - border-left-width: var(--bs-list-group-border-width); - } -} -@media (min-width: 1400px) { - .list-group-horizontal-xxl { - flex-direction: row; - } - .list-group-horizontal-xxl > .list-group-item:first-child:not(:last-child) { - border-bottom-left-radius: var(--bs-list-group-border-radius); - border-top-right-radius: 0; - } - .list-group-horizontal-xxl > .list-group-item:last-child:not(:first-child) { - border-top-right-radius: var(--bs-list-group-border-radius); - border-bottom-left-radius: 0; - } - .list-group-horizontal-xxl > .list-group-item.active { - margin-top: 0; - } - .list-group-horizontal-xxl > .list-group-item + .list-group-item { - border-top-width: var(--bs-list-group-border-width); - border-left-width: 0; - } - .list-group-horizontal-xxl > .list-group-item + .list-group-item.active { - margin-left: calc(-1 * var(--bs-list-group-border-width)); - border-left-width: var(--bs-list-group-border-width); - } -} -.list-group-flush { - border-radius: 0; -} -.list-group-flush > .list-group-item { - border-width: 0 0 var(--bs-list-group-border-width); -} -.list-group-flush > .list-group-item:last-child { - border-bottom-width: 0; -} - -.list-group-item-primary { - color: #10715e; - 
background-color: #d1f2eb; -} -.list-group-item-primary.list-group-item-action:hover, .list-group-item-primary.list-group-item-action:focus { - color: #10715e; - background-color: #bcdad4; -} -.list-group-item-primary.list-group-item-action.active { - color: #fff; - background-color: #10715e; - border-color: #10715e; -} - -.list-group-item-secondary { - color: #1a2530; - background-color: #d5d8dc; -} -.list-group-item-secondary.list-group-item-action:hover, .list-group-item-secondary.list-group-item-action:focus { - color: #1a2530; - background-color: #c0c2c6; -} -.list-group-item-secondary.list-group-item-action.active { - color: #fff; - background-color: #1a2530; - border-color: #1a2530; -} - -.list-group-item-success { - color: #0f5132; - background-color: #d1e7dd; -} -.list-group-item-success.list-group-item-action:hover, .list-group-item-success.list-group-item-action:focus { - color: #0f5132; - background-color: #bcd0c7; -} -.list-group-item-success.list-group-item-action.active { - color: #fff; - background-color: #0f5132; - border-color: #0f5132; -} - -.list-group-item-info { - color: #087990; - background-color: #cff4fc; -} -.list-group-item-info.list-group-item-action:hover, .list-group-item-info.list-group-item-action:focus { - color: #087990; - background-color: #badce3; -} -.list-group-item-info.list-group-item-action.active { - color: #fff; - background-color: #087990; - border-color: #087990; -} - -.list-group-item-warning { - color: #997404; - background-color: #fff3cd; -} -.list-group-item-warning.list-group-item-action:hover, .list-group-item-warning.list-group-item-action:focus { - color: #997404; - background-color: #e6dbb9; -} -.list-group-item-warning.list-group-item-action.active { - color: #fff; - background-color: #997404; - border-color: #997404; -} - -.list-group-item-danger { - color: #842029; - background-color: #f8d7da; -} -.list-group-item-danger.list-group-item-action:hover, .list-group-item-danger.list-group-item-action:focus { - color: #842029; - background-color: #dfc2c4; -} -.list-group-item-danger.list-group-item-action.active { - color: #fff; - background-color: #842029; - border-color: #842029; -} - -.list-group-item-light { - color: #959596; - background-color: #fefefe; -} -.list-group-item-light.list-group-item-action:hover, .list-group-item-light.list-group-item-action:focus { - color: #959596; - background-color: #e5e5e5; -} -.list-group-item-light.list-group-item-action.active { - color: #fff; - background-color: #959596; - border-color: #959596; -} - -.list-group-item-dark { - color: #141619; - background-color: #d3d3d4; -} -.list-group-item-dark.list-group-item-action:hover, .list-group-item-dark.list-group-item-action:focus { - color: #141619; - background-color: #bebebf; -} -.list-group-item-dark.list-group-item-action.active { - color: #fff; - background-color: #141619; - border-color: #141619; -} - -.btn-close { - box-sizing: content-box; - width: 1em; - height: 1em; - padding: 0.25em 0.25em; - color: #000; - background: transparent url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23000'%3e%3cpath d='M.293.293a1 1 0 0 1 1.414 0L8 6.586 14.293.293a1 1 0 1 1 1.414 1.414L9.414 8l6.293 6.293a1 1 0 0 1-1.414 1.414L8 9.414l-6.293 6.293a1 1 0 0 1-1.414-1.414L6.586 8 .293 1.707a1 1 0 0 1 0-1.414z'/%3e%3c/svg%3e") center/1em auto no-repeat; - border: 0; - border-radius: 0.5rem; - opacity: 0.5; -} -.btn-close:hover { - color: #000; - text-decoration: none; - opacity: 0.75; -} -.btn-close:focus { - outline: 
0; - box-shadow: 0 0 0 0.25rem rgba(26, 188, 156, 0.25); - opacity: 1; -} -.btn-close:disabled, .btn-close.disabled { - pointer-events: none; - -webkit-user-select: none; - -moz-user-select: none; - user-select: none; - opacity: 0.25; -} - -.btn-close-white { - filter: invert(1) grayscale(100%) brightness(200%); -} - -.toast { - --bs-toast-zindex: 1090; - --bs-toast-padding-x: 0.75rem; - --bs-toast-padding-y: 0.5rem; - --bs-toast-spacing: 1.5rem; - --bs-toast-max-width: 350px; - --bs-toast-font-size: 0.875rem; - --bs-toast-color: ; - --bs-toast-bg: rgba(255, 255, 255, 0.85); - --bs-toast-border-width: 0.125rem; - --bs-toast-border-color: var(--bs-border-color-translucent); - --bs-toast-border-radius: 0.5rem; - --bs-toast-box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15); - --bs-toast-header-color: #6c757d; - --bs-toast-header-bg: rgba(255, 255, 255, 0.85); - --bs-toast-header-border-color: rgba(0, 0, 0, 0.05); - width: var(--bs-toast-max-width); - max-width: 100%; - font-size: var(--bs-toast-font-size); - color: var(--bs-toast-color); - pointer-events: auto; - background-color: var(--bs-toast-bg); - background-clip: padding-box; - border: var(--bs-toast-border-width) solid var(--bs-toast-border-color); - box-shadow: var(--bs-toast-box-shadow); - border-radius: var(--bs-toast-border-radius); -} -.toast.showing { - opacity: 0; -} -.toast:not(.show) { - display: none; -} - -.toast-container { - --bs-toast-zindex: 1090; - position: absolute; - z-index: var(--bs-toast-zindex); - width: -moz-max-content; - width: max-content; - max-width: 100%; - pointer-events: none; -} -.toast-container > :not(:last-child) { - margin-bottom: var(--bs-toast-spacing); -} - -.toast-header { - display: flex; - align-items: center; - padding: var(--bs-toast-padding-y) var(--bs-toast-padding-x); - color: var(--bs-toast-header-color); - background-color: var(--bs-toast-header-bg); - background-clip: padding-box; - border-bottom: var(--bs-toast-border-width) solid var(--bs-toast-header-border-color); - border-top-left-radius: calc(var(--bs-toast-border-radius) - var(--bs-toast-border-width)); - border-top-right-radius: calc(var(--bs-toast-border-radius) - var(--bs-toast-border-width)); -} -.toast-header .btn-close { - margin-right: calc(-0.5 * var(--bs-toast-padding-x)); - margin-left: var(--bs-toast-padding-x); -} - -.toast-body { - padding: var(--bs-toast-padding-x); - word-wrap: break-word; -} - -.modal { - --bs-modal-zindex: 1055; - --bs-modal-width: 500px; - --bs-modal-padding: 1rem; - --bs-modal-margin: 0.5rem; - --bs-modal-color: ; - --bs-modal-bg: #fff; - --bs-modal-border-color: var(--bs-border-color-translucent); - --bs-modal-border-width: 0.125rem; - --bs-modal-border-radius: 0.75rem; - --bs-modal-box-shadow: 0 0.125rem 0.25rem rgba(0, 0, 0, 0.075); - --bs-modal-inner-border-radius: 0.625rem; - --bs-modal-header-padding-x: 1rem; - --bs-modal-header-padding-y: 1rem; - --bs-modal-header-padding: 1rem 1rem; - --bs-modal-header-border-color: var(--bs-border-color); - --bs-modal-header-border-width: 0.125rem; - --bs-modal-title-line-height: 1.5; - --bs-modal-footer-gap: 0.5rem; - --bs-modal-footer-bg: ; - --bs-modal-footer-border-color: var(--bs-border-color); - --bs-modal-footer-border-width: 0.125rem; - position: fixed; - top: 0; - left: 0; - z-index: var(--bs-modal-zindex); - display: none; - width: 100%; - height: 100%; - overflow-x: hidden; - overflow-y: auto; - outline: 0; -} - -.modal-dialog { - position: relative; - width: auto; - margin: var(--bs-modal-margin); - pointer-events: none; -} -.modal.fade 
.modal-dialog { - transition: transform 0.3s ease-out; - transform: translate(0, -50px); -} -@media (prefers-reduced-motion: reduce) { - .modal.fade .modal-dialog { - transition: none; - } -} -.modal.show .modal-dialog { - transform: none; -} -.modal.modal-static .modal-dialog { - transform: scale(1.02); -} - -.modal-dialog-scrollable { - height: calc(100% - var(--bs-modal-margin) * 2); -} -.modal-dialog-scrollable .modal-content { - max-height: 100%; - overflow: hidden; -} -.modal-dialog-scrollable .modal-body { - overflow-y: auto; -} - -.modal-dialog-centered { - display: flex; - align-items: center; - min-height: calc(100% - var(--bs-modal-margin) * 2); -} - -.modal-content { - position: relative; - display: flex; - flex-direction: column; - width: 100%; - color: var(--bs-modal-color); - pointer-events: auto; - background-color: var(--bs-modal-bg); - background-clip: padding-box; - border: var(--bs-modal-border-width) solid var(--bs-modal-border-color); - border-radius: var(--bs-modal-border-radius); - outline: 0; -} - -.modal-backdrop { - --bs-backdrop-zindex: 1050; - --bs-backdrop-bg: #000; - --bs-backdrop-opacity: 0.5; - position: fixed; - top: 0; - left: 0; - z-index: var(--bs-backdrop-zindex); - width: 100vw; - height: 100vh; - background-color: var(--bs-backdrop-bg); -} -.modal-backdrop.fade { - opacity: 0; -} -.modal-backdrop.show { - opacity: var(--bs-backdrop-opacity); -} - -.modal-header { - display: flex; - flex-shrink: 0; - align-items: center; - justify-content: space-between; - padding: var(--bs-modal-header-padding); - border-bottom: var(--bs-modal-header-border-width) solid var(--bs-modal-header-border-color); - border-top-left-radius: var(--bs-modal-inner-border-radius); - border-top-right-radius: var(--bs-modal-inner-border-radius); -} -.modal-header .btn-close { - padding: calc(var(--bs-modal-header-padding-y) * 0.5) calc(var(--bs-modal-header-padding-x) * 0.5); - margin: calc(-0.5 * var(--bs-modal-header-padding-y)) calc(-0.5 * var(--bs-modal-header-padding-x)) calc(-0.5 * var(--bs-modal-header-padding-y)) auto; -} - -.modal-title { - margin-bottom: 0; - line-height: var(--bs-modal-title-line-height); -} - -.modal-body { - position: relative; - flex: 1 1 auto; - padding: var(--bs-modal-padding); -} - -.modal-footer { - display: flex; - flex-shrink: 0; - flex-wrap: wrap; - align-items: center; - justify-content: flex-end; - padding: calc(var(--bs-modal-padding) - var(--bs-modal-footer-gap) * 0.5); - background-color: var(--bs-modal-footer-bg); - border-top: var(--bs-modal-footer-border-width) solid var(--bs-modal-footer-border-color); - border-bottom-right-radius: var(--bs-modal-inner-border-radius); - border-bottom-left-radius: var(--bs-modal-inner-border-radius); -} -.modal-footer > * { - margin: calc(var(--bs-modal-footer-gap) * 0.5); -} - -@media (min-width: 576px) { - .modal { - --bs-modal-margin: 1.75rem; - --bs-modal-box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15); - } - .modal-dialog { - max-width: var(--bs-modal-width); - margin-right: auto; - margin-left: auto; - } - .modal-sm { - --bs-modal-width: 300px; - } -} -@media (min-width: 992px) { - .modal-lg, - .modal-xl { - --bs-modal-width: 800px; - } -} -@media (min-width: 1200px) { - .modal-xl { - --bs-modal-width: 1140px; - } -} -.modal-fullscreen { - width: 100vw; - max-width: none; - height: 100%; - margin: 0; -} -.modal-fullscreen .modal-content { - height: 100%; - border: 0; - border-radius: 0; -} -.modal-fullscreen .modal-header, -.modal-fullscreen .modal-footer { - border-radius: 0; -} -.modal-fullscreen 
.modal-body { - overflow-y: auto; -} - -@media (max-width: 575.98px) { - .modal-fullscreen-sm-down { - width: 100vw; - max-width: none; - height: 100%; - margin: 0; - } - .modal-fullscreen-sm-down .modal-content { - height: 100%; - border: 0; - border-radius: 0; - } - .modal-fullscreen-sm-down .modal-header, - .modal-fullscreen-sm-down .modal-footer { - border-radius: 0; - } - .modal-fullscreen-sm-down .modal-body { - overflow-y: auto; - } -} -@media (max-width: 767.98px) { - .modal-fullscreen-md-down { - width: 100vw; - max-width: none; - height: 100%; - margin: 0; - } - .modal-fullscreen-md-down .modal-content { - height: 100%; - border: 0; - border-radius: 0; - } - .modal-fullscreen-md-down .modal-header, - .modal-fullscreen-md-down .modal-footer { - border-radius: 0; - } - .modal-fullscreen-md-down .modal-body { - overflow-y: auto; - } -} -@media (max-width: 991.98px) { - .modal-fullscreen-lg-down { - width: 100vw; - max-width: none; - height: 100%; - margin: 0; - } - .modal-fullscreen-lg-down .modal-content { - height: 100%; - border: 0; - border-radius: 0; - } - .modal-fullscreen-lg-down .modal-header, - .modal-fullscreen-lg-down .modal-footer { - border-radius: 0; - } - .modal-fullscreen-lg-down .modal-body { - overflow-y: auto; - } -} -@media (max-width: 1199.98px) { - .modal-fullscreen-xl-down { - width: 100vw; - max-width: none; - height: 100%; - margin: 0; - } - .modal-fullscreen-xl-down .modal-content { - height: 100%; - border: 0; - border-radius: 0; - } - .modal-fullscreen-xl-down .modal-header, - .modal-fullscreen-xl-down .modal-footer { - border-radius: 0; - } - .modal-fullscreen-xl-down .modal-body { - overflow-y: auto; - } -} -@media (max-width: 1399.98px) { - .modal-fullscreen-xxl-down { - width: 100vw; - max-width: none; - height: 100%; - margin: 0; - } - .modal-fullscreen-xxl-down .modal-content { - height: 100%; - border: 0; - border-radius: 0; - } - .modal-fullscreen-xxl-down .modal-header, - .modal-fullscreen-xxl-down .modal-footer { - border-radius: 0; - } - .modal-fullscreen-xxl-down .modal-body { - overflow-y: auto; - } -} -.tooltip { - --bs-tooltip-zindex: 1080; - --bs-tooltip-max-width: 200px; - --bs-tooltip-padding-x: 0.5rem; - --bs-tooltip-padding-y: 0.25rem; - --bs-tooltip-margin: ; - --bs-tooltip-font-size: 0.875rem; - --bs-tooltip-color: #fff; - --bs-tooltip-bg: #000; - --bs-tooltip-border-radius: 0.5rem; - --bs-tooltip-opacity: 0.9; - --bs-tooltip-arrow-width: 0.8rem; - --bs-tooltip-arrow-height: 0.4rem; - z-index: var(--bs-tooltip-zindex); - display: block; - padding: var(--bs-tooltip-arrow-height); - margin: var(--bs-tooltip-margin); - font-family: var(--bs-font-sans-serif); - font-style: normal; - font-weight: 400; - line-height: 1.5; - text-align: left; - text-align: start; - text-decoration: none; - text-shadow: none; - text-transform: none; - letter-spacing: normal; - word-break: normal; - white-space: normal; - word-spacing: normal; - line-break: auto; - font-size: var(--bs-tooltip-font-size); - word-wrap: break-word; - opacity: 0; -} -.tooltip.show { - opacity: var(--bs-tooltip-opacity); -} -.tooltip .tooltip-arrow { - display: block; - width: var(--bs-tooltip-arrow-width); - height: var(--bs-tooltip-arrow-height); -} -.tooltip .tooltip-arrow::before { - position: absolute; - content: ""; - border-color: transparent; - border-style: solid; -} - -.bs-tooltip-top .tooltip-arrow, .bs-tooltip-auto[data-popper-placement^=top] .tooltip-arrow { - bottom: 0; -} -.bs-tooltip-top .tooltip-arrow::before, .bs-tooltip-auto[data-popper-placement^=top] 
.tooltip-arrow::before { - top: -1px; - border-width: var(--bs-tooltip-arrow-height) calc(var(--bs-tooltip-arrow-width) * 0.5) 0; - border-top-color: var(--bs-tooltip-bg); -} - -/* rtl:begin:ignore */ -.bs-tooltip-end .tooltip-arrow, .bs-tooltip-auto[data-popper-placement^=right] .tooltip-arrow { - left: 0; - width: var(--bs-tooltip-arrow-height); - height: var(--bs-tooltip-arrow-width); -} -.bs-tooltip-end .tooltip-arrow::before, .bs-tooltip-auto[data-popper-placement^=right] .tooltip-arrow::before { - right: -1px; - border-width: calc(var(--bs-tooltip-arrow-width) * 0.5) var(--bs-tooltip-arrow-height) calc(var(--bs-tooltip-arrow-width) * 0.5) 0; - border-right-color: var(--bs-tooltip-bg); -} - -/* rtl:end:ignore */ -.bs-tooltip-bottom .tooltip-arrow, .bs-tooltip-auto[data-popper-placement^=bottom] .tooltip-arrow { - top: 0; -} -.bs-tooltip-bottom .tooltip-arrow::before, .bs-tooltip-auto[data-popper-placement^=bottom] .tooltip-arrow::before { - bottom: -1px; - border-width: 0 calc(var(--bs-tooltip-arrow-width) * 0.5) var(--bs-tooltip-arrow-height); - border-bottom-color: var(--bs-tooltip-bg); -} - -/* rtl:begin:ignore */ -.bs-tooltip-start .tooltip-arrow, .bs-tooltip-auto[data-popper-placement^=left] .tooltip-arrow { - right: 0; - width: var(--bs-tooltip-arrow-height); - height: var(--bs-tooltip-arrow-width); -} -.bs-tooltip-start .tooltip-arrow::before, .bs-tooltip-auto[data-popper-placement^=left] .tooltip-arrow::before { - left: -1px; - border-width: calc(var(--bs-tooltip-arrow-width) * 0.5) 0 calc(var(--bs-tooltip-arrow-width) * 0.5) var(--bs-tooltip-arrow-height); - border-left-color: var(--bs-tooltip-bg); -} - -/* rtl:end:ignore */ -.tooltip-inner { - max-width: var(--bs-tooltip-max-width); - padding: var(--bs-tooltip-padding-y) var(--bs-tooltip-padding-x); - color: var(--bs-tooltip-color); - text-align: center; - background-color: var(--bs-tooltip-bg); - border-radius: var(--bs-tooltip-border-radius); -} - -.popover { - --bs-popover-zindex: 1070; - --bs-popover-max-width: 276px; - --bs-popover-font-size: 0.875rem; - --bs-popover-bg: #fff; - --bs-popover-border-width: 0.125rem; - --bs-popover-border-color: var(--bs-border-color-translucent); - --bs-popover-border-radius: 0.75rem; - --bs-popover-inner-border-radius: 0.625rem; - --bs-popover-box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15); - --bs-popover-header-padding-x: 1rem; - --bs-popover-header-padding-y: 0.5rem; - --bs-popover-header-font-size: 1rem; - --bs-popover-header-color: ; - --bs-popover-header-bg: #f0f0f0; - --bs-popover-body-padding-x: 1rem; - --bs-popover-body-padding-y: 1rem; - --bs-popover-body-color: #212529; - --bs-popover-arrow-width: 1rem; - --bs-popover-arrow-height: 0.5rem; - --bs-popover-arrow-border: var(--bs-popover-border-color); - z-index: var(--bs-popover-zindex); - display: block; - max-width: var(--bs-popover-max-width); - font-family: var(--bs-font-sans-serif); - font-style: normal; - font-weight: 400; - line-height: 1.5; - text-align: left; - text-align: start; - text-decoration: none; - text-shadow: none; - text-transform: none; - letter-spacing: normal; - word-break: normal; - white-space: normal; - word-spacing: normal; - line-break: auto; - font-size: var(--bs-popover-font-size); - word-wrap: break-word; - background-color: var(--bs-popover-bg); - background-clip: padding-box; - border: var(--bs-popover-border-width) solid var(--bs-popover-border-color); - border-radius: var(--bs-popover-border-radius); -} -.popover .popover-arrow { - display: block; - width: var(--bs-popover-arrow-width); - 
height: var(--bs-popover-arrow-height); -} -.popover .popover-arrow::before, .popover .popover-arrow::after { - position: absolute; - display: block; - content: ""; - border-color: transparent; - border-style: solid; - border-width: 0; -} - -.bs-popover-top > .popover-arrow, .bs-popover-auto[data-popper-placement^=top] > .popover-arrow { - bottom: calc(-1 * (var(--bs-popover-arrow-height)) - var(--bs-popover-border-width)); -} -.bs-popover-top > .popover-arrow::before, .bs-popover-auto[data-popper-placement^=top] > .popover-arrow::before, .bs-popover-top > .popover-arrow::after, .bs-popover-auto[data-popper-placement^=top] > .popover-arrow::after { - border-width: var(--bs-popover-arrow-height) calc(var(--bs-popover-arrow-width) * 0.5) 0; -} -.bs-popover-top > .popover-arrow::before, .bs-popover-auto[data-popper-placement^=top] > .popover-arrow::before { - bottom: 0; - border-top-color: var(--bs-popover-arrow-border); -} -.bs-popover-top > .popover-arrow::after, .bs-popover-auto[data-popper-placement^=top] > .popover-arrow::after { - bottom: var(--bs-popover-border-width); - border-top-color: var(--bs-popover-bg); -} - -/* rtl:begin:ignore */ -.bs-popover-end > .popover-arrow, .bs-popover-auto[data-popper-placement^=right] > .popover-arrow { - left: calc(-1 * (var(--bs-popover-arrow-height)) - var(--bs-popover-border-width)); - width: var(--bs-popover-arrow-height); - height: var(--bs-popover-arrow-width); -} -.bs-popover-end > .popover-arrow::before, .bs-popover-auto[data-popper-placement^=right] > .popover-arrow::before, .bs-popover-end > .popover-arrow::after, .bs-popover-auto[data-popper-placement^=right] > .popover-arrow::after { - border-width: calc(var(--bs-popover-arrow-width) * 0.5) var(--bs-popover-arrow-height) calc(var(--bs-popover-arrow-width) * 0.5) 0; -} -.bs-popover-end > .popover-arrow::before, .bs-popover-auto[data-popper-placement^=right] > .popover-arrow::before { - left: 0; - border-right-color: var(--bs-popover-arrow-border); -} -.bs-popover-end > .popover-arrow::after, .bs-popover-auto[data-popper-placement^=right] > .popover-arrow::after { - left: var(--bs-popover-border-width); - border-right-color: var(--bs-popover-bg); -} - -/* rtl:end:ignore */ -.bs-popover-bottom > .popover-arrow, .bs-popover-auto[data-popper-placement^=bottom] > .popover-arrow { - top: calc(-1 * (var(--bs-popover-arrow-height)) - var(--bs-popover-border-width)); -} -.bs-popover-bottom > .popover-arrow::before, .bs-popover-auto[data-popper-placement^=bottom] > .popover-arrow::before, .bs-popover-bottom > .popover-arrow::after, .bs-popover-auto[data-popper-placement^=bottom] > .popover-arrow::after { - border-width: 0 calc(var(--bs-popover-arrow-width) * 0.5) var(--bs-popover-arrow-height); -} -.bs-popover-bottom > .popover-arrow::before, .bs-popover-auto[data-popper-placement^=bottom] > .popover-arrow::before { - top: 0; - border-bottom-color: var(--bs-popover-arrow-border); -} -.bs-popover-bottom > .popover-arrow::after, .bs-popover-auto[data-popper-placement^=bottom] > .popover-arrow::after { - top: var(--bs-popover-border-width); - border-bottom-color: var(--bs-popover-bg); -} -.bs-popover-bottom .popover-header::before, .bs-popover-auto[data-popper-placement^=bottom] .popover-header::before { - position: absolute; - top: 0; - left: 50%; - display: block; - width: var(--bs-popover-arrow-width); - margin-left: calc(-0.5 * var(--bs-popover-arrow-width)); - content: ""; - border-bottom: var(--bs-popover-border-width) solid var(--bs-popover-header-bg); -} - -/* rtl:begin:ignore */ 
-.bs-popover-start > .popover-arrow, .bs-popover-auto[data-popper-placement^=left] > .popover-arrow { - right: calc(-1 * (var(--bs-popover-arrow-height)) - var(--bs-popover-border-width)); - width: var(--bs-popover-arrow-height); - height: var(--bs-popover-arrow-width); -} -.bs-popover-start > .popover-arrow::before, .bs-popover-auto[data-popper-placement^=left] > .popover-arrow::before, .bs-popover-start > .popover-arrow::after, .bs-popover-auto[data-popper-placement^=left] > .popover-arrow::after { - border-width: calc(var(--bs-popover-arrow-width) * 0.5) 0 calc(var(--bs-popover-arrow-width) * 0.5) var(--bs-popover-arrow-height); -} -.bs-popover-start > .popover-arrow::before, .bs-popover-auto[data-popper-placement^=left] > .popover-arrow::before { - right: 0; - border-left-color: var(--bs-popover-arrow-border); -} -.bs-popover-start > .popover-arrow::after, .bs-popover-auto[data-popper-placement^=left] > .popover-arrow::after { - right: var(--bs-popover-border-width); - border-left-color: var(--bs-popover-bg); -} - -/* rtl:end:ignore */ -.popover-header { - padding: var(--bs-popover-header-padding-y) var(--bs-popover-header-padding-x); - margin-bottom: 0; - font-size: var(--bs-popover-header-font-size); - color: var(--bs-popover-header-color); - background-color: var(--bs-popover-header-bg); - border-bottom: var(--bs-popover-border-width) solid var(--bs-popover-border-color); - border-top-left-radius: var(--bs-popover-inner-border-radius); - border-top-right-radius: var(--bs-popover-inner-border-radius); -} -.popover-header:empty { - display: none; -} - -.popover-body { - padding: var(--bs-popover-body-padding-y) var(--bs-popover-body-padding-x); - color: var(--bs-popover-body-color); -} - -.carousel { - position: relative; -} - -.carousel.pointer-event { - touch-action: pan-y; -} - -.carousel-inner { - position: relative; - width: 100%; - overflow: hidden; -} -.carousel-inner::after { - display: block; - clear: both; - content: ""; -} - -.carousel-item { - position: relative; - display: none; - float: left; - width: 100%; - margin-right: -100%; - -webkit-backface-visibility: hidden; - backface-visibility: hidden; - transition: transform 0.6s ease-in-out; -} -@media (prefers-reduced-motion: reduce) { - .carousel-item { - transition: none; - } -} - -.carousel-item.active, -.carousel-item-next, -.carousel-item-prev { - display: block; -} - -.carousel-item-next:not(.carousel-item-start), -.active.carousel-item-end { - transform: translateX(100%); -} - -.carousel-item-prev:not(.carousel-item-end), -.active.carousel-item-start { - transform: translateX(-100%); -} - -.carousel-fade .carousel-item { - opacity: 0; - transition-property: opacity; - transform: none; -} -.carousel-fade .carousel-item.active, -.carousel-fade .carousel-item-next.carousel-item-start, -.carousel-fade .carousel-item-prev.carousel-item-end { - z-index: 1; - opacity: 1; -} -.carousel-fade .active.carousel-item-start, -.carousel-fade .active.carousel-item-end { - z-index: 0; - opacity: 0; - transition: opacity 0s 0.6s; -} -@media (prefers-reduced-motion: reduce) { - .carousel-fade .active.carousel-item-start, - .carousel-fade .active.carousel-item-end { - transition: none; - } -} - -.carousel-control-prev, -.carousel-control-next { - position: absolute; - top: 0; - bottom: 0; - z-index: 1; - display: flex; - align-items: center; - justify-content: center; - width: 15%; - padding: 0; - color: #fff; - text-align: center; - background: none; - border: 0; - opacity: 0.5; - transition: opacity 0.15s ease; -} -@media 
(prefers-reduced-motion: reduce) { - .carousel-control-prev, - .carousel-control-next { - transition: none; - } -} -.carousel-control-prev:hover, .carousel-control-prev:focus, -.carousel-control-next:hover, -.carousel-control-next:focus { - color: #fff; - text-decoration: none; - outline: 0; - opacity: 0.9; -} - -.carousel-control-prev { - left: 0; -} - -.carousel-control-next { - right: 0; -} - -.carousel-control-prev-icon, -.carousel-control-next-icon { - display: inline-block; - width: 2rem; - height: 2rem; - background-repeat: no-repeat; - background-position: 50%; - background-size: 100% 100%; -} - -/* rtl:options: { - "autoRename": true, - "stringMap":[ { - "name" : "prev-next", - "search" : "prev", - "replace" : "next" - } ] -} */ -.carousel-control-prev-icon { - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0z'/%3e%3c/svg%3e"); -} - -.carousel-control-next-icon { - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M4.646 1.646a.5.5 0 0 1 .708 0l6 6a.5.5 0 0 1 0 .708l-6 6a.5.5 0 0 1-.708-.708L10.293 8 4.646 2.354a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e"); -} - -.carousel-indicators { - position: absolute; - right: 0; - bottom: 0; - left: 0; - z-index: 2; - display: flex; - justify-content: center; - padding: 0; - margin-right: 15%; - margin-bottom: 1rem; - margin-left: 15%; - list-style: none; -} -.carousel-indicators [data-bs-target] { - box-sizing: content-box; - flex: 0 1 auto; - width: 30px; - height: 3px; - padding: 0; - margin-right: 3px; - margin-left: 3px; - text-indent: -999px; - cursor: pointer; - background-color: #fff; - background-clip: padding-box; - border: 0; - border-top: 10px solid transparent; - border-bottom: 10px solid transparent; - opacity: 0.5; - transition: opacity 0.6s ease; -} -@media (prefers-reduced-motion: reduce) { - .carousel-indicators [data-bs-target] { - transition: none; - } -} -.carousel-indicators .active { - opacity: 1; -} - -.carousel-caption { - position: absolute; - right: 15%; - bottom: 1.25rem; - left: 15%; - padding-top: 1.25rem; - padding-bottom: 1.25rem; - color: #fff; - text-align: center; -} - -.carousel-dark .carousel-control-prev-icon, -.carousel-dark .carousel-control-next-icon { - filter: invert(1) grayscale(100); -} -.carousel-dark .carousel-indicators [data-bs-target] { - background-color: #000; -} -.carousel-dark .carousel-caption { - color: #000; -} - -.spinner-grow, -.spinner-border { - display: inline-block; - width: var(--bs-spinner-width); - height: var(--bs-spinner-height); - vertical-align: var(--bs-spinner-vertical-align); - border-radius: 50%; - animation: var(--bs-spinner-animation-speed) linear infinite var(--bs-spinner-animation-name); -} - -@keyframes spinner-border { - to { - transform: rotate(360deg) /* rtl:ignore */; - } -} -.spinner-border { - --bs-spinner-width: 2rem; - --bs-spinner-height: 2rem; - --bs-spinner-vertical-align: -0.125em; - --bs-spinner-border-width: 0.25em; - --bs-spinner-animation-speed: 0.75s; - --bs-spinner-animation-name: spinner-border; - border: var(--bs-spinner-border-width) solid currentcolor; - border-right-color: transparent; -} - -.spinner-border-sm { - --bs-spinner-width: 1rem; - --bs-spinner-height: 1rem; - --bs-spinner-border-width: 0.2em; -} - -@keyframes spinner-grow { - 0% { - transform: scale(0); - } - 50% 
{ - opacity: 1; - transform: none; - } -} -.spinner-grow { - --bs-spinner-width: 2rem; - --bs-spinner-height: 2rem; - --bs-spinner-vertical-align: -0.125em; - --bs-spinner-animation-speed: 0.75s; - --bs-spinner-animation-name: spinner-grow; - background-color: currentcolor; - opacity: 0; -} - -.spinner-grow-sm { - --bs-spinner-width: 1rem; - --bs-spinner-height: 1rem; -} - -@media (prefers-reduced-motion: reduce) { - .spinner-border, - .spinner-grow { - --bs-spinner-animation-speed: 1.5s; - } -} -.offcanvas, .offcanvas-xxl, .offcanvas-xl, .offcanvas-lg, .offcanvas-md, .offcanvas-sm { - --bs-offcanvas-zindex: 1045; - --bs-offcanvas-width: 400px; - --bs-offcanvas-height: 30vh; - --bs-offcanvas-padding-x: 1rem; - --bs-offcanvas-padding-y: 1rem; - --bs-offcanvas-color: ; - --bs-offcanvas-bg: #fff; - --bs-offcanvas-border-width: 0.125rem; - --bs-offcanvas-border-color: var(--bs-border-color-translucent); - --bs-offcanvas-box-shadow: 0 0.125rem 0.25rem rgba(0, 0, 0, 0.075); -} - -@media (max-width: 575.98px) { - .offcanvas-sm { - position: fixed; - bottom: 0; - z-index: var(--bs-offcanvas-zindex); - display: flex; - flex-direction: column; - max-width: 100%; - color: var(--bs-offcanvas-color); - visibility: hidden; - background-color: var(--bs-offcanvas-bg); - background-clip: padding-box; - outline: 0; - transition: transform 0.3s ease-in-out; - } -} -@media (max-width: 575.98px) and (prefers-reduced-motion: reduce) { - .offcanvas-sm { - transition: none; - } -} -@media (max-width: 575.98px) { - .offcanvas-sm.offcanvas-start { - top: 0; - left: 0; - width: var(--bs-offcanvas-width); - border-right: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(-100%); - } -} -@media (max-width: 575.98px) { - .offcanvas-sm.offcanvas-end { - top: 0; - right: 0; - width: var(--bs-offcanvas-width); - border-left: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(100%); - } -} -@media (max-width: 575.98px) { - .offcanvas-sm.offcanvas-top { - top: 0; - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-bottom: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(-100%); - } -} -@media (max-width: 575.98px) { - .offcanvas-sm.offcanvas-bottom { - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-top: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(100%); - } -} -@media (max-width: 575.98px) { - .offcanvas-sm.showing, .offcanvas-sm.show:not(.hiding) { - transform: none; - } -} -@media (max-width: 575.98px) { - .offcanvas-sm.showing, .offcanvas-sm.hiding, .offcanvas-sm.show { - visibility: visible; - } -} -@media (min-width: 576px) { - .offcanvas-sm { - --bs-offcanvas-height: auto; - --bs-offcanvas-border-width: 0; - background-color: transparent !important; - } - .offcanvas-sm .offcanvas-header { - display: none; - } - .offcanvas-sm .offcanvas-body { - display: flex; - flex-grow: 0; - padding: 0; - overflow-y: visible; - background-color: transparent !important; - } -} - -@media (max-width: 767.98px) { - .offcanvas-md { - position: fixed; - bottom: 0; - z-index: var(--bs-offcanvas-zindex); - display: flex; - flex-direction: column; - max-width: 100%; - color: var(--bs-offcanvas-color); - visibility: hidden; - background-color: var(--bs-offcanvas-bg); - background-clip: padding-box; - outline: 0; - transition: transform 0.3s ease-in-out; - } -} -@media 
(max-width: 767.98px) and (prefers-reduced-motion: reduce) { - .offcanvas-md { - transition: none; - } -} -@media (max-width: 767.98px) { - .offcanvas-md.offcanvas-start { - top: 0; - left: 0; - width: var(--bs-offcanvas-width); - border-right: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(-100%); - } -} -@media (max-width: 767.98px) { - .offcanvas-md.offcanvas-end { - top: 0; - right: 0; - width: var(--bs-offcanvas-width); - border-left: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(100%); - } -} -@media (max-width: 767.98px) { - .offcanvas-md.offcanvas-top { - top: 0; - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-bottom: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(-100%); - } -} -@media (max-width: 767.98px) { - .offcanvas-md.offcanvas-bottom { - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-top: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(100%); - } -} -@media (max-width: 767.98px) { - .offcanvas-md.showing, .offcanvas-md.show:not(.hiding) { - transform: none; - } -} -@media (max-width: 767.98px) { - .offcanvas-md.showing, .offcanvas-md.hiding, .offcanvas-md.show { - visibility: visible; - } -} -@media (min-width: 768px) { - .offcanvas-md { - --bs-offcanvas-height: auto; - --bs-offcanvas-border-width: 0; - background-color: transparent !important; - } - .offcanvas-md .offcanvas-header { - display: none; - } - .offcanvas-md .offcanvas-body { - display: flex; - flex-grow: 0; - padding: 0; - overflow-y: visible; - background-color: transparent !important; - } -} - -@media (max-width: 991.98px) { - .offcanvas-lg { - position: fixed; - bottom: 0; - z-index: var(--bs-offcanvas-zindex); - display: flex; - flex-direction: column; - max-width: 100%; - color: var(--bs-offcanvas-color); - visibility: hidden; - background-color: var(--bs-offcanvas-bg); - background-clip: padding-box; - outline: 0; - transition: transform 0.3s ease-in-out; - } -} -@media (max-width: 991.98px) and (prefers-reduced-motion: reduce) { - .offcanvas-lg { - transition: none; - } -} -@media (max-width: 991.98px) { - .offcanvas-lg.offcanvas-start { - top: 0; - left: 0; - width: var(--bs-offcanvas-width); - border-right: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(-100%); - } -} -@media (max-width: 991.98px) { - .offcanvas-lg.offcanvas-end { - top: 0; - right: 0; - width: var(--bs-offcanvas-width); - border-left: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(100%); - } -} -@media (max-width: 991.98px) { - .offcanvas-lg.offcanvas-top { - top: 0; - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-bottom: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(-100%); - } -} -@media (max-width: 991.98px) { - .offcanvas-lg.offcanvas-bottom { - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-top: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(100%); - } -} -@media (max-width: 991.98px) { - .offcanvas-lg.showing, .offcanvas-lg.show:not(.hiding) { - transform: none; - } -} -@media (max-width: 991.98px) { - .offcanvas-lg.showing, .offcanvas-lg.hiding, .offcanvas-lg.show { - 
visibility: visible; - } -} -@media (min-width: 992px) { - .offcanvas-lg { - --bs-offcanvas-height: auto; - --bs-offcanvas-border-width: 0; - background-color: transparent !important; - } - .offcanvas-lg .offcanvas-header { - display: none; - } - .offcanvas-lg .offcanvas-body { - display: flex; - flex-grow: 0; - padding: 0; - overflow-y: visible; - background-color: transparent !important; - } -} - -@media (max-width: 1199.98px) { - .offcanvas-xl { - position: fixed; - bottom: 0; - z-index: var(--bs-offcanvas-zindex); - display: flex; - flex-direction: column; - max-width: 100%; - color: var(--bs-offcanvas-color); - visibility: hidden; - background-color: var(--bs-offcanvas-bg); - background-clip: padding-box; - outline: 0; - transition: transform 0.3s ease-in-out; - } -} -@media (max-width: 1199.98px) and (prefers-reduced-motion: reduce) { - .offcanvas-xl { - transition: none; - } -} -@media (max-width: 1199.98px) { - .offcanvas-xl.offcanvas-start { - top: 0; - left: 0; - width: var(--bs-offcanvas-width); - border-right: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(-100%); - } -} -@media (max-width: 1199.98px) { - .offcanvas-xl.offcanvas-end { - top: 0; - right: 0; - width: var(--bs-offcanvas-width); - border-left: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(100%); - } -} -@media (max-width: 1199.98px) { - .offcanvas-xl.offcanvas-top { - top: 0; - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-bottom: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(-100%); - } -} -@media (max-width: 1199.98px) { - .offcanvas-xl.offcanvas-bottom { - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-top: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(100%); - } -} -@media (max-width: 1199.98px) { - .offcanvas-xl.showing, .offcanvas-xl.show:not(.hiding) { - transform: none; - } -} -@media (max-width: 1199.98px) { - .offcanvas-xl.showing, .offcanvas-xl.hiding, .offcanvas-xl.show { - visibility: visible; - } -} -@media (min-width: 1200px) { - .offcanvas-xl { - --bs-offcanvas-height: auto; - --bs-offcanvas-border-width: 0; - background-color: transparent !important; - } - .offcanvas-xl .offcanvas-header { - display: none; - } - .offcanvas-xl .offcanvas-body { - display: flex; - flex-grow: 0; - padding: 0; - overflow-y: visible; - background-color: transparent !important; - } -} - -@media (max-width: 1399.98px) { - .offcanvas-xxl { - position: fixed; - bottom: 0; - z-index: var(--bs-offcanvas-zindex); - display: flex; - flex-direction: column; - max-width: 100%; - color: var(--bs-offcanvas-color); - visibility: hidden; - background-color: var(--bs-offcanvas-bg); - background-clip: padding-box; - outline: 0; - transition: transform 0.3s ease-in-out; - } -} -@media (max-width: 1399.98px) and (prefers-reduced-motion: reduce) { - .offcanvas-xxl { - transition: none; - } -} -@media (max-width: 1399.98px) { - .offcanvas-xxl.offcanvas-start { - top: 0; - left: 0; - width: var(--bs-offcanvas-width); - border-right: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(-100%); - } -} -@media (max-width: 1399.98px) { - .offcanvas-xxl.offcanvas-end { - top: 0; - right: 0; - width: var(--bs-offcanvas-width); - border-left: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - 
transform: translateX(100%); - } -} -@media (max-width: 1399.98px) { - .offcanvas-xxl.offcanvas-top { - top: 0; - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-bottom: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(-100%); - } -} -@media (max-width: 1399.98px) { - .offcanvas-xxl.offcanvas-bottom { - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-top: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(100%); - } -} -@media (max-width: 1399.98px) { - .offcanvas-xxl.showing, .offcanvas-xxl.show:not(.hiding) { - transform: none; - } -} -@media (max-width: 1399.98px) { - .offcanvas-xxl.showing, .offcanvas-xxl.hiding, .offcanvas-xxl.show { - visibility: visible; - } -} -@media (min-width: 1400px) { - .offcanvas-xxl { - --bs-offcanvas-height: auto; - --bs-offcanvas-border-width: 0; - background-color: transparent !important; - } - .offcanvas-xxl .offcanvas-header { - display: none; - } - .offcanvas-xxl .offcanvas-body { - display: flex; - flex-grow: 0; - padding: 0; - overflow-y: visible; - background-color: transparent !important; - } -} - -.offcanvas { - position: fixed; - bottom: 0; - z-index: var(--bs-offcanvas-zindex); - display: flex; - flex-direction: column; - max-width: 100%; - color: var(--bs-offcanvas-color); - visibility: hidden; - background-color: var(--bs-offcanvas-bg); - background-clip: padding-box; - outline: 0; - transition: transform 0.3s ease-in-out; -} -@media (prefers-reduced-motion: reduce) { - .offcanvas { - transition: none; - } -} -.offcanvas.offcanvas-start { - top: 0; - left: 0; - width: var(--bs-offcanvas-width); - border-right: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(-100%); -} -.offcanvas.offcanvas-end { - top: 0; - right: 0; - width: var(--bs-offcanvas-width); - border-left: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(100%); -} -.offcanvas.offcanvas-top { - top: 0; - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-bottom: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(-100%); -} -.offcanvas.offcanvas-bottom { - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-top: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(100%); -} -.offcanvas.showing, .offcanvas.show:not(.hiding) { - transform: none; -} -.offcanvas.showing, .offcanvas.hiding, .offcanvas.show { - visibility: visible; -} - -.offcanvas-backdrop { - position: fixed; - top: 0; - left: 0; - z-index: 1040; - width: 100vw; - height: 100vh; - background-color: #000; -} -.offcanvas-backdrop.fade { - opacity: 0; -} -.offcanvas-backdrop.show { - opacity: 0.5; -} - -.offcanvas-header { - display: flex; - align-items: center; - justify-content: space-between; - padding: var(--bs-offcanvas-padding-y) var(--bs-offcanvas-padding-x); -} -.offcanvas-header .btn-close { - padding: calc(var(--bs-offcanvas-padding-y) * 0.5) calc(var(--bs-offcanvas-padding-x) * 0.5); - margin-top: calc(-0.5 * var(--bs-offcanvas-padding-y)); - margin-right: calc(-0.5 * var(--bs-offcanvas-padding-x)); - margin-bottom: calc(-0.5 * var(--bs-offcanvas-padding-y)); -} - -.offcanvas-title { - margin-bottom: 0; - line-height: 1.5; -} - -.offcanvas-body { - flex-grow: 1; - padding: 
var(--bs-offcanvas-padding-y) var(--bs-offcanvas-padding-x); - overflow-y: auto; -} - -.placeholder { - display: inline-block; - min-height: 1em; - vertical-align: middle; - cursor: wait; - background-color: currentcolor; - opacity: 0.5; -} -.placeholder.btn::before { - display: inline-block; - content: ""; -} - -.placeholder-xs { - min-height: 0.6em; -} - -.placeholder-sm { - min-height: 0.8em; -} - -.placeholder-lg { - min-height: 1.2em; -} - -.placeholder-glow .placeholder { - animation: placeholder-glow 2s ease-in-out infinite; -} - -@keyframes placeholder-glow { - 50% { - opacity: 0.2; - } -} -.placeholder-wave { - -webkit-mask-image: linear-gradient(130deg, #000 55%, rgba(0, 0, 0, 0.8) 75%, #000 95%); - mask-image: linear-gradient(130deg, #000 55%, rgba(0, 0, 0, 0.8) 75%, #000 95%); - -webkit-mask-size: 200% 100%; - mask-size: 200% 100%; - animation: placeholder-wave 2s linear infinite; -} - -@keyframes placeholder-wave { - 100% { - -webkit-mask-position: -200% 0%; - mask-position: -200% 0%; - } -} -.clearfix::after { - display: block; - clear: both; - content: ""; -} - -.text-bg-primary { - color: #fff !important; - background-color: RGBA(26, 188, 156, var(--bs-bg-opacity, 1)) !important; -} - -.text-bg-secondary { - color: #fff !important; - background-color: RGBA(44, 62, 80, var(--bs-bg-opacity, 1)) !important; -} - -.text-bg-success { - color: #fff !important; - background-color: RGBA(25, 135, 84, var(--bs-bg-opacity, 1)) !important; -} - -.text-bg-info { - color: #000 !important; - background-color: RGBA(13, 202, 240, var(--bs-bg-opacity, 1)) !important; -} - -.text-bg-warning { - color: #000 !important; - background-color: RGBA(255, 193, 7, var(--bs-bg-opacity, 1)) !important; -} - -.text-bg-danger { - color: #fff !important; - background-color: RGBA(220, 53, 69, var(--bs-bg-opacity, 1)) !important; -} - -.text-bg-light { - color: #000 !important; - background-color: RGBA(248, 249, 250, var(--bs-bg-opacity, 1)) !important; -} - -.text-bg-dark { - color: #fff !important; - background-color: RGBA(33, 37, 41, var(--bs-bg-opacity, 1)) !important; -} - -.link-primary { - color: #1abc9c !important; -} -.link-primary:hover, .link-primary:focus { - color: #15967d !important; -} - -.link-secondary { - color: #2c3e50 !important; -} -.link-secondary:hover, .link-secondary:focus { - color: #233240 !important; -} - -.link-success { - color: #198754 !important; -} -.link-success:hover, .link-success:focus { - color: #146c43 !important; -} - -.link-info { - color: #0dcaf0 !important; -} -.link-info:hover, .link-info:focus { - color: #3dd5f3 !important; -} - -.link-warning { - color: #ffc107 !important; -} -.link-warning:hover, .link-warning:focus { - color: #ffcd39 !important; -} - -.link-danger { - color: #dc3545 !important; -} -.link-danger:hover, .link-danger:focus { - color: #b02a37 !important; -} - -.link-light { - color: #f8f9fa !important; -} -.link-light:hover, .link-light:focus { - color: #f9fafb !important; -} - -.link-dark { - color: #212529 !important; -} -.link-dark:hover, .link-dark:focus { - color: #1a1e21 !important; -} - -.ratio { - position: relative; - width: 100%; -} -.ratio::before { - display: block; - padding-top: var(--bs-aspect-ratio); - content: ""; -} -.ratio > * { - position: absolute; - top: 0; - left: 0; - width: 100%; - height: 100%; -} - -.ratio-1x1 { - --bs-aspect-ratio: 100%; -} - -.ratio-4x3 { - --bs-aspect-ratio: 75%; -} - -.ratio-16x9 { - --bs-aspect-ratio: 56.25%; -} - -.ratio-21x9 { - --bs-aspect-ratio: 42.8571428571%; -} - -.fixed-top { - position: 
fixed; - top: 0; - right: 0; - left: 0; - z-index: 1030; -} - -.fixed-bottom { - position: fixed; - right: 0; - bottom: 0; - left: 0; - z-index: 1030; -} - -.sticky-top { - position: sticky; - top: 0; - z-index: 1020; -} - -.sticky-bottom { - position: sticky; - bottom: 0; - z-index: 1020; -} - -@media (min-width: 576px) { - .sticky-sm-top { - position: sticky; - top: 0; - z-index: 1020; - } - .sticky-sm-bottom { - position: sticky; - bottom: 0; - z-index: 1020; - } -} -@media (min-width: 768px) { - .sticky-md-top { - position: sticky; - top: 0; - z-index: 1020; - } - .sticky-md-bottom { - position: sticky; - bottom: 0; - z-index: 1020; - } -} -@media (min-width: 992px) { - .sticky-lg-top { - position: sticky; - top: 0; - z-index: 1020; - } - .sticky-lg-bottom { - position: sticky; - bottom: 0; - z-index: 1020; - } -} -@media (min-width: 1200px) { - .sticky-xl-top { - position: sticky; - top: 0; - z-index: 1020; - } - .sticky-xl-bottom { - position: sticky; - bottom: 0; - z-index: 1020; - } -} -@media (min-width: 1400px) { - .sticky-xxl-top { - position: sticky; - top: 0; - z-index: 1020; - } - .sticky-xxl-bottom { - position: sticky; - bottom: 0; - z-index: 1020; - } -} -.hstack { - display: flex; - flex-direction: row; - align-items: center; - align-self: stretch; -} - -.vstack { - display: flex; - flex: 1 1 auto; - flex-direction: column; - align-self: stretch; -} - -.visually-hidden, -.visually-hidden-focusable:not(:focus):not(:focus-within) { - position: absolute !important; - width: 1px !important; - height: 1px !important; - padding: 0 !important; - margin: -1px !important; - overflow: hidden !important; - clip: rect(0, 0, 0, 0) !important; - white-space: nowrap !important; - border: 0 !important; -} - -.stretched-link::after { - position: absolute; - top: 0; - right: 0; - bottom: 0; - left: 0; - z-index: 1; - content: ""; -} - -.text-truncate { - overflow: hidden; - text-overflow: ellipsis; - white-space: nowrap; -} - -.vr { - display: inline-block; - align-self: stretch; - width: 1px; - min-height: 1em; - background-color: currentcolor; - opacity: 0.25; -} - -.align-baseline { - vertical-align: baseline !important; -} - -.align-top { - vertical-align: top !important; -} - -.align-middle { - vertical-align: middle !important; -} - -.align-bottom { - vertical-align: bottom !important; -} - -.align-text-bottom { - vertical-align: text-bottom !important; -} - -.align-text-top { - vertical-align: text-top !important; -} - -.float-start { - float: left !important; -} - -.float-end { - float: right !important; -} - -.float-none { - float: none !important; -} - -.opacity-0 { - opacity: 0 !important; -} - -.opacity-25 { - opacity: 0.25 !important; -} - -.opacity-50 { - opacity: 0.5 !important; -} - -.opacity-75 { - opacity: 0.75 !important; -} - -.opacity-100 { - opacity: 1 !important; -} - -.overflow-auto { - overflow: auto !important; -} - -.overflow-hidden { - overflow: hidden !important; -} - -.overflow-visible { - overflow: visible !important; -} - -.overflow-scroll { - overflow: scroll !important; -} - -.d-inline { - display: inline !important; -} - -.d-inline-block { - display: inline-block !important; -} - -.d-block { - display: block !important; -} - -.d-grid { - display: grid !important; -} - -.d-table { - display: table !important; -} - -.d-table-row { - display: table-row !important; -} - -.d-table-cell { - display: table-cell !important; -} - -.d-flex { - display: flex !important; -} - -.d-inline-flex { - display: inline-flex !important; -} - -.d-none { - display: none 
!important; -} - -.shadow { - box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15) !important; -} - -.shadow-sm { - box-shadow: 0 0.125rem 0.25rem rgba(0, 0, 0, 0.075) !important; -} - -.shadow-lg { - box-shadow: 0 1rem 3rem rgba(0, 0, 0, 0.175) !important; -} - -.shadow-none { - box-shadow: none !important; -} - -.position-static { - position: static !important; -} - -.position-relative { - position: relative !important; -} - -.position-absolute { - position: absolute !important; -} - -.position-fixed { - position: fixed !important; -} - -.position-sticky { - position: sticky !important; -} - -.top-0 { - top: 0 !important; -} - -.top-50 { - top: 50% !important; -} - -.top-100 { - top: 100% !important; -} - -.bottom-0 { - bottom: 0 !important; -} - -.bottom-50 { - bottom: 50% !important; -} - -.bottom-100 { - bottom: 100% !important; -} - -.start-0 { - left: 0 !important; -} - -.start-50 { - left: 50% !important; -} - -.start-100 { - left: 100% !important; -} - -.end-0 { - right: 0 !important; -} - -.end-50 { - right: 50% !important; -} - -.end-100 { - right: 100% !important; -} - -.translate-middle { - transform: translate(-50%, -50%) !important; -} - -.translate-middle-x { - transform: translateX(-50%) !important; -} - -.translate-middle-y { - transform: translateY(-50%) !important; -} - -.border { - border: var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important; -} - -.border-0 { - border: 0 !important; -} - -.border-top { - border-top: var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important; -} - -.border-top-0 { - border-top: 0 !important; -} - -.border-end { - border-right: var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important; -} - -.border-end-0 { - border-right: 0 !important; -} - -.border-bottom { - border-bottom: var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important; -} - -.border-bottom-0 { - border-bottom: 0 !important; -} - -.border-start { - border-left: var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important; -} - -.border-start-0 { - border-left: 0 !important; -} - -.border-primary { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-primary-rgb), var(--bs-border-opacity)) !important; -} - -.border-secondary { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-secondary-rgb), var(--bs-border-opacity)) !important; -} - -.border-success { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-success-rgb), var(--bs-border-opacity)) !important; -} - -.border-info { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-info-rgb), var(--bs-border-opacity)) !important; -} - -.border-warning { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-warning-rgb), var(--bs-border-opacity)) !important; -} - -.border-danger { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-danger-rgb), var(--bs-border-opacity)) !important; -} - -.border-light { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-light-rgb), var(--bs-border-opacity)) !important; -} - -.border-dark { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-dark-rgb), var(--bs-border-opacity)) !important; -} - -.border-white { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-white-rgb), var(--bs-border-opacity)) !important; -} - -.border-1 { - --bs-border-width: 1px; -} - -.border-2 { - --bs-border-width: 2px; -} - -.border-3 { - --bs-border-width: 3px; -} - -.border-4 { - --bs-border-width: 4px; -} - -.border-5 { - --bs-border-width: 5px; -} - -.border-opacity-10 { - 
--bs-border-opacity: 0.1; -} - -.border-opacity-25 { - --bs-border-opacity: 0.25; -} - -.border-opacity-50 { - --bs-border-opacity: 0.5; -} - -.border-opacity-75 { - --bs-border-opacity: 0.75; -} - -.border-opacity-100 { - --bs-border-opacity: 1; -} - -.w-25 { - width: 25% !important; -} - -.w-50 { - width: 50% !important; -} - -.w-75 { - width: 75% !important; -} - -.w-100 { - width: 100% !important; -} - -.w-auto { - width: auto !important; -} - -.mw-100 { - max-width: 100% !important; -} - -.vw-100 { - width: 100vw !important; -} - -.min-vw-100 { - min-width: 100vw !important; -} - -.h-25 { - height: 25% !important; -} - -.h-50 { - height: 50% !important; -} - -.h-75 { - height: 75% !important; -} - -.h-100 { - height: 100% !important; -} - -.h-auto { - height: auto !important; -} - -.mh-100 { - max-height: 100% !important; -} - -.vh-100 { - height: 100vh !important; -} - -.min-vh-100 { - min-height: 100vh !important; -} - -.flex-fill { - flex: 1 1 auto !important; -} - -.flex-row { - flex-direction: row !important; -} - -.flex-column { - flex-direction: column !important; -} - -.flex-row-reverse { - flex-direction: row-reverse !important; -} - -.flex-column-reverse { - flex-direction: column-reverse !important; -} - -.flex-grow-0 { - flex-grow: 0 !important; -} - -.flex-grow-1 { - flex-grow: 1 !important; -} - -.flex-shrink-0 { - flex-shrink: 0 !important; -} - -.flex-shrink-1 { - flex-shrink: 1 !important; -} - -.flex-wrap { - flex-wrap: wrap !important; -} - -.flex-nowrap { - flex-wrap: nowrap !important; -} - -.flex-wrap-reverse { - flex-wrap: wrap-reverse !important; -} - -.justify-content-start { - justify-content: flex-start !important; -} - -.justify-content-end { - justify-content: flex-end !important; -} - -.justify-content-center { - justify-content: center !important; -} - -.justify-content-between { - justify-content: space-between !important; -} - -.justify-content-around { - justify-content: space-around !important; -} - -.justify-content-evenly { - justify-content: space-evenly !important; -} - -.align-items-start { - align-items: flex-start !important; -} - -.align-items-end { - align-items: flex-end !important; -} - -.align-items-center { - align-items: center !important; -} - -.align-items-baseline { - align-items: baseline !important; -} - -.align-items-stretch { - align-items: stretch !important; -} - -.align-content-start { - align-content: flex-start !important; -} - -.align-content-end { - align-content: flex-end !important; -} - -.align-content-center { - align-content: center !important; -} - -.align-content-between { - align-content: space-between !important; -} - -.align-content-around { - align-content: space-around !important; -} - -.align-content-stretch { - align-content: stretch !important; -} - -.align-self-auto { - align-self: auto !important; -} - -.align-self-start { - align-self: flex-start !important; -} - -.align-self-end { - align-self: flex-end !important; -} - -.align-self-center { - align-self: center !important; -} - -.align-self-baseline { - align-self: baseline !important; -} - -.align-self-stretch { - align-self: stretch !important; -} - -.order-first { - order: -1 !important; -} - -.order-0 { - order: 0 !important; -} - -.order-1 { - order: 1 !important; -} - -.order-2 { - order: 2 !important; -} - -.order-3 { - order: 3 !important; -} - -.order-4 { - order: 4 !important; -} - -.order-5 { - order: 5 !important; -} - -.order-last { - order: 6 !important; -} - -.m-0 { - margin: 0 !important; -} - -.m-1 { - margin: 0.25rem !important; -} - 
-.m-2 { - margin: 0.5rem !important; -} - -.m-3 { - margin: 1rem !important; -} - -.m-4 { - margin: 1.5rem !important; -} - -.m-5 { - margin: 3rem !important; -} - -.m-auto { - margin: auto !important; -} - -.mx-0 { - margin-right: 0 !important; - margin-left: 0 !important; -} - -.mx-1 { - margin-right: 0.25rem !important; - margin-left: 0.25rem !important; -} - -.mx-2 { - margin-right: 0.5rem !important; - margin-left: 0.5rem !important; -} - -.mx-3 { - margin-right: 1rem !important; - margin-left: 1rem !important; -} - -.mx-4 { - margin-right: 1.5rem !important; - margin-left: 1.5rem !important; -} - -.mx-5 { - margin-right: 3rem !important; - margin-left: 3rem !important; -} - -.mx-auto { - margin-right: auto !important; - margin-left: auto !important; -} - -.my-0 { - margin-top: 0 !important; - margin-bottom: 0 !important; -} - -.my-1 { - margin-top: 0.25rem !important; - margin-bottom: 0.25rem !important; -} - -.my-2 { - margin-top: 0.5rem !important; - margin-bottom: 0.5rem !important; -} - -.my-3 { - margin-top: 1rem !important; - margin-bottom: 1rem !important; -} - -.my-4 { - margin-top: 1.5rem !important; - margin-bottom: 1.5rem !important; -} - -.my-5 { - margin-top: 3rem !important; - margin-bottom: 3rem !important; -} - -.my-auto { - margin-top: auto !important; - margin-bottom: auto !important; -} - -.mt-0 { - margin-top: 0 !important; -} - -.mt-1 { - margin-top: 0.25rem !important; -} - -.mt-2 { - margin-top: 0.5rem !important; -} - -.mt-3 { - margin-top: 1rem !important; -} - -.mt-4 { - margin-top: 1.5rem !important; -} - -.mt-5 { - margin-top: 3rem !important; -} - -.mt-auto { - margin-top: auto !important; -} - -.me-0 { - margin-right: 0 !important; -} - -.me-1 { - margin-right: 0.25rem !important; -} - -.me-2 { - margin-right: 0.5rem !important; -} - -.me-3 { - margin-right: 1rem !important; -} - -.me-4 { - margin-right: 1.5rem !important; -} - -.me-5 { - margin-right: 3rem !important; -} - -.me-auto { - margin-right: auto !important; -} - -.mb-0 { - margin-bottom: 0 !important; -} - -.mb-1 { - margin-bottom: 0.25rem !important; -} - -.mb-2 { - margin-bottom: 0.5rem !important; -} - -.mb-3 { - margin-bottom: 1rem !important; -} - -.mb-4 { - margin-bottom: 1.5rem !important; -} - -.mb-5 { - margin-bottom: 3rem !important; -} - -.mb-auto { - margin-bottom: auto !important; -} - -.ms-0 { - margin-left: 0 !important; -} - -.ms-1 { - margin-left: 0.25rem !important; -} - -.ms-2 { - margin-left: 0.5rem !important; -} - -.ms-3 { - margin-left: 1rem !important; -} - -.ms-4 { - margin-left: 1.5rem !important; -} - -.ms-5 { - margin-left: 3rem !important; -} - -.ms-auto { - margin-left: auto !important; -} - -.p-0 { - padding: 0 !important; -} - -.p-1 { - padding: 0.25rem !important; -} - -.p-2 { - padding: 0.5rem !important; -} - -.p-3 { - padding: 1rem !important; -} - -.p-4 { - padding: 1.5rem !important; -} - -.p-5 { - padding: 3rem !important; -} - -.px-0 { - padding-right: 0 !important; - padding-left: 0 !important; -} - -.px-1 { - padding-right: 0.25rem !important; - padding-left: 0.25rem !important; -} - -.px-2 { - padding-right: 0.5rem !important; - padding-left: 0.5rem !important; -} - -.px-3 { - padding-right: 1rem !important; - padding-left: 1rem !important; -} - -.px-4 { - padding-right: 1.5rem !important; - padding-left: 1.5rem !important; -} - -.px-5 { - padding-right: 3rem !important; - padding-left: 3rem !important; -} - -.py-0 { - padding-top: 0 !important; - padding-bottom: 0 !important; -} - -.py-1 { - padding-top: 0.25rem !important; - padding-bottom: 0.25rem 
!important; -} - -.py-2 { - padding-top: 0.5rem !important; - padding-bottom: 0.5rem !important; -} - -.py-3 { - padding-top: 1rem !important; - padding-bottom: 1rem !important; -} - -.py-4 { - padding-top: 1.5rem !important; - padding-bottom: 1.5rem !important; -} - -.py-5 { - padding-top: 3rem !important; - padding-bottom: 3rem !important; -} - -.pt-0 { - padding-top: 0 !important; -} - -.pt-1 { - padding-top: 0.25rem !important; -} - -.pt-2 { - padding-top: 0.5rem !important; -} - -.pt-3 { - padding-top: 1rem !important; -} - -.pt-4 { - padding-top: 1.5rem !important; -} - -.pt-5 { - padding-top: 3rem !important; -} - -.pe-0 { - padding-right: 0 !important; -} - -.pe-1 { - padding-right: 0.25rem !important; -} - -.pe-2 { - padding-right: 0.5rem !important; -} - -.pe-3 { - padding-right: 1rem !important; -} - -.pe-4 { - padding-right: 1.5rem !important; -} - -.pe-5 { - padding-right: 3rem !important; -} - -.pb-0 { - padding-bottom: 0 !important; -} - -.pb-1 { - padding-bottom: 0.25rem !important; -} - -.pb-2 { - padding-bottom: 0.5rem !important; -} - -.pb-3 { - padding-bottom: 1rem !important; -} - -.pb-4 { - padding-bottom: 1.5rem !important; -} - -.pb-5 { - padding-bottom: 3rem !important; -} - -.ps-0 { - padding-left: 0 !important; -} - -.ps-1 { - padding-left: 0.25rem !important; -} - -.ps-2 { - padding-left: 0.5rem !important; -} - -.ps-3 { - padding-left: 1rem !important; -} - -.ps-4 { - padding-left: 1.5rem !important; -} - -.ps-5 { - padding-left: 3rem !important; -} - -.gap-0 { - gap: 0 !important; -} - -.gap-1 { - gap: 0.25rem !important; -} - -.gap-2 { - gap: 0.5rem !important; -} - -.gap-3 { - gap: 1rem !important; -} - -.gap-4 { - gap: 1.5rem !important; -} - -.gap-5 { - gap: 3rem !important; -} - -.font-monospace { - font-family: var(--bs-font-monospace) !important; -} - -.fs-1 { - font-size: calc(1.375rem + 1.5vw) !important; -} - -.fs-2 { - font-size: calc(1.325rem + 0.9vw) !important; -} - -.fs-3 { - font-size: calc(1.3rem + 0.6vw) !important; -} - -.fs-4 { - font-size: calc(1.275rem + 0.3vw) !important; -} - -.fs-5 { - font-size: 1.25rem !important; -} - -.fs-6 { - font-size: 1rem !important; -} - -.fst-italic { - font-style: italic !important; -} - -.fst-normal { - font-style: normal !important; -} - -.fw-light { - font-weight: 300 !important; -} - -.fw-lighter { - font-weight: lighter !important; -} - -.fw-normal { - font-weight: 400 !important; -} - -.fw-bold { - font-weight: 700 !important; -} - -.fw-semibold { - font-weight: 600 !important; -} - -.fw-bolder { - font-weight: bolder !important; -} - -.lh-1 { - line-height: 1 !important; -} - -.lh-sm { - line-height: 1.25 !important; -} - -.lh-base { - line-height: 1.5 !important; -} - -.lh-lg { - line-height: 2 !important; -} - -.text-start { - text-align: left !important; -} - -.text-end { - text-align: right !important; -} - -.text-center { - text-align: center !important; -} - -.text-decoration-none { - text-decoration: none !important; -} - -.text-decoration-underline { - text-decoration: underline !important; -} - -.text-decoration-line-through { - text-decoration: line-through !important; -} - -.text-lowercase { - text-transform: lowercase !important; -} - -.text-uppercase { - text-transform: uppercase !important; -} - -.text-capitalize { - text-transform: capitalize !important; -} - -.text-wrap { - white-space: normal !important; -} - -.text-nowrap { - white-space: nowrap !important; -} - -/* rtl:begin:remove */ -.text-break { - word-wrap: break-word !important; - word-break: break-word !important; -} - -/* 
rtl:end:remove */ -.text-primary { - --bs-text-opacity: 1; - color: rgba(var(--bs-primary-rgb), var(--bs-text-opacity)) !important; -} - -.text-secondary { - --bs-text-opacity: 1; - color: rgba(var(--bs-secondary-rgb), var(--bs-text-opacity)) !important; -} - -.text-success { - --bs-text-opacity: 1; - color: rgba(var(--bs-success-rgb), var(--bs-text-opacity)) !important; -} - -.text-info { - --bs-text-opacity: 1; - color: rgba(var(--bs-info-rgb), var(--bs-text-opacity)) !important; -} - -.text-warning { - --bs-text-opacity: 1; - color: rgba(var(--bs-warning-rgb), var(--bs-text-opacity)) !important; -} - -.text-danger { - --bs-text-opacity: 1; - color: rgba(var(--bs-danger-rgb), var(--bs-text-opacity)) !important; -} - -.text-light { - --bs-text-opacity: 1; - color: rgba(var(--bs-light-rgb), var(--bs-text-opacity)) !important; -} - -.text-dark { - --bs-text-opacity: 1; - color: rgba(var(--bs-dark-rgb), var(--bs-text-opacity)) !important; -} - -.text-black { - --bs-text-opacity: 1; - color: rgba(var(--bs-black-rgb), var(--bs-text-opacity)) !important; -} - -.text-white { - --bs-text-opacity: 1; - color: rgba(var(--bs-white-rgb), var(--bs-text-opacity)) !important; -} - -.text-body { - --bs-text-opacity: 1; - color: rgba(var(--bs-body-color-rgb), var(--bs-text-opacity)) !important; -} - -.text-muted { - --bs-text-opacity: 1; - color: #6c757d !important; -} - -.text-black-50 { - --bs-text-opacity: 1; - color: rgba(0, 0, 0, 0.5) !important; -} - -.text-white-50 { - --bs-text-opacity: 1; - color: rgba(255, 255, 255, 0.5) !important; -} - -.text-reset { - --bs-text-opacity: 1; - color: inherit !important; -} - -.text-opacity-25 { - --bs-text-opacity: 0.25; -} - -.text-opacity-50 { - --bs-text-opacity: 0.5; -} - -.text-opacity-75 { - --bs-text-opacity: 0.75; -} - -.text-opacity-100 { - --bs-text-opacity: 1; -} - -.bg-primary { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-primary-rgb), var(--bs-bg-opacity)) !important; -} - -.bg-secondary { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-secondary-rgb), var(--bs-bg-opacity)) !important; -} - -.bg-success { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-success-rgb), var(--bs-bg-opacity)) !important; -} - -.bg-info { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-info-rgb), var(--bs-bg-opacity)) !important; -} - -.bg-warning { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-warning-rgb), var(--bs-bg-opacity)) !important; -} - -.bg-danger { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-danger-rgb), var(--bs-bg-opacity)) !important; -} - -.bg-light { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-light-rgb), var(--bs-bg-opacity)) !important; -} - -.bg-dark { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-dark-rgb), var(--bs-bg-opacity)) !important; -} - -.bg-black { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-black-rgb), var(--bs-bg-opacity)) !important; -} - -.bg-white { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-white-rgb), var(--bs-bg-opacity)) !important; -} - -.bg-body { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-body-bg-rgb), var(--bs-bg-opacity)) !important; -} - -.bg-transparent { - --bs-bg-opacity: 1; - background-color: transparent !important; -} - -.bg-opacity-10 { - --bs-bg-opacity: 0.1; -} - -.bg-opacity-25 { - --bs-bg-opacity: 0.25; -} - -.bg-opacity-50 { - --bs-bg-opacity: 0.5; -} - -.bg-opacity-75 { - --bs-bg-opacity: 0.75; -} - -.bg-opacity-100 { - --bs-bg-opacity: 1; -} - -.bg-gradient { - background-image: 
var(--bs-gradient) !important; -} - -.user-select-all { - -webkit-user-select: all !important; - -moz-user-select: all !important; - user-select: all !important; -} - -.user-select-auto { - -webkit-user-select: auto !important; - -moz-user-select: auto !important; - user-select: auto !important; -} - -.user-select-none { - -webkit-user-select: none !important; - -moz-user-select: none !important; - user-select: none !important; -} - -.pe-none { - pointer-events: none !important; -} - -.pe-auto { - pointer-events: auto !important; -} - -.rounded { - border-radius: var(--bs-border-radius) !important; -} - -.rounded-0 { - border-radius: 0 !important; -} - -.rounded-1 { - border-radius: var(--bs-border-radius-sm) !important; -} - -.rounded-2 { - border-radius: var(--bs-border-radius) !important; -} - -.rounded-3 { - border-radius: var(--bs-border-radius-lg) !important; -} - -.rounded-4 { - border-radius: var(--bs-border-radius-xl) !important; -} - -.rounded-5 { - border-radius: var(--bs-border-radius-2xl) !important; -} - -.rounded-circle { - border-radius: 50% !important; -} - -.rounded-pill { - border-radius: var(--bs-border-radius-pill) !important; -} - -.rounded-top { - border-top-left-radius: var(--bs-border-radius) !important; - border-top-right-radius: var(--bs-border-radius) !important; -} - -.rounded-end { - border-top-right-radius: var(--bs-border-radius) !important; - border-bottom-right-radius: var(--bs-border-radius) !important; -} - -.rounded-bottom { - border-bottom-right-radius: var(--bs-border-radius) !important; - border-bottom-left-radius: var(--bs-border-radius) !important; -} - -.rounded-start { - border-bottom-left-radius: var(--bs-border-radius) !important; - border-top-left-radius: var(--bs-border-radius) !important; -} - -.visible { - visibility: visible !important; -} - -.invisible { - visibility: hidden !important; -} - -@media (min-width: 576px) { - .float-sm-start { - float: left !important; - } - .float-sm-end { - float: right !important; - } - .float-sm-none { - float: none !important; - } - .d-sm-inline { - display: inline !important; - } - .d-sm-inline-block { - display: inline-block !important; - } - .d-sm-block { - display: block !important; - } - .d-sm-grid { - display: grid !important; - } - .d-sm-table { - display: table !important; - } - .d-sm-table-row { - display: table-row !important; - } - .d-sm-table-cell { - display: table-cell !important; - } - .d-sm-flex { - display: flex !important; - } - .d-sm-inline-flex { - display: inline-flex !important; - } - .d-sm-none { - display: none !important; - } - .flex-sm-fill { - flex: 1 1 auto !important; - } - .flex-sm-row { - flex-direction: row !important; - } - .flex-sm-column { - flex-direction: column !important; - } - .flex-sm-row-reverse { - flex-direction: row-reverse !important; - } - .flex-sm-column-reverse { - flex-direction: column-reverse !important; - } - .flex-sm-grow-0 { - flex-grow: 0 !important; - } - .flex-sm-grow-1 { - flex-grow: 1 !important; - } - .flex-sm-shrink-0 { - flex-shrink: 0 !important; - } - .flex-sm-shrink-1 { - flex-shrink: 1 !important; - } - .flex-sm-wrap { - flex-wrap: wrap !important; - } - .flex-sm-nowrap { - flex-wrap: nowrap !important; - } - .flex-sm-wrap-reverse { - flex-wrap: wrap-reverse !important; - } - .justify-content-sm-start { - justify-content: flex-start !important; - } - .justify-content-sm-end { - justify-content: flex-end !important; - } - .justify-content-sm-center { - justify-content: center !important; - } - .justify-content-sm-between { - 
justify-content: space-between !important; - } - .justify-content-sm-around { - justify-content: space-around !important; - } - .justify-content-sm-evenly { - justify-content: space-evenly !important; - } - .align-items-sm-start { - align-items: flex-start !important; - } - .align-items-sm-end { - align-items: flex-end !important; - } - .align-items-sm-center { - align-items: center !important; - } - .align-items-sm-baseline { - align-items: baseline !important; - } - .align-items-sm-stretch { - align-items: stretch !important; - } - .align-content-sm-start { - align-content: flex-start !important; - } - .align-content-sm-end { - align-content: flex-end !important; - } - .align-content-sm-center { - align-content: center !important; - } - .align-content-sm-between { - align-content: space-between !important; - } - .align-content-sm-around { - align-content: space-around !important; - } - .align-content-sm-stretch { - align-content: stretch !important; - } - .align-self-sm-auto { - align-self: auto !important; - } - .align-self-sm-start { - align-self: flex-start !important; - } - .align-self-sm-end { - align-self: flex-end !important; - } - .align-self-sm-center { - align-self: center !important; - } - .align-self-sm-baseline { - align-self: baseline !important; - } - .align-self-sm-stretch { - align-self: stretch !important; - } - .order-sm-first { - order: -1 !important; - } - .order-sm-0 { - order: 0 !important; - } - .order-sm-1 { - order: 1 !important; - } - .order-sm-2 { - order: 2 !important; - } - .order-sm-3 { - order: 3 !important; - } - .order-sm-4 { - order: 4 !important; - } - .order-sm-5 { - order: 5 !important; - } - .order-sm-last { - order: 6 !important; - } - .m-sm-0 { - margin: 0 !important; - } - .m-sm-1 { - margin: 0.25rem !important; - } - .m-sm-2 { - margin: 0.5rem !important; - } - .m-sm-3 { - margin: 1rem !important; - } - .m-sm-4 { - margin: 1.5rem !important; - } - .m-sm-5 { - margin: 3rem !important; - } - .m-sm-auto { - margin: auto !important; - } - .mx-sm-0 { - margin-right: 0 !important; - margin-left: 0 !important; - } - .mx-sm-1 { - margin-right: 0.25rem !important; - margin-left: 0.25rem !important; - } - .mx-sm-2 { - margin-right: 0.5rem !important; - margin-left: 0.5rem !important; - } - .mx-sm-3 { - margin-right: 1rem !important; - margin-left: 1rem !important; - } - .mx-sm-4 { - margin-right: 1.5rem !important; - margin-left: 1.5rem !important; - } - .mx-sm-5 { - margin-right: 3rem !important; - margin-left: 3rem !important; - } - .mx-sm-auto { - margin-right: auto !important; - margin-left: auto !important; - } - .my-sm-0 { - margin-top: 0 !important; - margin-bottom: 0 !important; - } - .my-sm-1 { - margin-top: 0.25rem !important; - margin-bottom: 0.25rem !important; - } - .my-sm-2 { - margin-top: 0.5rem !important; - margin-bottom: 0.5rem !important; - } - .my-sm-3 { - margin-top: 1rem !important; - margin-bottom: 1rem !important; - } - .my-sm-4 { - margin-top: 1.5rem !important; - margin-bottom: 1.5rem !important; - } - .my-sm-5 { - margin-top: 3rem !important; - margin-bottom: 3rem !important; - } - .my-sm-auto { - margin-top: auto !important; - margin-bottom: auto !important; - } - .mt-sm-0 { - margin-top: 0 !important; - } - .mt-sm-1 { - margin-top: 0.25rem !important; - } - .mt-sm-2 { - margin-top: 0.5rem !important; - } - .mt-sm-3 { - margin-top: 1rem !important; - } - .mt-sm-4 { - margin-top: 1.5rem !important; - } - .mt-sm-5 { - margin-top: 3rem !important; - } - .mt-sm-auto { - margin-top: auto !important; - } - .me-sm-0 { - margin-right: 
0 !important; - } - .me-sm-1 { - margin-right: 0.25rem !important; - } - .me-sm-2 { - margin-right: 0.5rem !important; - } - .me-sm-3 { - margin-right: 1rem !important; - } - .me-sm-4 { - margin-right: 1.5rem !important; - } - .me-sm-5 { - margin-right: 3rem !important; - } - .me-sm-auto { - margin-right: auto !important; - } - .mb-sm-0 { - margin-bottom: 0 !important; - } - .mb-sm-1 { - margin-bottom: 0.25rem !important; - } - .mb-sm-2 { - margin-bottom: 0.5rem !important; - } - .mb-sm-3 { - margin-bottom: 1rem !important; - } - .mb-sm-4 { - margin-bottom: 1.5rem !important; - } - .mb-sm-5 { - margin-bottom: 3rem !important; - } - .mb-sm-auto { - margin-bottom: auto !important; - } - .ms-sm-0 { - margin-left: 0 !important; - } - .ms-sm-1 { - margin-left: 0.25rem !important; - } - .ms-sm-2 { - margin-left: 0.5rem !important; - } - .ms-sm-3 { - margin-left: 1rem !important; - } - .ms-sm-4 { - margin-left: 1.5rem !important; - } - .ms-sm-5 { - margin-left: 3rem !important; - } - .ms-sm-auto { - margin-left: auto !important; - } - .p-sm-0 { - padding: 0 !important; - } - .p-sm-1 { - padding: 0.25rem !important; - } - .p-sm-2 { - padding: 0.5rem !important; - } - .p-sm-3 { - padding: 1rem !important; - } - .p-sm-4 { - padding: 1.5rem !important; - } - .p-sm-5 { - padding: 3rem !important; - } - .px-sm-0 { - padding-right: 0 !important; - padding-left: 0 !important; - } - .px-sm-1 { - padding-right: 0.25rem !important; - padding-left: 0.25rem !important; - } - .px-sm-2 { - padding-right: 0.5rem !important; - padding-left: 0.5rem !important; - } - .px-sm-3 { - padding-right: 1rem !important; - padding-left: 1rem !important; - } - .px-sm-4 { - padding-right: 1.5rem !important; - padding-left: 1.5rem !important; - } - .px-sm-5 { - padding-right: 3rem !important; - padding-left: 3rem !important; - } - .py-sm-0 { - padding-top: 0 !important; - padding-bottom: 0 !important; - } - .py-sm-1 { - padding-top: 0.25rem !important; - padding-bottom: 0.25rem !important; - } - .py-sm-2 { - padding-top: 0.5rem !important; - padding-bottom: 0.5rem !important; - } - .py-sm-3 { - padding-top: 1rem !important; - padding-bottom: 1rem !important; - } - .py-sm-4 { - padding-top: 1.5rem !important; - padding-bottom: 1.5rem !important; - } - .py-sm-5 { - padding-top: 3rem !important; - padding-bottom: 3rem !important; - } - .pt-sm-0 { - padding-top: 0 !important; - } - .pt-sm-1 { - padding-top: 0.25rem !important; - } - .pt-sm-2 { - padding-top: 0.5rem !important; - } - .pt-sm-3 { - padding-top: 1rem !important; - } - .pt-sm-4 { - padding-top: 1.5rem !important; - } - .pt-sm-5 { - padding-top: 3rem !important; - } - .pe-sm-0 { - padding-right: 0 !important; - } - .pe-sm-1 { - padding-right: 0.25rem !important; - } - .pe-sm-2 { - padding-right: 0.5rem !important; - } - .pe-sm-3 { - padding-right: 1rem !important; - } - .pe-sm-4 { - padding-right: 1.5rem !important; - } - .pe-sm-5 { - padding-right: 3rem !important; - } - .pb-sm-0 { - padding-bottom: 0 !important; - } - .pb-sm-1 { - padding-bottom: 0.25rem !important; - } - .pb-sm-2 { - padding-bottom: 0.5rem !important; - } - .pb-sm-3 { - padding-bottom: 1rem !important; - } - .pb-sm-4 { - padding-bottom: 1.5rem !important; - } - .pb-sm-5 { - padding-bottom: 3rem !important; - } - .ps-sm-0 { - padding-left: 0 !important; - } - .ps-sm-1 { - padding-left: 0.25rem !important; - } - .ps-sm-2 { - padding-left: 0.5rem !important; - } - .ps-sm-3 { - padding-left: 1rem !important; - } - .ps-sm-4 { - padding-left: 1.5rem !important; - } - .ps-sm-5 { - padding-left: 3rem 
!important; - } - .gap-sm-0 { - gap: 0 !important; - } - .gap-sm-1 { - gap: 0.25rem !important; - } - .gap-sm-2 { - gap: 0.5rem !important; - } - .gap-sm-3 { - gap: 1rem !important; - } - .gap-sm-4 { - gap: 1.5rem !important; - } - .gap-sm-5 { - gap: 3rem !important; - } - .text-sm-start { - text-align: left !important; - } - .text-sm-end { - text-align: right !important; - } - .text-sm-center { - text-align: center !important; - } -} -@media (min-width: 768px) { - .float-md-start { - float: left !important; - } - .float-md-end { - float: right !important; - } - .float-md-none { - float: none !important; - } - .d-md-inline { - display: inline !important; - } - .d-md-inline-block { - display: inline-block !important; - } - .d-md-block { - display: block !important; - } - .d-md-grid { - display: grid !important; - } - .d-md-table { - display: table !important; - } - .d-md-table-row { - display: table-row !important; - } - .d-md-table-cell { - display: table-cell !important; - } - .d-md-flex { - display: flex !important; - } - .d-md-inline-flex { - display: inline-flex !important; - } - .d-md-none { - display: none !important; - } - .flex-md-fill { - flex: 1 1 auto !important; - } - .flex-md-row { - flex-direction: row !important; - } - .flex-md-column { - flex-direction: column !important; - } - .flex-md-row-reverse { - flex-direction: row-reverse !important; - } - .flex-md-column-reverse { - flex-direction: column-reverse !important; - } - .flex-md-grow-0 { - flex-grow: 0 !important; - } - .flex-md-grow-1 { - flex-grow: 1 !important; - } - .flex-md-shrink-0 { - flex-shrink: 0 !important; - } - .flex-md-shrink-1 { - flex-shrink: 1 !important; - } - .flex-md-wrap { - flex-wrap: wrap !important; - } - .flex-md-nowrap { - flex-wrap: nowrap !important; - } - .flex-md-wrap-reverse { - flex-wrap: wrap-reverse !important; - } - .justify-content-md-start { - justify-content: flex-start !important; - } - .justify-content-md-end { - justify-content: flex-end !important; - } - .justify-content-md-center { - justify-content: center !important; - } - .justify-content-md-between { - justify-content: space-between !important; - } - .justify-content-md-around { - justify-content: space-around !important; - } - .justify-content-md-evenly { - justify-content: space-evenly !important; - } - .align-items-md-start { - align-items: flex-start !important; - } - .align-items-md-end { - align-items: flex-end !important; - } - .align-items-md-center { - align-items: center !important; - } - .align-items-md-baseline { - align-items: baseline !important; - } - .align-items-md-stretch { - align-items: stretch !important; - } - .align-content-md-start { - align-content: flex-start !important; - } - .align-content-md-end { - align-content: flex-end !important; - } - .align-content-md-center { - align-content: center !important; - } - .align-content-md-between { - align-content: space-between !important; - } - .align-content-md-around { - align-content: space-around !important; - } - .align-content-md-stretch { - align-content: stretch !important; - } - .align-self-md-auto { - align-self: auto !important; - } - .align-self-md-start { - align-self: flex-start !important; - } - .align-self-md-end { - align-self: flex-end !important; - } - .align-self-md-center { - align-self: center !important; - } - .align-self-md-baseline { - align-self: baseline !important; - } - .align-self-md-stretch { - align-self: stretch !important; - } - .order-md-first { - order: -1 !important; - } - .order-md-0 { - order: 0 !important; - } - 
.order-md-1 { - order: 1 !important; - } - .order-md-2 { - order: 2 !important; - } - .order-md-3 { - order: 3 !important; - } - .order-md-4 { - order: 4 !important; - } - .order-md-5 { - order: 5 !important; - } - .order-md-last { - order: 6 !important; - } - .m-md-0 { - margin: 0 !important; - } - .m-md-1 { - margin: 0.25rem !important; - } - .m-md-2 { - margin: 0.5rem !important; - } - .m-md-3 { - margin: 1rem !important; - } - .m-md-4 { - margin: 1.5rem !important; - } - .m-md-5 { - margin: 3rem !important; - } - .m-md-auto { - margin: auto !important; - } - .mx-md-0 { - margin-right: 0 !important; - margin-left: 0 !important; - } - .mx-md-1 { - margin-right: 0.25rem !important; - margin-left: 0.25rem !important; - } - .mx-md-2 { - margin-right: 0.5rem !important; - margin-left: 0.5rem !important; - } - .mx-md-3 { - margin-right: 1rem !important; - margin-left: 1rem !important; - } - .mx-md-4 { - margin-right: 1.5rem !important; - margin-left: 1.5rem !important; - } - .mx-md-5 { - margin-right: 3rem !important; - margin-left: 3rem !important; - } - .mx-md-auto { - margin-right: auto !important; - margin-left: auto !important; - } - .my-md-0 { - margin-top: 0 !important; - margin-bottom: 0 !important; - } - .my-md-1 { - margin-top: 0.25rem !important; - margin-bottom: 0.25rem !important; - } - .my-md-2 { - margin-top: 0.5rem !important; - margin-bottom: 0.5rem !important; - } - .my-md-3 { - margin-top: 1rem !important; - margin-bottom: 1rem !important; - } - .my-md-4 { - margin-top: 1.5rem !important; - margin-bottom: 1.5rem !important; - } - .my-md-5 { - margin-top: 3rem !important; - margin-bottom: 3rem !important; - } - .my-md-auto { - margin-top: auto !important; - margin-bottom: auto !important; - } - .mt-md-0 { - margin-top: 0 !important; - } - .mt-md-1 { - margin-top: 0.25rem !important; - } - .mt-md-2 { - margin-top: 0.5rem !important; - } - .mt-md-3 { - margin-top: 1rem !important; - } - .mt-md-4 { - margin-top: 1.5rem !important; - } - .mt-md-5 { - margin-top: 3rem !important; - } - .mt-md-auto { - margin-top: auto !important; - } - .me-md-0 { - margin-right: 0 !important; - } - .me-md-1 { - margin-right: 0.25rem !important; - } - .me-md-2 { - margin-right: 0.5rem !important; - } - .me-md-3 { - margin-right: 1rem !important; - } - .me-md-4 { - margin-right: 1.5rem !important; - } - .me-md-5 { - margin-right: 3rem !important; - } - .me-md-auto { - margin-right: auto !important; - } - .mb-md-0 { - margin-bottom: 0 !important; - } - .mb-md-1 { - margin-bottom: 0.25rem !important; - } - .mb-md-2 { - margin-bottom: 0.5rem !important; - } - .mb-md-3 { - margin-bottom: 1rem !important; - } - .mb-md-4 { - margin-bottom: 1.5rem !important; - } - .mb-md-5 { - margin-bottom: 3rem !important; - } - .mb-md-auto { - margin-bottom: auto !important; - } - .ms-md-0 { - margin-left: 0 !important; - } - .ms-md-1 { - margin-left: 0.25rem !important; - } - .ms-md-2 { - margin-left: 0.5rem !important; - } - .ms-md-3 { - margin-left: 1rem !important; - } - .ms-md-4 { - margin-left: 1.5rem !important; - } - .ms-md-5 { - margin-left: 3rem !important; - } - .ms-md-auto { - margin-left: auto !important; - } - .p-md-0 { - padding: 0 !important; - } - .p-md-1 { - padding: 0.25rem !important; - } - .p-md-2 { - padding: 0.5rem !important; - } - .p-md-3 { - padding: 1rem !important; - } - .p-md-4 { - padding: 1.5rem !important; - } - .p-md-5 { - padding: 3rem !important; - } - .px-md-0 { - padding-right: 0 !important; - padding-left: 0 !important; - } - .px-md-1 { - padding-right: 0.25rem !important; - 
padding-left: 0.25rem !important; - } - .px-md-2 { - padding-right: 0.5rem !important; - padding-left: 0.5rem !important; - } - .px-md-3 { - padding-right: 1rem !important; - padding-left: 1rem !important; - } - .px-md-4 { - padding-right: 1.5rem !important; - padding-left: 1.5rem !important; - } - .px-md-5 { - padding-right: 3rem !important; - padding-left: 3rem !important; - } - .py-md-0 { - padding-top: 0 !important; - padding-bottom: 0 !important; - } - .py-md-1 { - padding-top: 0.25rem !important; - padding-bottom: 0.25rem !important; - } - .py-md-2 { - padding-top: 0.5rem !important; - padding-bottom: 0.5rem !important; - } - .py-md-3 { - padding-top: 1rem !important; - padding-bottom: 1rem !important; - } - .py-md-4 { - padding-top: 1.5rem !important; - padding-bottom: 1.5rem !important; - } - .py-md-5 { - padding-top: 3rem !important; - padding-bottom: 3rem !important; - } - .pt-md-0 { - padding-top: 0 !important; - } - .pt-md-1 { - padding-top: 0.25rem !important; - } - .pt-md-2 { - padding-top: 0.5rem !important; - } - .pt-md-3 { - padding-top: 1rem !important; - } - .pt-md-4 { - padding-top: 1.5rem !important; - } - .pt-md-5 { - padding-top: 3rem !important; - } - .pe-md-0 { - padding-right: 0 !important; - } - .pe-md-1 { - padding-right: 0.25rem !important; - } - .pe-md-2 { - padding-right: 0.5rem !important; - } - .pe-md-3 { - padding-right: 1rem !important; - } - .pe-md-4 { - padding-right: 1.5rem !important; - } - .pe-md-5 { - padding-right: 3rem !important; - } - .pb-md-0 { - padding-bottom: 0 !important; - } - .pb-md-1 { - padding-bottom: 0.25rem !important; - } - .pb-md-2 { - padding-bottom: 0.5rem !important; - } - .pb-md-3 { - padding-bottom: 1rem !important; - } - .pb-md-4 { - padding-bottom: 1.5rem !important; - } - .pb-md-5 { - padding-bottom: 3rem !important; - } - .ps-md-0 { - padding-left: 0 !important; - } - .ps-md-1 { - padding-left: 0.25rem !important; - } - .ps-md-2 { - padding-left: 0.5rem !important; - } - .ps-md-3 { - padding-left: 1rem !important; - } - .ps-md-4 { - padding-left: 1.5rem !important; - } - .ps-md-5 { - padding-left: 3rem !important; - } - .gap-md-0 { - gap: 0 !important; - } - .gap-md-1 { - gap: 0.25rem !important; - } - .gap-md-2 { - gap: 0.5rem !important; - } - .gap-md-3 { - gap: 1rem !important; - } - .gap-md-4 { - gap: 1.5rem !important; - } - .gap-md-5 { - gap: 3rem !important; - } - .text-md-start { - text-align: left !important; - } - .text-md-end { - text-align: right !important; - } - .text-md-center { - text-align: center !important; - } -} -@media (min-width: 992px) { - .float-lg-start { - float: left !important; - } - .float-lg-end { - float: right !important; - } - .float-lg-none { - float: none !important; - } - .d-lg-inline { - display: inline !important; - } - .d-lg-inline-block { - display: inline-block !important; - } - .d-lg-block { - display: block !important; - } - .d-lg-grid { - display: grid !important; - } - .d-lg-table { - display: table !important; - } - .d-lg-table-row { - display: table-row !important; - } - .d-lg-table-cell { - display: table-cell !important; - } - .d-lg-flex { - display: flex !important; - } - .d-lg-inline-flex { - display: inline-flex !important; - } - .d-lg-none { - display: none !important; - } - .flex-lg-fill { - flex: 1 1 auto !important; - } - .flex-lg-row { - flex-direction: row !important; - } - .flex-lg-column { - flex-direction: column !important; - } - .flex-lg-row-reverse { - flex-direction: row-reverse !important; - } - .flex-lg-column-reverse { - flex-direction: column-reverse 
!important; - } - .flex-lg-grow-0 { - flex-grow: 0 !important; - } - .flex-lg-grow-1 { - flex-grow: 1 !important; - } - .flex-lg-shrink-0 { - flex-shrink: 0 !important; - } - .flex-lg-shrink-1 { - flex-shrink: 1 !important; - } - .flex-lg-wrap { - flex-wrap: wrap !important; - } - .flex-lg-nowrap { - flex-wrap: nowrap !important; - } - .flex-lg-wrap-reverse { - flex-wrap: wrap-reverse !important; - } - .justify-content-lg-start { - justify-content: flex-start !important; - } - .justify-content-lg-end { - justify-content: flex-end !important; - } - .justify-content-lg-center { - justify-content: center !important; - } - .justify-content-lg-between { - justify-content: space-between !important; - } - .justify-content-lg-around { - justify-content: space-around !important; - } - .justify-content-lg-evenly { - justify-content: space-evenly !important; - } - .align-items-lg-start { - align-items: flex-start !important; - } - .align-items-lg-end { - align-items: flex-end !important; - } - .align-items-lg-center { - align-items: center !important; - } - .align-items-lg-baseline { - align-items: baseline !important; - } - .align-items-lg-stretch { - align-items: stretch !important; - } - .align-content-lg-start { - align-content: flex-start !important; - } - .align-content-lg-end { - align-content: flex-end !important; - } - .align-content-lg-center { - align-content: center !important; - } - .align-content-lg-between { - align-content: space-between !important; - } - .align-content-lg-around { - align-content: space-around !important; - } - .align-content-lg-stretch { - align-content: stretch !important; - } - .align-self-lg-auto { - align-self: auto !important; - } - .align-self-lg-start { - align-self: flex-start !important; - } - .align-self-lg-end { - align-self: flex-end !important; - } - .align-self-lg-center { - align-self: center !important; - } - .align-self-lg-baseline { - align-self: baseline !important; - } - .align-self-lg-stretch { - align-self: stretch !important; - } - .order-lg-first { - order: -1 !important; - } - .order-lg-0 { - order: 0 !important; - } - .order-lg-1 { - order: 1 !important; - } - .order-lg-2 { - order: 2 !important; - } - .order-lg-3 { - order: 3 !important; - } - .order-lg-4 { - order: 4 !important; - } - .order-lg-5 { - order: 5 !important; - } - .order-lg-last { - order: 6 !important; - } - .m-lg-0 { - margin: 0 !important; - } - .m-lg-1 { - margin: 0.25rem !important; - } - .m-lg-2 { - margin: 0.5rem !important; - } - .m-lg-3 { - margin: 1rem !important; - } - .m-lg-4 { - margin: 1.5rem !important; - } - .m-lg-5 { - margin: 3rem !important; - } - .m-lg-auto { - margin: auto !important; - } - .mx-lg-0 { - margin-right: 0 !important; - margin-left: 0 !important; - } - .mx-lg-1 { - margin-right: 0.25rem !important; - margin-left: 0.25rem !important; - } - .mx-lg-2 { - margin-right: 0.5rem !important; - margin-left: 0.5rem !important; - } - .mx-lg-3 { - margin-right: 1rem !important; - margin-left: 1rem !important; - } - .mx-lg-4 { - margin-right: 1.5rem !important; - margin-left: 1.5rem !important; - } - .mx-lg-5 { - margin-right: 3rem !important; - margin-left: 3rem !important; - } - .mx-lg-auto { - margin-right: auto !important; - margin-left: auto !important; - } - .my-lg-0 { - margin-top: 0 !important; - margin-bottom: 0 !important; - } - .my-lg-1 { - margin-top: 0.25rem !important; - margin-bottom: 0.25rem !important; - } - .my-lg-2 { - margin-top: 0.5rem !important; - margin-bottom: 0.5rem !important; - } - .my-lg-3 { - margin-top: 1rem !important; - 
margin-bottom: 1rem !important; - } - .my-lg-4 { - margin-top: 1.5rem !important; - margin-bottom: 1.5rem !important; - } - .my-lg-5 { - margin-top: 3rem !important; - margin-bottom: 3rem !important; - } - .my-lg-auto { - margin-top: auto !important; - margin-bottom: auto !important; - } - .mt-lg-0 { - margin-top: 0 !important; - } - .mt-lg-1 { - margin-top: 0.25rem !important; - } - .mt-lg-2 { - margin-top: 0.5rem !important; - } - .mt-lg-3 { - margin-top: 1rem !important; - } - .mt-lg-4 { - margin-top: 1.5rem !important; - } - .mt-lg-5 { - margin-top: 3rem !important; - } - .mt-lg-auto { - margin-top: auto !important; - } - .me-lg-0 { - margin-right: 0 !important; - } - .me-lg-1 { - margin-right: 0.25rem !important; - } - .me-lg-2 { - margin-right: 0.5rem !important; - } - .me-lg-3 { - margin-right: 1rem !important; - } - .me-lg-4 { - margin-right: 1.5rem !important; - } - .me-lg-5 { - margin-right: 3rem !important; - } - .me-lg-auto { - margin-right: auto !important; - } - .mb-lg-0 { - margin-bottom: 0 !important; - } - .mb-lg-1 { - margin-bottom: 0.25rem !important; - } - .mb-lg-2 { - margin-bottom: 0.5rem !important; - } - .mb-lg-3 { - margin-bottom: 1rem !important; - } - .mb-lg-4 { - margin-bottom: 1.5rem !important; - } - .mb-lg-5 { - margin-bottom: 3rem !important; - } - .mb-lg-auto { - margin-bottom: auto !important; - } - .ms-lg-0 { - margin-left: 0 !important; - } - .ms-lg-1 { - margin-left: 0.25rem !important; - } - .ms-lg-2 { - margin-left: 0.5rem !important; - } - .ms-lg-3 { - margin-left: 1rem !important; - } - .ms-lg-4 { - margin-left: 1.5rem !important; - } - .ms-lg-5 { - margin-left: 3rem !important; - } - .ms-lg-auto { - margin-left: auto !important; - } - .p-lg-0 { - padding: 0 !important; - } - .p-lg-1 { - padding: 0.25rem !important; - } - .p-lg-2 { - padding: 0.5rem !important; - } - .p-lg-3 { - padding: 1rem !important; - } - .p-lg-4 { - padding: 1.5rem !important; - } - .p-lg-5 { - padding: 3rem !important; - } - .px-lg-0 { - padding-right: 0 !important; - padding-left: 0 !important; - } - .px-lg-1 { - padding-right: 0.25rem !important; - padding-left: 0.25rem !important; - } - .px-lg-2 { - padding-right: 0.5rem !important; - padding-left: 0.5rem !important; - } - .px-lg-3 { - padding-right: 1rem !important; - padding-left: 1rem !important; - } - .px-lg-4 { - padding-right: 1.5rem !important; - padding-left: 1.5rem !important; - } - .px-lg-5 { - padding-right: 3rem !important; - padding-left: 3rem !important; - } - .py-lg-0 { - padding-top: 0 !important; - padding-bottom: 0 !important; - } - .py-lg-1 { - padding-top: 0.25rem !important; - padding-bottom: 0.25rem !important; - } - .py-lg-2 { - padding-top: 0.5rem !important; - padding-bottom: 0.5rem !important; - } - .py-lg-3 { - padding-top: 1rem !important; - padding-bottom: 1rem !important; - } - .py-lg-4 { - padding-top: 1.5rem !important; - padding-bottom: 1.5rem !important; - } - .py-lg-5 { - padding-top: 3rem !important; - padding-bottom: 3rem !important; - } - .pt-lg-0 { - padding-top: 0 !important; - } - .pt-lg-1 { - padding-top: 0.25rem !important; - } - .pt-lg-2 { - padding-top: 0.5rem !important; - } - .pt-lg-3 { - padding-top: 1rem !important; - } - .pt-lg-4 { - padding-top: 1.5rem !important; - } - .pt-lg-5 { - padding-top: 3rem !important; - } - .pe-lg-0 { - padding-right: 0 !important; - } - .pe-lg-1 { - padding-right: 0.25rem !important; - } - .pe-lg-2 { - padding-right: 0.5rem !important; - } - .pe-lg-3 { - padding-right: 1rem !important; - } - .pe-lg-4 { - padding-right: 1.5rem !important; - } - 
.pe-lg-5 { - padding-right: 3rem !important; - } - .pb-lg-0 { - padding-bottom: 0 !important; - } - .pb-lg-1 { - padding-bottom: 0.25rem !important; - } - .pb-lg-2 { - padding-bottom: 0.5rem !important; - } - .pb-lg-3 { - padding-bottom: 1rem !important; - } - .pb-lg-4 { - padding-bottom: 1.5rem !important; - } - .pb-lg-5 { - padding-bottom: 3rem !important; - } - .ps-lg-0 { - padding-left: 0 !important; - } - .ps-lg-1 { - padding-left: 0.25rem !important; - } - .ps-lg-2 { - padding-left: 0.5rem !important; - } - .ps-lg-3 { - padding-left: 1rem !important; - } - .ps-lg-4 { - padding-left: 1.5rem !important; - } - .ps-lg-5 { - padding-left: 3rem !important; - } - .gap-lg-0 { - gap: 0 !important; - } - .gap-lg-1 { - gap: 0.25rem !important; - } - .gap-lg-2 { - gap: 0.5rem !important; - } - .gap-lg-3 { - gap: 1rem !important; - } - .gap-lg-4 { - gap: 1.5rem !important; - } - .gap-lg-5 { - gap: 3rem !important; - } - .text-lg-start { - text-align: left !important; - } - .text-lg-end { - text-align: right !important; - } - .text-lg-center { - text-align: center !important; - } -} -@media (min-width: 1200px) { - .float-xl-start { - float: left !important; - } - .float-xl-end { - float: right !important; - } - .float-xl-none { - float: none !important; - } - .d-xl-inline { - display: inline !important; - } - .d-xl-inline-block { - display: inline-block !important; - } - .d-xl-block { - display: block !important; - } - .d-xl-grid { - display: grid !important; - } - .d-xl-table { - display: table !important; - } - .d-xl-table-row { - display: table-row !important; - } - .d-xl-table-cell { - display: table-cell !important; - } - .d-xl-flex { - display: flex !important; - } - .d-xl-inline-flex { - display: inline-flex !important; - } - .d-xl-none { - display: none !important; - } - .flex-xl-fill { - flex: 1 1 auto !important; - } - .flex-xl-row { - flex-direction: row !important; - } - .flex-xl-column { - flex-direction: column !important; - } - .flex-xl-row-reverse { - flex-direction: row-reverse !important; - } - .flex-xl-column-reverse { - flex-direction: column-reverse !important; - } - .flex-xl-grow-0 { - flex-grow: 0 !important; - } - .flex-xl-grow-1 { - flex-grow: 1 !important; - } - .flex-xl-shrink-0 { - flex-shrink: 0 !important; - } - .flex-xl-shrink-1 { - flex-shrink: 1 !important; - } - .flex-xl-wrap { - flex-wrap: wrap !important; - } - .flex-xl-nowrap { - flex-wrap: nowrap !important; - } - .flex-xl-wrap-reverse { - flex-wrap: wrap-reverse !important; - } - .justify-content-xl-start { - justify-content: flex-start !important; - } - .justify-content-xl-end { - justify-content: flex-end !important; - } - .justify-content-xl-center { - justify-content: center !important; - } - .justify-content-xl-between { - justify-content: space-between !important; - } - .justify-content-xl-around { - justify-content: space-around !important; - } - .justify-content-xl-evenly { - justify-content: space-evenly !important; - } - .align-items-xl-start { - align-items: flex-start !important; - } - .align-items-xl-end { - align-items: flex-end !important; - } - .align-items-xl-center { - align-items: center !important; - } - .align-items-xl-baseline { - align-items: baseline !important; - } - .align-items-xl-stretch { - align-items: stretch !important; - } - .align-content-xl-start { - align-content: flex-start !important; - } - .align-content-xl-end { - align-content: flex-end !important; - } - .align-content-xl-center { - align-content: center !important; - } - .align-content-xl-between { - align-content: 
space-between !important; - } - .align-content-xl-around { - align-content: space-around !important; - } - .align-content-xl-stretch { - align-content: stretch !important; - } - .align-self-xl-auto { - align-self: auto !important; - } - .align-self-xl-start { - align-self: flex-start !important; - } - .align-self-xl-end { - align-self: flex-end !important; - } - .align-self-xl-center { - align-self: center !important; - } - .align-self-xl-baseline { - align-self: baseline !important; - } - .align-self-xl-stretch { - align-self: stretch !important; - } - .order-xl-first { - order: -1 !important; - } - .order-xl-0 { - order: 0 !important; - } - .order-xl-1 { - order: 1 !important; - } - .order-xl-2 { - order: 2 !important; - } - .order-xl-3 { - order: 3 !important; - } - .order-xl-4 { - order: 4 !important; - } - .order-xl-5 { - order: 5 !important; - } - .order-xl-last { - order: 6 !important; - } - .m-xl-0 { - margin: 0 !important; - } - .m-xl-1 { - margin: 0.25rem !important; - } - .m-xl-2 { - margin: 0.5rem !important; - } - .m-xl-3 { - margin: 1rem !important; - } - .m-xl-4 { - margin: 1.5rem !important; - } - .m-xl-5 { - margin: 3rem !important; - } - .m-xl-auto { - margin: auto !important; - } - .mx-xl-0 { - margin-right: 0 !important; - margin-left: 0 !important; - } - .mx-xl-1 { - margin-right: 0.25rem !important; - margin-left: 0.25rem !important; - } - .mx-xl-2 { - margin-right: 0.5rem !important; - margin-left: 0.5rem !important; - } - .mx-xl-3 { - margin-right: 1rem !important; - margin-left: 1rem !important; - } - .mx-xl-4 { - margin-right: 1.5rem !important; - margin-left: 1.5rem !important; - } - .mx-xl-5 { - margin-right: 3rem !important; - margin-left: 3rem !important; - } - .mx-xl-auto { - margin-right: auto !important; - margin-left: auto !important; - } - .my-xl-0 { - margin-top: 0 !important; - margin-bottom: 0 !important; - } - .my-xl-1 { - margin-top: 0.25rem !important; - margin-bottom: 0.25rem !important; - } - .my-xl-2 { - margin-top: 0.5rem !important; - margin-bottom: 0.5rem !important; - } - .my-xl-3 { - margin-top: 1rem !important; - margin-bottom: 1rem !important; - } - .my-xl-4 { - margin-top: 1.5rem !important; - margin-bottom: 1.5rem !important; - } - .my-xl-5 { - margin-top: 3rem !important; - margin-bottom: 3rem !important; - } - .my-xl-auto { - margin-top: auto !important; - margin-bottom: auto !important; - } - .mt-xl-0 { - margin-top: 0 !important; - } - .mt-xl-1 { - margin-top: 0.25rem !important; - } - .mt-xl-2 { - margin-top: 0.5rem !important; - } - .mt-xl-3 { - margin-top: 1rem !important; - } - .mt-xl-4 { - margin-top: 1.5rem !important; - } - .mt-xl-5 { - margin-top: 3rem !important; - } - .mt-xl-auto { - margin-top: auto !important; - } - .me-xl-0 { - margin-right: 0 !important; - } - .me-xl-1 { - margin-right: 0.25rem !important; - } - .me-xl-2 { - margin-right: 0.5rem !important; - } - .me-xl-3 { - margin-right: 1rem !important; - } - .me-xl-4 { - margin-right: 1.5rem !important; - } - .me-xl-5 { - margin-right: 3rem !important; - } - .me-xl-auto { - margin-right: auto !important; - } - .mb-xl-0 { - margin-bottom: 0 !important; - } - .mb-xl-1 { - margin-bottom: 0.25rem !important; - } - .mb-xl-2 { - margin-bottom: 0.5rem !important; - } - .mb-xl-3 { - margin-bottom: 1rem !important; - } - .mb-xl-4 { - margin-bottom: 1.5rem !important; - } - .mb-xl-5 { - margin-bottom: 3rem !important; - } - .mb-xl-auto { - margin-bottom: auto !important; - } - .ms-xl-0 { - margin-left: 0 !important; - } - .ms-xl-1 { - margin-left: 0.25rem !important; - } - 
.ms-xl-2 { - margin-left: 0.5rem !important; - } - .ms-xl-3 { - margin-left: 1rem !important; - } - .ms-xl-4 { - margin-left: 1.5rem !important; - } - .ms-xl-5 { - margin-left: 3rem !important; - } - .ms-xl-auto { - margin-left: auto !important; - } - .p-xl-0 { - padding: 0 !important; - } - .p-xl-1 { - padding: 0.25rem !important; - } - .p-xl-2 { - padding: 0.5rem !important; - } - .p-xl-3 { - padding: 1rem !important; - } - .p-xl-4 { - padding: 1.5rem !important; - } - .p-xl-5 { - padding: 3rem !important; - } - .px-xl-0 { - padding-right: 0 !important; - padding-left: 0 !important; - } - .px-xl-1 { - padding-right: 0.25rem !important; - padding-left: 0.25rem !important; - } - .px-xl-2 { - padding-right: 0.5rem !important; - padding-left: 0.5rem !important; - } - .px-xl-3 { - padding-right: 1rem !important; - padding-left: 1rem !important; - } - .px-xl-4 { - padding-right: 1.5rem !important; - padding-left: 1.5rem !important; - } - .px-xl-5 { - padding-right: 3rem !important; - padding-left: 3rem !important; - } - .py-xl-0 { - padding-top: 0 !important; - padding-bottom: 0 !important; - } - .py-xl-1 { - padding-top: 0.25rem !important; - padding-bottom: 0.25rem !important; - } - .py-xl-2 { - padding-top: 0.5rem !important; - padding-bottom: 0.5rem !important; - } - .py-xl-3 { - padding-top: 1rem !important; - padding-bottom: 1rem !important; - } - .py-xl-4 { - padding-top: 1.5rem !important; - padding-bottom: 1.5rem !important; - } - .py-xl-5 { - padding-top: 3rem !important; - padding-bottom: 3rem !important; - } - .pt-xl-0 { - padding-top: 0 !important; - } - .pt-xl-1 { - padding-top: 0.25rem !important; - } - .pt-xl-2 { - padding-top: 0.5rem !important; - } - .pt-xl-3 { - padding-top: 1rem !important; - } - .pt-xl-4 { - padding-top: 1.5rem !important; - } - .pt-xl-5 { - padding-top: 3rem !important; - } - .pe-xl-0 { - padding-right: 0 !important; - } - .pe-xl-1 { - padding-right: 0.25rem !important; - } - .pe-xl-2 { - padding-right: 0.5rem !important; - } - .pe-xl-3 { - padding-right: 1rem !important; - } - .pe-xl-4 { - padding-right: 1.5rem !important; - } - .pe-xl-5 { - padding-right: 3rem !important; - } - .pb-xl-0 { - padding-bottom: 0 !important; - } - .pb-xl-1 { - padding-bottom: 0.25rem !important; - } - .pb-xl-2 { - padding-bottom: 0.5rem !important; - } - .pb-xl-3 { - padding-bottom: 1rem !important; - } - .pb-xl-4 { - padding-bottom: 1.5rem !important; - } - .pb-xl-5 { - padding-bottom: 3rem !important; - } - .ps-xl-0 { - padding-left: 0 !important; - } - .ps-xl-1 { - padding-left: 0.25rem !important; - } - .ps-xl-2 { - padding-left: 0.5rem !important; - } - .ps-xl-3 { - padding-left: 1rem !important; - } - .ps-xl-4 { - padding-left: 1.5rem !important; - } - .ps-xl-5 { - padding-left: 3rem !important; - } - .gap-xl-0 { - gap: 0 !important; - } - .gap-xl-1 { - gap: 0.25rem !important; - } - .gap-xl-2 { - gap: 0.5rem !important; - } - .gap-xl-3 { - gap: 1rem !important; - } - .gap-xl-4 { - gap: 1.5rem !important; - } - .gap-xl-5 { - gap: 3rem !important; - } - .text-xl-start { - text-align: left !important; - } - .text-xl-end { - text-align: right !important; - } - .text-xl-center { - text-align: center !important; - } -} -@media (min-width: 1400px) { - .float-xxl-start { - float: left !important; - } - .float-xxl-end { - float: right !important; - } - .float-xxl-none { - float: none !important; - } - .d-xxl-inline { - display: inline !important; - } - .d-xxl-inline-block { - display: inline-block !important; - } - .d-xxl-block { - display: block !important; - } - .d-xxl-grid 
{ - display: grid !important; - } - .d-xxl-table { - display: table !important; - } - .d-xxl-table-row { - display: table-row !important; - } - .d-xxl-table-cell { - display: table-cell !important; - } - .d-xxl-flex { - display: flex !important; - } - .d-xxl-inline-flex { - display: inline-flex !important; - } - .d-xxl-none { - display: none !important; - } - .flex-xxl-fill { - flex: 1 1 auto !important; - } - .flex-xxl-row { - flex-direction: row !important; - } - .flex-xxl-column { - flex-direction: column !important; - } - .flex-xxl-row-reverse { - flex-direction: row-reverse !important; - } - .flex-xxl-column-reverse { - flex-direction: column-reverse !important; - } - .flex-xxl-grow-0 { - flex-grow: 0 !important; - } - .flex-xxl-grow-1 { - flex-grow: 1 !important; - } - .flex-xxl-shrink-0 { - flex-shrink: 0 !important; - } - .flex-xxl-shrink-1 { - flex-shrink: 1 !important; - } - .flex-xxl-wrap { - flex-wrap: wrap !important; - } - .flex-xxl-nowrap { - flex-wrap: nowrap !important; - } - .flex-xxl-wrap-reverse { - flex-wrap: wrap-reverse !important; - } - .justify-content-xxl-start { - justify-content: flex-start !important; - } - .justify-content-xxl-end { - justify-content: flex-end !important; - } - .justify-content-xxl-center { - justify-content: center !important; - } - .justify-content-xxl-between { - justify-content: space-between !important; - } - .justify-content-xxl-around { - justify-content: space-around !important; - } - .justify-content-xxl-evenly { - justify-content: space-evenly !important; - } - .align-items-xxl-start { - align-items: flex-start !important; - } - .align-items-xxl-end { - align-items: flex-end !important; - } - .align-items-xxl-center { - align-items: center !important; - } - .align-items-xxl-baseline { - align-items: baseline !important; - } - .align-items-xxl-stretch { - align-items: stretch !important; - } - .align-content-xxl-start { - align-content: flex-start !important; - } - .align-content-xxl-end { - align-content: flex-end !important; - } - .align-content-xxl-center { - align-content: center !important; - } - .align-content-xxl-between { - align-content: space-between !important; - } - .align-content-xxl-around { - align-content: space-around !important; - } - .align-content-xxl-stretch { - align-content: stretch !important; - } - .align-self-xxl-auto { - align-self: auto !important; - } - .align-self-xxl-start { - align-self: flex-start !important; - } - .align-self-xxl-end { - align-self: flex-end !important; - } - .align-self-xxl-center { - align-self: center !important; - } - .align-self-xxl-baseline { - align-self: baseline !important; - } - .align-self-xxl-stretch { - align-self: stretch !important; - } - .order-xxl-first { - order: -1 !important; - } - .order-xxl-0 { - order: 0 !important; - } - .order-xxl-1 { - order: 1 !important; - } - .order-xxl-2 { - order: 2 !important; - } - .order-xxl-3 { - order: 3 !important; - } - .order-xxl-4 { - order: 4 !important; - } - .order-xxl-5 { - order: 5 !important; - } - .order-xxl-last { - order: 6 !important; - } - .m-xxl-0 { - margin: 0 !important; - } - .m-xxl-1 { - margin: 0.25rem !important; - } - .m-xxl-2 { - margin: 0.5rem !important; - } - .m-xxl-3 { - margin: 1rem !important; - } - .m-xxl-4 { - margin: 1.5rem !important; - } - .m-xxl-5 { - margin: 3rem !important; - } - .m-xxl-auto { - margin: auto !important; - } - .mx-xxl-0 { - margin-right: 0 !important; - margin-left: 0 !important; - } - .mx-xxl-1 { - margin-right: 0.25rem !important; - margin-left: 0.25rem !important; - } - 
.mx-xxl-2 { - margin-right: 0.5rem !important; - margin-left: 0.5rem !important; - } - .mx-xxl-3 { - margin-right: 1rem !important; - margin-left: 1rem !important; - } - .mx-xxl-4 { - margin-right: 1.5rem !important; - margin-left: 1.5rem !important; - } - .mx-xxl-5 { - margin-right: 3rem !important; - margin-left: 3rem !important; - } - .mx-xxl-auto { - margin-right: auto !important; - margin-left: auto !important; - } - .my-xxl-0 { - margin-top: 0 !important; - margin-bottom: 0 !important; - } - .my-xxl-1 { - margin-top: 0.25rem !important; - margin-bottom: 0.25rem !important; - } - .my-xxl-2 { - margin-top: 0.5rem !important; - margin-bottom: 0.5rem !important; - } - .my-xxl-3 { - margin-top: 1rem !important; - margin-bottom: 1rem !important; - } - .my-xxl-4 { - margin-top: 1.5rem !important; - margin-bottom: 1.5rem !important; - } - .my-xxl-5 { - margin-top: 3rem !important; - margin-bottom: 3rem !important; - } - .my-xxl-auto { - margin-top: auto !important; - margin-bottom: auto !important; - } - .mt-xxl-0 { - margin-top: 0 !important; - } - .mt-xxl-1 { - margin-top: 0.25rem !important; - } - .mt-xxl-2 { - margin-top: 0.5rem !important; - } - .mt-xxl-3 { - margin-top: 1rem !important; - } - .mt-xxl-4 { - margin-top: 1.5rem !important; - } - .mt-xxl-5 { - margin-top: 3rem !important; - } - .mt-xxl-auto { - margin-top: auto !important; - } - .me-xxl-0 { - margin-right: 0 !important; - } - .me-xxl-1 { - margin-right: 0.25rem !important; - } - .me-xxl-2 { - margin-right: 0.5rem !important; - } - .me-xxl-3 { - margin-right: 1rem !important; - } - .me-xxl-4 { - margin-right: 1.5rem !important; - } - .me-xxl-5 { - margin-right: 3rem !important; - } - .me-xxl-auto { - margin-right: auto !important; - } - .mb-xxl-0 { - margin-bottom: 0 !important; - } - .mb-xxl-1 { - margin-bottom: 0.25rem !important; - } - .mb-xxl-2 { - margin-bottom: 0.5rem !important; - } - .mb-xxl-3 { - margin-bottom: 1rem !important; - } - .mb-xxl-4 { - margin-bottom: 1.5rem !important; - } - .mb-xxl-5 { - margin-bottom: 3rem !important; - } - .mb-xxl-auto { - margin-bottom: auto !important; - } - .ms-xxl-0 { - margin-left: 0 !important; - } - .ms-xxl-1 { - margin-left: 0.25rem !important; - } - .ms-xxl-2 { - margin-left: 0.5rem !important; - } - .ms-xxl-3 { - margin-left: 1rem !important; - } - .ms-xxl-4 { - margin-left: 1.5rem !important; - } - .ms-xxl-5 { - margin-left: 3rem !important; - } - .ms-xxl-auto { - margin-left: auto !important; - } - .p-xxl-0 { - padding: 0 !important; - } - .p-xxl-1 { - padding: 0.25rem !important; - } - .p-xxl-2 { - padding: 0.5rem !important; - } - .p-xxl-3 { - padding: 1rem !important; - } - .p-xxl-4 { - padding: 1.5rem !important; - } - .p-xxl-5 { - padding: 3rem !important; - } - .px-xxl-0 { - padding-right: 0 !important; - padding-left: 0 !important; - } - .px-xxl-1 { - padding-right: 0.25rem !important; - padding-left: 0.25rem !important; - } - .px-xxl-2 { - padding-right: 0.5rem !important; - padding-left: 0.5rem !important; - } - .px-xxl-3 { - padding-right: 1rem !important; - padding-left: 1rem !important; - } - .px-xxl-4 { - padding-right: 1.5rem !important; - padding-left: 1.5rem !important; - } - .px-xxl-5 { - padding-right: 3rem !important; - padding-left: 3rem !important; - } - .py-xxl-0 { - padding-top: 0 !important; - padding-bottom: 0 !important; - } - .py-xxl-1 { - padding-top: 0.25rem !important; - padding-bottom: 0.25rem !important; - } - .py-xxl-2 { - padding-top: 0.5rem !important; - padding-bottom: 0.5rem !important; - } - .py-xxl-3 { - padding-top: 1rem !important; 
- padding-bottom: 1rem !important; - } - .py-xxl-4 { - padding-top: 1.5rem !important; - padding-bottom: 1.5rem !important; - } - .py-xxl-5 { - padding-top: 3rem !important; - padding-bottom: 3rem !important; - } - .pt-xxl-0 { - padding-top: 0 !important; - } - .pt-xxl-1 { - padding-top: 0.25rem !important; - } - .pt-xxl-2 { - padding-top: 0.5rem !important; - } - .pt-xxl-3 { - padding-top: 1rem !important; - } - .pt-xxl-4 { - padding-top: 1.5rem !important; - } - .pt-xxl-5 { - padding-top: 3rem !important; - } - .pe-xxl-0 { - padding-right: 0 !important; - } - .pe-xxl-1 { - padding-right: 0.25rem !important; - } - .pe-xxl-2 { - padding-right: 0.5rem !important; - } - .pe-xxl-3 { - padding-right: 1rem !important; - } - .pe-xxl-4 { - padding-right: 1.5rem !important; - } - .pe-xxl-5 { - padding-right: 3rem !important; - } - .pb-xxl-0 { - padding-bottom: 0 !important; - } - .pb-xxl-1 { - padding-bottom: 0.25rem !important; - } - .pb-xxl-2 { - padding-bottom: 0.5rem !important; - } - .pb-xxl-3 { - padding-bottom: 1rem !important; - } - .pb-xxl-4 { - padding-bottom: 1.5rem !important; - } - .pb-xxl-5 { - padding-bottom: 3rem !important; - } - .ps-xxl-0 { - padding-left: 0 !important; - } - .ps-xxl-1 { - padding-left: 0.25rem !important; - } - .ps-xxl-2 { - padding-left: 0.5rem !important; - } - .ps-xxl-3 { - padding-left: 1rem !important; - } - .ps-xxl-4 { - padding-left: 1.5rem !important; - } - .ps-xxl-5 { - padding-left: 3rem !important; - } - .gap-xxl-0 { - gap: 0 !important; - } - .gap-xxl-1 { - gap: 0.25rem !important; - } - .gap-xxl-2 { - gap: 0.5rem !important; - } - .gap-xxl-3 { - gap: 1rem !important; - } - .gap-xxl-4 { - gap: 1.5rem !important; - } - .gap-xxl-5 { - gap: 3rem !important; - } - .text-xxl-start { - text-align: left !important; - } - .text-xxl-end { - text-align: right !important; - } - .text-xxl-center { - text-align: center !important; - } -} -@media (min-width: 1200px) { - .fs-1 { - font-size: 2.5rem !important; - } - .fs-2 { - font-size: 2rem !important; - } - .fs-3 { - font-size: 1.75rem !important; - } - .fs-4 { - font-size: 1.5rem !important; - } -} -@media print { - .d-print-inline { - display: inline !important; - } - .d-print-inline-block { - display: inline-block !important; - } - .d-print-block { - display: block !important; - } - .d-print-grid { - display: grid !important; - } - .d-print-table { - display: table !important; - } - .d-print-table-row { - display: table-row !important; - } - .d-print-table-cell { - display: table-cell !important; - } - .d-print-flex { - display: flex !important; - } - .d-print-inline-flex { - display: inline-flex !important; - } - .d-print-none { - display: none !important; - } -} -html { - height: 100%; - scroll-padding-top: calc(4.5rem - 1px); -} - -.page-section { - padding: 6rem 0; -} -.page-section .page-section-heading { - font-size: 2.25rem; - line-height: 2rem; -} -@media (min-width: 992px) { - .page-section .page-section-heading { - font-size: 3rem; - line-height: 2.5rem; - } -} - -.divider-custom { - margin: 1.25rem 0 1.5rem; - width: 100%; - display: flex; - justify-content: center; - align-items: center; -} -.divider-custom .divider-custom-line { - width: 100%; - max-width: 7rem; - height: 0.25rem; - background-color: #2c3e50; - border-radius: 1rem; - border-color: #2c3e50; -} -.divider-custom .divider-custom-line:first-child { - margin-right: 1rem; -} -.divider-custom .divider-custom-line:last-child { - margin-left: 1rem; -} -.divider-custom .divider-custom-icon { - color: #2c3e50; - font-size: 2rem; -} 
-.divider-custom.divider-light .divider-custom-line { - background-color: #fff; -} -.divider-custom.divider-light .divider-custom-icon { - color: #fff; -} - -.btn-xl { - padding: 1rem 1.75rem; - font-size: 1.25rem; -} - -.btn-social { - border-radius: 100%; - display: inline-flex; - width: 3.25rem; - height: 3.25rem; - font-size: 1.25rem; - justify-content: center; - align-items: center; -} - -#mainNav { - padding-top: 1rem; - padding-bottom: 1rem; - font-family: "Montserrat", -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji"; - font-weight: 700; -} -#mainNav .navbar-brand { - color: #fff; -} -#mainNav .navbar-nav { - margin-top: 1rem; -} -#mainNav .navbar-nav li.nav-item a.nav-link { - color: #fff; -} -#mainNav .navbar-nav li.nav-item a.nav-link:hover { - color: #1abc9c; -} -#mainNav .navbar-nav li.nav-item a.nav-link:active, #mainNav .navbar-nav li.nav-item a.nav-link:focus { - color: #fff; -} -#mainNav .navbar-nav li.nav-item a.nav-link.active { - color: #1abc9c; -} -#mainNav .navbar-toggler { - font-size: 80%; - padding: 0.8rem; -} - -@media (min-width: 992px) { - #mainNav { - padding-top: 1.5rem; - padding-bottom: 1.5rem; - transition: padding-top 0.3s, padding-bottom 0.3s; - } - #mainNav .navbar-brand { - font-size: 1.75em; - transition: font-size 0.3s; - } - #mainNav .navbar-nav { - margin-top: 0; - } - #mainNav .navbar-nav > li.nav-item > a.nav-link.active { - color: #fff; - background: #1abc9c; - } - #mainNav .navbar-nav > li.nav-item > a.nav-link.active:active, #mainNav .navbar-nav > li.nav-item > a.nav-link.active:focus, #mainNav .navbar-nav > li.nav-item > a.nav-link.active:hover { - color: #fff; - background: #1abc9c; - } - #mainNav.navbar-shrink { - padding-top: 0.5rem; - padding-bottom: 0.5rem; - } - #mainNav.navbar-shrink .navbar-brand { - font-size: 1.5em; - } -} -.form-floating input.form-control, -.form-floating textarea.form-control { - font-size: 1.5rem; - border-left: 0; - border-right: 0; - border-top: 0; - border-radius: 0; - border-width: 1px; -} -.form-floating input.form-control:focus, -.form-floating textarea.form-control:focus { - box-shadow: none; -} -.form-floating label { - font-size: 1.5rem; - color: #6c757d; -} - -.masthead { - padding-top: calc(6rem + 74px); - padding-bottom: 6rem; -} -.masthead .masthead-heading { - font-size: 2.75rem; - line-height: 2.75rem; -} -.masthead .masthead-subheading { - font-size: 1.25rem; -} -.masthead .masthead-avatar { - width: 15rem; -} - -@media (min-width: 992px) { - .masthead { - padding-top: calc(6rem + 104px); - padding-bottom: 6rem; - } - .masthead .masthead-heading { - font-size: 4rem; - line-height: 3.5rem; - } - .masthead .masthead-subheading { - font-size: 1.5rem; - } -} -.portfolio .portfolio-item { - cursor: pointer; - position: relative; - display: block; - max-width: 25rem; - border-radius: 0.5rem; - overflow: hidden; -} -.portfolio .portfolio-item .portfolio-item-caption { - position: absolute; - top: 0; - left: 0; - transition: all 0.2s ease-in-out; - opacity: 0; - background-color: rgba(26, 188, 156, 0.9); -} -.portfolio .portfolio-item .portfolio-item-caption:hover { - opacity: 1; -} -.portfolio .portfolio-item .portfolio-item-caption .portfolio-item-caption-content { - font-size: 1.5rem; -} - -.portfolio-modal .btn-close { - color: #1abc9c; - font-size: 2rem; - padding: 1rem; -} -.portfolio-modal .portfolio-modal-title { - font-size: 2.25rem; - line-height: 2rem; -} -@media 
(min-width: 992px) { - .portfolio-modal .portfolio-modal-title { - font-size: 3rem; - line-height: 2.5rem; - } -} - -.footer { - padding-top: 5rem; - padding-bottom: 5rem; - background-color: #2c3e50; - color: #fff; -} - -.copyright { - background-color: #1a252f; -} \ No newline at end of file diff --git a/spaces/XzJosh/Azuma-Bert-VITS2/text/chinese_bert.py b/spaces/XzJosh/Azuma-Bert-VITS2/text/chinese_bert.py deleted file mode 100644 index cb84ce0b426cd0a1c7954ddcdf41322c10ed14fa..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Azuma-Bert-VITS2/text/chinese_bert.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModelForMaskedLM - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large") -model = AutoModelForMaskedLM.from_pretrained("./bert/chinese-roberta-wwm-ext-large").to(device) - -def get_bert_feature(text, word2ph): - with torch.no_grad(): - inputs = tokenizer(text, return_tensors='pt') - for i in inputs: - inputs[i] = inputs[i].to(device) - res = model(**inputs, output_hidden_states=True) - res = torch.cat(res['hidden_states'][-3:-2], -1)[0].cpu() - - assert len(word2ph) == len(text)+2 - word2phone = word2ph - phone_level_feature = [] - for i in range(len(word2phone)): - repeat_feature = res[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - - - return phone_level_feature.T - -if __name__ == '__main__': - # feature = get_bert_feature('你好,我是说的道理。') - import torch - - word_level_feature = torch.rand(38, 1024) # 12个词,每个词1024维特征 - word2phone = [1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1] - - # 计算总帧数 - total_frames = sum(word2phone) - print(word_level_feature.shape) - print(word2phone) - phone_level_feature = [] - for i in range(len(word2phone)): - print(word_level_feature[i].shape) - - # 对每个词重复word2phone[i]次 - repeat_feature = word_level_feature[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - print(phone_level_feature.shape) # torch.Size([36, 1024]) - diff --git a/spaces/YaYaB/text-to-magic/README.md b/spaces/YaYaB/text-to-magic/README.md deleted file mode 100644 index 3c7897708fd90c3e420436e20c8a69882ade3d6b..0000000000000000000000000000000000000000 --- a/spaces/YaYaB/text-to-magic/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text To Magic -emoji: 📊 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_regnetx_4gf_dds_fpn_1x.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_regnetx_4gf_dds_fpn_1x.py deleted file mode 100644 index d7bbdd7d00505f1e51154379c99ab621cb648a6d..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/configs/COCO-InstanceSegmentation/mask_rcnn_regnetx_4gf_dds_fpn_1x.py +++ /dev/null @@ -1,34 +0,0 @@ -from ..common.optim import SGD as optimizer -from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier -from ..common.data.coco import dataloader -from 
..common.models.mask_rcnn_fpn import model -from ..common.train import train - -from detectron2.config import LazyCall as L -from detectron2.modeling.backbone import RegNet -from detectron2.modeling.backbone.regnet import SimpleStem, ResBottleneckBlock - - -# Replace default ResNet with RegNetX-4GF from the DDS paper. Config source: -# https://github.com/facebookresearch/pycls/blob/2c152a6e5d913e898cca4f0a758f41e6b976714d/configs/dds_baselines/regnetx/RegNetX-4.0GF_dds_8gpu.yaml#L4-L9 # noqa -model.backbone.bottom_up = L(RegNet)( - stem_class=SimpleStem, - stem_width=32, - block_class=ResBottleneckBlock, - depth=23, - w_a=38.65, - w_0=96, - w_m=2.43, - group_width=40, - freeze_at=2, - norm="FrozenBN", - out_features=["s1", "s2", "s3", "s4"], -) -model.pixel_std = [57.375, 57.120, 58.395] - -optimizer.weight_decay = 5e-5 -train.init_checkpoint = ( - "https://dl.fbaipublicfiles.com/pycls/dds_baselines/160906383/RegNetX-4.0GF_dds_8gpu.pyth" -) -# RegNets benefit from enabling cudnn benchmark mode -train.cudnn_benchmark = True diff --git a/spaces/YuAnthony/Voice-Recognition/utils/resnet.py b/spaces/YuAnthony/Voice-Recognition/utils/resnet.py deleted file mode 100644 index f6d52c41e7d2dec338639cc6358c823250f7c071..0000000000000000000000000000000000000000 --- a/spaces/YuAnthony/Voice-Recognition/utils/resnet.py +++ /dev/null @@ -1,128 +0,0 @@ -import torch.nn as nn - -# 残差块 -class IRBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, use_se=True): - super(IRBlock, self).__init__() - self.bn0 = nn.BatchNorm2d(inplanes) - self.conv1 = nn.Conv2d(inplanes, inplanes, kernel_size=3, stride=1, padding=1) - self.bn1 = nn.BatchNorm2d(inplanes) - self.prelu = nn.PReLU() - self.conv2 = nn.Conv2d(inplanes, planes, kernel_size=3, stride=stride, padding=1) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample # downsample 对输入特征图大小进行减半处理 - self.stride = stride - self.use_se = use_se - if self.use_se: - self.se = SEBlock(planes) - - # 残差块里首先有 2 个相同输出通道数的 卷积层, - # 每个卷积层后接一个批量归一化层和 ReLU 激活函数*, - # 然后我们将输入跳过这 2 个卷积运算后直接加在最后的 ReLU 激活函数*前. 
- # *此文件中是 PReLU 激活函数 - # PReLU 和 ReLU 的区别主要是前者在 输入小于 0 的部分加了一个系数 a, - # 若 a ==0, PReLU 退化为 ReLU;若 a 很小(比如0.01) ,PReLU 退化为 LReLU, - # 有实验证明,与ReLU相比,LReLU对最终的结果几乎没什么影响。 - def forward(self, x): - residual = x - out = self.bn0(x) - out = self.conv1(out) - out = self.bn1(out) - out = self.prelu(out) - - out = self.conv2(out) - out = self.bn2(out) - if self.use_se: - out = self.se(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.prelu(out) - - return out - - -class SEBlock(nn.Module): - def __init__(self, channel, reduction=16): - super(SEBlock, self).__init__() - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Sequential( - nn.Linear(channel, channel // reduction), - nn.PReLU(), - nn.Linear(channel // reduction, channel), - nn.Sigmoid() - ) - - def forward(self, x): - b, c, _, _ = x.size() - y = self.avg_pool(x).view(b, c) - y = self.fc(y).view(b, c, 1, 1) - return x * y - - -class ResNet(nn.Module): - def __init__(self, block, layers, use_se=True): - self.inplanes = 64 - self.use_se = use_se - super(ResNet, self).__init__() - # 所有 ResNet 网络的输入由一个大卷积核+最大池化组成,极大减少了存储所需大小 - self.conv1 = nn.Conv2d(1, 64, kernel_size=3, bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.prelu = nn.PReLU() - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=3) - - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2) - - self.pool = nn.AdaptiveMaxPool2d((1, 1)) - self.bn4 = nn.BatchNorm2d(512) - self.dropout = nn.Dropout() - self.flatten = nn.Flatten() - self.fc5 = nn.Linear(512, 512) - self.bn5 = nn.BatchNorm1d(512) - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, kernel_size=1, stride=stride, bias=False), - nn.BatchNorm2d(planes * block.expansion),) - layers = [block(self.inplanes, planes, stride, downsample, use_se=self.use_se)] - self.inplanes = planes - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, use_se=self.use_se)) - - return nn.Sequential(*layers) - - # 规定了网络数据的流向 - def forward(self, x): - # 输入 - x = self.conv1(x) - x = self.bn1(x) - x = self.prelu(x) - x = self.maxpool(x) - # 中间卷积 - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - # 输出 - x = self.pool(x) - x = self.bn4(x) - x = self.dropout(x) - x = self.flatten(x) - x = self.fc5(x) - x = self.bn5(x) - - return x - -# 3,4,6,3 是 ResNet34 卷积部分的配置,至于为什么要用这个配置没有解释清楚 -def resnet34(use_se=True): - model = ResNet(IRBlock, [3, 4, 6, 3], use_se=use_se) - return model diff --git a/spaces/Yusin/ChatGPT-Speech/monotonic_align/__init__.py b/spaces/Yusin/ChatGPT-Speech/monotonic_align/__init__.py deleted file mode 100644 index 49e32c9a128aeadc2044c362ff27f6a43f6d7815..0000000000000000000000000000000000000000 --- a/spaces/Yusin/ChatGPT-Speech/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - -def maximum_path(neg_cent, mask): - """ numba optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/YuxinJ/Scenimefy/Scenimefy/models/__init__.py b/spaces/YuxinJ/Scenimefy/Scenimefy/models/__init__.py deleted file mode 100644 index 8124efa0d0a6ad40b9c042f79592ff27c7893c17..0000000000000000000000000000000000000000 --- a/spaces/YuxinJ/Scenimefy/Scenimefy/models/__init__.py +++ /dev/null @@ -1,67 +0,0 @@ -"""This package contains modules related to objective functions, optimizations, and network architectures. - -To add a custom model class called 'dummy', you need to add a file called 'dummy_model.py' and define a subclass DummyModel inherited from BaseModel. -You need to implement the following five functions: - -- <__init__>: initialize the class; first call BaseModel.__init__(self, opt). - -- : unpack data from dataset and apply preprocessing. - -- : produce intermediate results. - -- : calculate loss, gradients, and update network weights. - -- : (optionally) add model-specific options and set default options. - -In the function <__init__>, you need to define four lists: - -- self.loss_names (str list): specify the training losses that you want to plot and save. - -- self.model_names (str list): define networks used in our training. - -- self.visual_names (str list): specify the images that you want to display and save. - -- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an usage. - -Now you can use the model class by specifying flag '--model dummy'. -See our template model class 'template_model.py' for more details. -""" - -import importlib -from Scenimefy.models.base_model import BaseModel - - -def find_model_using_name(model_name): - """Import the module "models/[model_name]_model.py". - - In the file, the class called DatasetNameModel() will - be instantiated. It has to be a subclass of BaseModel, - and it is case-insensitive. - """ - model_filename = "Scenimefy.models." + model_name + "_model" - modellib = importlib.import_module(model_filename) - model = None - target_model_name = model_name.replace('_', '') + 'model' - for name, cls in modellib.__dict__.items(): - if name.lower() == target_model_name.lower() \ - and issubclass(cls, BaseModel): - model = cls - - if model is None: - print("In %s.py, there should be a subclass of BaseModel with class name that matches %s in lowercase." % (model_filename, target_model_name)) - exit(0) - - return model - - -def get_option_setter(model_name): - """Return the static method of the model class.""" - model_class = find_model_using_name(model_name) - return model_class.modify_commandline_options - - -def create_model(opt): - """Create a model given the option. - - This function warps the class CustomDatasetDataLoader. 
- This is the main interface between this package and 'train.py'/'test.py' - - Example: - >>> from models import create_model - >>> model = create_model(opt) - """ - model = find_model_using_name(opt.model) - instance = model(opt) - print("model [%s] was created" % type(instance).__name__) - return instance diff --git a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2/__init__.py b/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2/__init__.py deleted file mode 100644 index a8a98f8df29d8f62fbd9b9cd15ab5192dc96467c..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -__author__ = "曾逸夫(Zeng Yifu)" -__email__ = "zyfiy1314@163.com" diff --git a/spaces/aaronb/DragGAN/stylegan2/op/__init__.py b/spaces/aaronb/DragGAN/stylegan2/op/__init__.py deleted file mode 100644 index 3a5b94c61ead660c0bc4f1c7e1597fa0781b554d..0000000000000000000000000000000000000000 --- a/spaces/aaronb/DragGAN/stylegan2/op/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .fused_act import FusedLeakyReLU, fused_leaky_relu, fused_leaky_relu_native, FusedLeakyReLU_Native -from .upfirdn2d import upfirdn2d, upfirdn2d_native diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/cnn/bricks/swish.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/cnn/bricks/swish.py deleted file mode 100644 index e2ca8ed7b749413f011ae54aac0cab27e6f0b51f..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/cnn/bricks/swish.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - -from .registry import ACTIVATION_LAYERS - - -@ACTIVATION_LAYERS.register_module() -class Swish(nn.Module): - """Swish Module. - - This module applies the swish function: - - .. math:: - Swish(x) = x * Sigmoid(x) - - Returns: - Tensor: The output tensor. - """ - - def __init__(self): - super(Swish, self).__init__() - - def forward(self, x): - return x * torch.sigmoid(x) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/backbones/detectors_resnet.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/backbones/detectors_resnet.py deleted file mode 100644 index 519db464493c7c7b60fc34be1d21add2235ec341..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/backbones/detectors_resnet.py +++ /dev/null @@ -1,305 +0,0 @@ -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer, constant_init - -from ..builder import BACKBONES -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottleneck(_Bottleneck): - r"""Bottleneck for the ResNet backbone in `DetectoRS - `_. - - This bottleneck allows the users to specify whether to use - SAC (Switchable Atrous Convolution) and RFP (Recursive Feature Pyramid). - - Args: - inplanes (int): The number of input channels. - planes (int): The number of output channels before expansion. - rfp_inplanes (int, optional): The number of channels from RFP. - Default: None. If specified, an additional conv layer will be - added for ``rfp_feat``. Otherwise, the structure is the same as - base class. - sac (dict, optional): Dictionary to construct SAC. Default: None. 
- """ - expansion = 4 - - def __init__(self, - inplanes, - planes, - rfp_inplanes=None, - sac=None, - **kwargs): - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - assert sac is None or isinstance(sac, dict) - self.sac = sac - self.with_sac = sac is not None - if self.with_sac: - self.conv2 = build_conv_layer( - self.sac, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - bias=False) - - self.rfp_inplanes = rfp_inplanes - if self.rfp_inplanes: - self.rfp_conv = build_conv_layer( - None, - self.rfp_inplanes, - planes * self.expansion, - 1, - stride=1, - bias=True) - self.init_weights() - - def init_weights(self): - """Initialize the weights.""" - if self.rfp_inplanes: - constant_init(self.rfp_conv, 0) - - def rfp_forward(self, x, rfp_feat): - """The forward function that also takes the RFP features as input.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - if self.rfp_inplanes: - rfp_feat = self.rfp_conv(rfp_feat) - out = out + rfp_feat - - out = self.relu(out) - - return out - - -class ResLayer(nn.Sequential): - """ResLayer to build ResNet style backbone for RPF in detectoRS. - - The difference between this module and base class is that we pass - ``rfp_inplanes`` to the first block. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - downsample_first (bool): Downsample at the first block or last block. - False for Hourglass, True for ResNet. Default: True - rfp_inplanes (int, optional): The number of channels from RFP. - Default: None. If specified, an additional conv layer will be - added for ``rfp_feat``. Otherwise, the structure is the same as - base class. 
- """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - downsample_first=True, - rfp_inplanes=None, - **kwargs): - self.block = block - assert downsample_first, f'downsample_first={downsample_first} is ' \ - 'not supported in DetectoRS' - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down and stride != 1: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - rfp_inplanes=rfp_inplanes, - **kwargs)) - inplanes = planes * block.expansion - for _ in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - - super(ResLayer, self).__init__(*layers) - - -@BACKBONES.register_module() -class DetectoRS_ResNet(ResNet): - """ResNet backbone for DetectoRS. - - Args: - sac (dict, optional): Dictionary to construct SAC (Switchable Atrous - Convolution). Default: None. - stage_with_sac (list): Which stage to use sac. Default: (False, False, - False, False). - rfp_inplanes (int, optional): The number of channels from RFP. - Default: None. If specified, an additional conv layer will be - added for ``rfp_feat``. Otherwise, the structure is the same as - base class. - output_img (bool): If ``True``, the input image will be inserted into - the starting position of output. Default: False. - pretrained (str, optional): The pretrained model to load. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - sac=None, - stage_with_sac=(False, False, False, False), - rfp_inplanes=None, - output_img=False, - pretrained=None, - **kwargs): - self.sac = sac - self.stage_with_sac = stage_with_sac - self.rfp_inplanes = rfp_inplanes - self.output_img = output_img - self.pretrained = pretrained - super(DetectoRS_ResNet, self).__init__(**kwargs) - - self.inplanes = self.stem_channels - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = self.strides[i] - dilation = self.dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - sac = self.sac if self.stage_with_sac[i] else None - if self.plugins is not None: - stage_plugins = self.make_stage_plugins(self.plugins, i) - else: - stage_plugins = None - planes = self.base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=self.with_cp, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=dcn, - sac=sac, - rfp_inplanes=rfp_inplanes if i > 0 else None, - plugins=stage_plugins) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer`` for DetectoRS.""" - return ResLayer(**kwargs) - - def forward(self, x): - """Forward function.""" - outs = list(super(DetectoRS_ResNet, self).forward(x)) - if self.output_img: - outs.insert(0, x) - return tuple(outs) - - def rfp_forward(self, x, rfp_feats): - """Forward function for RFP.""" - if self.deep_stem: - x = self.stem(x) - else: - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - rfp_feat = rfp_feats[i] if i > 0 else None - for layer in res_layer: - x = layer.rfp_forward(x, rfp_feat) - if i in self.out_indices: - outs.append(x) - return tuple(outs) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/assign_result.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/assign_result.py deleted file mode 100644 index cb12a571dfe306e5f3055af170d16ff12371ac77..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/assign_result.py +++ /dev/null @@ -1,204 +0,0 @@ -import torch - -from annotator.uniformer.mmdet.utils import util_mixins - - -class AssignResult(util_mixins.NiceRepr): - """Stores assignments between predicted and truth boxes. - - Attributes: - num_gts (int): the number of truth boxes considered when computing this - assignment - - gt_inds (LongTensor): for each predicted box indicates the 1-based - index of the assigned truth box. 0 means unassigned and -1 means - ignore. - - max_overlaps (FloatTensor): the iou between the predicted box and its - assigned truth box. - - labels (None | LongTensor): If specified, for each predicted box - indicates the category label of the assigned truth box. - - Example: - >>> # An assign result between 4 predicted boxes and 9 true boxes - >>> # where only two boxes were assigned. 
- >>> num_gts = 9 - >>> max_overlaps = torch.LongTensor([0, .5, .9, 0]) - >>> gt_inds = torch.LongTensor([-1, 1, 2, 0]) - >>> labels = torch.LongTensor([0, 3, 4, 0]) - >>> self = AssignResult(num_gts, gt_inds, max_overlaps, labels) - >>> print(str(self)) # xdoctest: +IGNORE_WANT - - >>> # Force addition of gt labels (when adding gt as proposals) - >>> new_labels = torch.LongTensor([3, 4, 5]) - >>> self.add_gt_(new_labels) - >>> print(str(self)) # xdoctest: +IGNORE_WANT - - """ - - def __init__(self, num_gts, gt_inds, max_overlaps, labels=None): - self.num_gts = num_gts - self.gt_inds = gt_inds - self.max_overlaps = max_overlaps - self.labels = labels - # Interface for possible user-defined properties - self._extra_properties = {} - - @property - def num_preds(self): - """int: the number of predictions in this assignment""" - return len(self.gt_inds) - - def set_extra_property(self, key, value): - """Set user-defined new property.""" - assert key not in self.info - self._extra_properties[key] = value - - def get_extra_property(self, key): - """Get user-defined property.""" - return self._extra_properties.get(key, None) - - @property - def info(self): - """dict: a dictionary of info about the object""" - basic_info = { - 'num_gts': self.num_gts, - 'num_preds': self.num_preds, - 'gt_inds': self.gt_inds, - 'max_overlaps': self.max_overlaps, - 'labels': self.labels, - } - basic_info.update(self._extra_properties) - return basic_info - - def __nice__(self): - """str: a "nice" summary string describing this assign result""" - parts = [] - parts.append(f'num_gts={self.num_gts!r}') - if self.gt_inds is None: - parts.append(f'gt_inds={self.gt_inds!r}') - else: - parts.append(f'gt_inds.shape={tuple(self.gt_inds.shape)!r}') - if self.max_overlaps is None: - parts.append(f'max_overlaps={self.max_overlaps!r}') - else: - parts.append('max_overlaps.shape=' - f'{tuple(self.max_overlaps.shape)!r}') - if self.labels is None: - parts.append(f'labels={self.labels!r}') - else: - parts.append(f'labels.shape={tuple(self.labels.shape)!r}') - return ', '.join(parts) - - @classmethod - def random(cls, **kwargs): - """Create random AssignResult for tests or debugging. - - Args: - num_preds: number of predicted boxes - num_gts: number of true boxes - p_ignore (float): probability of a predicted box assinged to an - ignored truth - p_assigned (float): probability of a predicted box not being - assigned - p_use_label (float | bool): with labels or not - rng (None | int | numpy.random.RandomState): seed or state - - Returns: - :obj:`AssignResult`: Randomly generated assign results. 
- - Example: - >>> from mmdet.core.bbox.assigners.assign_result import * # NOQA - >>> self = AssignResult.random() - >>> print(self.info) - """ - from mmdet.core.bbox import demodata - rng = demodata.ensure_rng(kwargs.get('rng', None)) - - num_gts = kwargs.get('num_gts', None) - num_preds = kwargs.get('num_preds', None) - p_ignore = kwargs.get('p_ignore', 0.3) - p_assigned = kwargs.get('p_assigned', 0.7) - p_use_label = kwargs.get('p_use_label', 0.5) - num_classes = kwargs.get('p_use_label', 3) - - if num_gts is None: - num_gts = rng.randint(0, 8) - if num_preds is None: - num_preds = rng.randint(0, 16) - - if num_gts == 0: - max_overlaps = torch.zeros(num_preds, dtype=torch.float32) - gt_inds = torch.zeros(num_preds, dtype=torch.int64) - if p_use_label is True or p_use_label < rng.rand(): - labels = torch.zeros(num_preds, dtype=torch.int64) - else: - labels = None - else: - import numpy as np - # Create an overlap for each predicted box - max_overlaps = torch.from_numpy(rng.rand(num_preds)) - - # Construct gt_inds for each predicted box - is_assigned = torch.from_numpy(rng.rand(num_preds) < p_assigned) - # maximum number of assignments constraints - n_assigned = min(num_preds, min(num_gts, is_assigned.sum())) - - assigned_idxs = np.where(is_assigned)[0] - rng.shuffle(assigned_idxs) - assigned_idxs = assigned_idxs[0:n_assigned] - assigned_idxs.sort() - - is_assigned[:] = 0 - is_assigned[assigned_idxs] = True - - is_ignore = torch.from_numpy( - rng.rand(num_preds) < p_ignore) & is_assigned - - gt_inds = torch.zeros(num_preds, dtype=torch.int64) - - true_idxs = np.arange(num_gts) - rng.shuffle(true_idxs) - true_idxs = torch.from_numpy(true_idxs) - gt_inds[is_assigned] = true_idxs[:n_assigned] - - gt_inds = torch.from_numpy( - rng.randint(1, num_gts + 1, size=num_preds)) - gt_inds[is_ignore] = -1 - gt_inds[~is_assigned] = 0 - max_overlaps[~is_assigned] = 0 - - if p_use_label is True or p_use_label < rng.rand(): - if num_classes == 0: - labels = torch.zeros(num_preds, dtype=torch.int64) - else: - labels = torch.from_numpy( - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - rng.randint(0, num_classes, size=num_preds)) - labels[~is_assigned] = 0 - else: - labels = None - - self = cls(num_gts, gt_inds, max_overlaps, labels) - return self - - def add_gt_(self, gt_labels): - """Add ground truth as assigned results. 
- - Args: - gt_labels (torch.Tensor): Labels of gt boxes - """ - self_inds = torch.arange( - 1, len(gt_labels) + 1, dtype=torch.long, device=gt_labels.device) - self.gt_inds = torch.cat([self_inds, self.gt_inds]) - - self.max_overlaps = torch.cat( - [self.max_overlaps.new_ones(len(gt_labels)), self.max_overlaps]) - - if self.labels is not None: - self.labels = torch.cat([gt_labels, self.labels]) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/uper_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/uper_head.py deleted file mode 100644 index 9e1301b706b0d83ed714bbdee8ee24693f150455..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/decode_heads/uper_head.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead -from .psp_head import PPM - - -@HEADS.register_module() -class UPerHead(BaseDecodeHead): - """Unified Perceptual Parsing for Scene Understanding. - - This head is the implementation of `UPerNet - `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module applied on the last feature. Default: (1, 2, 3, 6). - """ - - def __init__(self, pool_scales=(1, 2, 3, 6), **kwargs): - super(UPerHead, self).__init__( - input_transform='multiple_select', **kwargs) - # PSP Module - self.psp_modules = PPM( - pool_scales, - self.in_channels[-1], - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - self.bottleneck = ConvModule( - self.in_channels[-1] + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - # FPN Module - self.lateral_convs = nn.ModuleList() - self.fpn_convs = nn.ModuleList() - for in_channels in self.in_channels[:-1]: # skip the top layer - l_conv = ConvModule( - in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - inplace=False) - fpn_conv = ConvModule( - self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - inplace=False) - self.lateral_convs.append(l_conv) - self.fpn_convs.append(fpn_conv) - - self.fpn_bottleneck = ConvModule( - len(self.in_channels) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def psp_forward(self, inputs): - """Forward function of PSP module.""" - x = inputs[-1] - psp_outs = [x] - psp_outs.extend(self.psp_modules(x)) - psp_outs = torch.cat(psp_outs, dim=1) - output = self.bottleneck(psp_outs) - - return output - - def forward(self, inputs): - """Forward function.""" - - inputs = self._transform_inputs(inputs) - - # build laterals - laterals = [ - lateral_conv(inputs[i]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - - laterals.append(self.psp_forward(inputs)) - - # build top-down path - used_backbone_levels = len(laterals) - for i in range(used_backbone_levels - 1, 0, -1): - prev_shape = laterals[i - 1].shape[2:] - laterals[i - 1] += resize( - laterals[i], - size=prev_shape, - mode='bilinear', - align_corners=self.align_corners) - - # build outputs - fpn_outs = [ - self.fpn_convs[i](laterals[i]) - for i in 
range(used_backbone_levels - 1) - ] - # append psp feature - fpn_outs.append(laterals[-1]) - - for i in range(used_backbone_levels - 1, 0, -1): - fpn_outs[i] = resize( - fpn_outs[i], - size=fpn_outs[0].shape[2:], - mode='bilinear', - align_corners=self.align_corners) - fpn_outs = torch.cat(fpn_outs, dim=1) - output = self.fpn_bottleneck(fpn_outs) - output = self.cls_seg(output) - return output diff --git a/spaces/abidlabs/Gradio-MNIST-Realtime/app.py b/spaces/abidlabs/Gradio-MNIST-Realtime/app.py deleted file mode 100644 index eac88e59e3b37cc46557104428a0aa76d1813557..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/Gradio-MNIST-Realtime/app.py +++ /dev/null @@ -1,21 +0,0 @@ -import tensorflow as tf -import numpy as np -from urllib.request import urlretrieve -import gradio as gr - -urlretrieve("https://gr-models.s3-us-west-2.amazonaws.com/mnist-model.h5", "mnist-model.h5") -model = tf.keras.models.load_model("mnist-model.h5") - -def recognize_digit(image): - image = image.reshape(1, -1) # add a batch dimension - prediction = model.predict(image).tolist()[0] - return {str(i): prediction[i] for i in range(10)} - -gr.Interface(fn=recognize_digit, - inputs="sketchpad", - outputs=gr.outputs.Label(num_top_classes=3), - live=True, - css=".footer {display:none !important}", - # title="MNIST Sketchpad", - description="Draw a number 0 through 9 on the sketchpad, and see predictions in real time.", - thumbnail="https://raw.githubusercontent.com/gradio-app/real-time-mnist/master/thumbnail2.png").launch(); diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/graphics/vertexarray.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/graphics/vertexarray.py deleted file mode 100644 index e4c2580120cd341abfe3c87d5a43ee90cc712ded..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/graphics/vertexarray.py +++ /dev/null @@ -1,48 +0,0 @@ -import pyglet - -from pyglet.gl import GLuint, glGenVertexArrays, glDeleteVertexArrays, glBindVertexArray - - -__all__ = ['VertexArray'] - - -class VertexArray: - """OpenGL Vertex Array Object""" - - def __init__(self): - """Create an instance of a Vertex Array object.""" - self._context = pyglet.gl.current_context - self._id = GLuint() - glGenVertexArrays(1, self._id) - - @property - def id(self): - return self._id.value - - def bind(self): - glBindVertexArray(self._id) - - @staticmethod - def unbind(): - glBindVertexArray(0) - - def delete(self): - try: - glDeleteVertexArrays(1, self._id) - except Exception: - pass - - __enter__ = bind - - def __exit__(self, *_): - glBindVertexArray(0) - - def __del__(self): - try: - self._context.delete_vao(self.id) - # Python interpreter is shutting down: - except ImportError: - pass - - def __repr__(self): - return "{}(id={})".format(self.__class__.__name__, self._id.value) diff --git a/spaces/akhaliq/Mask2Former/mask2former/modeling/__init__.py b/spaces/akhaliq/Mask2Former/mask2former/modeling/__init__.py deleted file mode 100644 index 7aed7beac4a880371b14b368f64227a0d129e7c7..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former/modeling/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from .backbone.swin import D2SwinTransformer -from .pixel_decoder.fpn import BasePixelDecoder -from .pixel_decoder.msdeformattn import MSDeformAttnPixelDecoder -from .meta_arch.mask_former_head import MaskFormerHead -from .meta_arch.per_pixel_baseline import PerPixelBaselineHead, PerPixelBaselinePlusHead diff --git a/spaces/akhaliq/Mask2Former/mask2former_video/modeling/transformer_decoder/video_mask2former_transformer_decoder.py b/spaces/akhaliq/Mask2Former/mask2former_video/modeling/transformer_decoder/video_mask2former_transformer_decoder.py deleted file mode 100644 index 06d4e8511e99faa3b104b8588cba6163f548135e..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former_video/modeling/transformer_decoder/video_mask2former_transformer_decoder.py +++ /dev/null @@ -1,474 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/detr.py -import logging -import fvcore.nn.weight_init as weight_init -from typing import Optional -import torch -from torch import nn, Tensor -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import Conv2d - -from mask2former.modeling.transformer_decoder.maskformer_transformer_decoder import TRANSFORMER_DECODER_REGISTRY - -from .position_encoding import PositionEmbeddingSine3D - - -class SelfAttentionLayer(nn.Module): - - def __init__(self, d_model, nhead, dropout=0.0, - activation="relu", normalize_before=False): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - - self.norm = nn.LayerNorm(d_model) - self.dropout = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post(self, tgt, - tgt_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - q = k = self.with_pos_embed(tgt, query_pos) - tgt2 = self.self_attn(q, k, value=tgt, attn_mask=tgt_mask, - key_padding_mask=tgt_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - tgt = self.norm(tgt) - - return tgt - - def forward_pre(self, tgt, - tgt_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - tgt2 = self.norm(tgt) - q = k = self.with_pos_embed(tgt2, query_pos) - tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask, - key_padding_mask=tgt_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - - return tgt - - def forward(self, tgt, - tgt_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - if self.normalize_before: - return self.forward_pre(tgt, tgt_mask, - tgt_key_padding_mask, query_pos) - return self.forward_post(tgt, tgt_mask, - tgt_key_padding_mask, query_pos) - - -class CrossAttentionLayer(nn.Module): - - def __init__(self, d_model, nhead, dropout=0.0, - activation="relu", normalize_before=False): - super().__init__() - self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - - self.norm = nn.LayerNorm(d_model) - self.dropout = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = 
normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post(self, tgt, memory, - memory_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - tgt = self.norm(tgt) - - return tgt - - def forward_pre(self, tgt, memory, - memory_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - tgt2 = self.norm(tgt) - tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask)[0] - tgt = tgt + self.dropout(tgt2) - - return tgt - - def forward(self, tgt, memory, - memory_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None): - if self.normalize_before: - return self.forward_pre(tgt, memory, memory_mask, - memory_key_padding_mask, pos, query_pos) - return self.forward_post(tgt, memory, memory_mask, - memory_key_padding_mask, pos, query_pos) - - -class FFNLayer(nn.Module): - - def __init__(self, d_model, dim_feedforward=2048, dropout=0.0, - activation="relu", normalize_before=False): - super().__init__() - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm = nn.LayerNorm(d_model) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post(self, tgt): - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt)))) - tgt = tgt + self.dropout(tgt2) - tgt = self.norm(tgt) - return tgt - - def forward_pre(self, tgt): - tgt2 = self.norm(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2)))) - tgt = tgt + self.dropout(tgt2) - return tgt - - def forward(self, tgt): - if self.normalize_before: - return self.forward_pre(tgt) - return self.forward_post(tgt) - - -def _get_activation_fn(activation): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - raise RuntimeError(F"activation should be relu/gelu, not {activation}.") - - -class MLP(nn.Module): - """ Very simple multi-layer perceptron (also called FFN)""" - - def __init__(self, input_dim, hidden_dim, output_dim, num_layers): - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim])) - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = 
F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - return x - - -@TRANSFORMER_DECODER_REGISTRY.register() -class VideoMultiScaleMaskedTransformerDecoder(nn.Module): - - _version = 2 - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - version = local_metadata.get("version", None) - if version is None or version < 2: - # Do not warn if train from scratch - scratch = True - logger = logging.getLogger(__name__) - for k in list(state_dict.keys()): - newk = k - if "static_query" in k: - newk = k.replace("static_query", "query_feat") - if newk != k: - state_dict[newk] = state_dict[k] - del state_dict[k] - scratch = False - - if not scratch: - logger.warning( - f"Weight format of {self.__class__.__name__} have changed! " - "Please upgrade your models. Applying automatic conversion now ..." - ) - - @configurable - def __init__( - self, - in_channels, - mask_classification=True, - *, - num_classes: int, - hidden_dim: int, - num_queries: int, - nheads: int, - dim_feedforward: int, - dec_layers: int, - pre_norm: bool, - mask_dim: int, - enforce_input_project: bool, - # video related - num_frames, - ): - """ - NOTE: this interface is experimental. - Args: - in_channels: channels of the input features - mask_classification: whether to add mask classifier or not - num_classes: number of classes - hidden_dim: Transformer feature dimension - num_queries: number of queries - nheads: number of heads - dim_feedforward: feature dimension in feedforward network - enc_layers: number of Transformer encoder layers - dec_layers: number of Transformer decoder layers - pre_norm: whether to use pre-LayerNorm or not - mask_dim: mask feature dimension - enforce_input_project: add input project 1x1 conv even if input - channels and hidden dim is identical - """ - super().__init__() - - assert mask_classification, "Only support mask classification model" - self.mask_classification = mask_classification - - self.num_frames = num_frames - - # positional encoding - N_steps = hidden_dim // 2 - self.pe_layer = PositionEmbeddingSine3D(N_steps, normalize=True) - - # define Transformer decoder here - self.num_heads = nheads - self.num_layers = dec_layers - self.transformer_self_attention_layers = nn.ModuleList() - self.transformer_cross_attention_layers = nn.ModuleList() - self.transformer_ffn_layers = nn.ModuleList() - - for _ in range(self.num_layers): - self.transformer_self_attention_layers.append( - SelfAttentionLayer( - d_model=hidden_dim, - nhead=nheads, - dropout=0.0, - normalize_before=pre_norm, - ) - ) - - self.transformer_cross_attention_layers.append( - CrossAttentionLayer( - d_model=hidden_dim, - nhead=nheads, - dropout=0.0, - normalize_before=pre_norm, - ) - ) - - self.transformer_ffn_layers.append( - FFNLayer( - d_model=hidden_dim, - dim_feedforward=dim_feedforward, - dropout=0.0, - normalize_before=pre_norm, - ) - ) - - self.decoder_norm = nn.LayerNorm(hidden_dim) - - self.num_queries = num_queries - # learnable query features - self.query_feat = nn.Embedding(num_queries, hidden_dim) - # learnable query p.e. 
- self.query_embed = nn.Embedding(num_queries, hidden_dim) - - # level embedding (we always use 3 scales) - self.num_feature_levels = 3 - self.level_embed = nn.Embedding(self.num_feature_levels, hidden_dim) - self.input_proj = nn.ModuleList() - for _ in range(self.num_feature_levels): - if in_channels != hidden_dim or enforce_input_project: - self.input_proj.append(Conv2d(in_channels, hidden_dim, kernel_size=1)) - weight_init.c2_xavier_fill(self.input_proj[-1]) - else: - self.input_proj.append(nn.Sequential()) - - # output FFNs - if self.mask_classification: - self.class_embed = nn.Linear(hidden_dim, num_classes + 1) - self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3) - - @classmethod - def from_config(cls, cfg, in_channels, mask_classification): - ret = {} - ret["in_channels"] = in_channels - ret["mask_classification"] = mask_classification - - ret["num_classes"] = cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES - ret["hidden_dim"] = cfg.MODEL.MASK_FORMER.HIDDEN_DIM - ret["num_queries"] = cfg.MODEL.MASK_FORMER.NUM_OBJECT_QUERIES - # Transformer parameters: - ret["nheads"] = cfg.MODEL.MASK_FORMER.NHEADS - ret["dim_feedforward"] = cfg.MODEL.MASK_FORMER.DIM_FEEDFORWARD - - # NOTE: because we add learnable query features which requires supervision, - # we add minus 1 to decoder layers to be consistent with our loss - # implementation: that is, number of auxiliary losses is always - # equal to number of decoder layers. With learnable query features, the number of - # auxiliary losses equals number of decoders plus 1. - assert cfg.MODEL.MASK_FORMER.DEC_LAYERS >= 1 - ret["dec_layers"] = cfg.MODEL.MASK_FORMER.DEC_LAYERS - 1 - ret["pre_norm"] = cfg.MODEL.MASK_FORMER.PRE_NORM - ret["enforce_input_project"] = cfg.MODEL.MASK_FORMER.ENFORCE_INPUT_PROJ - - ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM - - ret["num_frames"] = cfg.INPUT.SAMPLING_FRAME_NUM - - return ret - - def forward(self, x, mask_features, mask = None): - bt, c_m, h_m, w_m = mask_features.shape - bs = bt // self.num_frames if self.training else 1 - t = bt // bs - mask_features = mask_features.view(bs, t, c_m, h_m, w_m) - - # x is a list of multi-scale feature - assert len(x) == self.num_feature_levels - src = [] - pos = [] - size_list = [] - - # disable mask, it does not affect performance - del mask - - for i in range(self.num_feature_levels): - size_list.append(x[i].shape[-2:]) - pos.append(self.pe_layer(x[i].view(bs, t, -1, size_list[-1][0], size_list[-1][1]), None).flatten(3)) - src.append(self.input_proj[i](x[i]).flatten(2) + self.level_embed.weight[i][None, :, None]) - - # NTxCxHW => NxTxCxHW => (TxHW)xNxC - _, c, hw = src[-1].shape - pos[-1] = pos[-1].view(bs, t, c, hw).permute(1, 3, 0, 2).flatten(0, 1) - src[-1] = src[-1].view(bs, t, c, hw).permute(1, 3, 0, 2).flatten(0, 1) - - # QxNxC - query_embed = self.query_embed.weight.unsqueeze(1).repeat(1, bs, 1) - output = self.query_feat.weight.unsqueeze(1).repeat(1, bs, 1) - - predictions_class = [] - predictions_mask = [] - - # prediction heads on learnable query features - outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[0]) - predictions_class.append(outputs_class) - predictions_mask.append(outputs_mask) - - for i in range(self.num_layers): - level_index = i % self.num_feature_levels - attn_mask[torch.where(attn_mask.sum(-1) == attn_mask.shape[-1])] = False - # attention: cross-attention first - output = self.transformer_cross_attention_layers[i]( - output, src[level_index], - memory_mask=attn_mask, - 
memory_key_padding_mask=None, # here we do not apply masking on padded region - pos=pos[level_index], query_pos=query_embed - ) - - output = self.transformer_self_attention_layers[i]( - output, tgt_mask=None, - tgt_key_padding_mask=None, - query_pos=query_embed - ) - - # FFN - output = self.transformer_ffn_layers[i]( - output - ) - - outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[(i + 1) % self.num_feature_levels]) - predictions_class.append(outputs_class) - predictions_mask.append(outputs_mask) - - assert len(predictions_class) == self.num_layers + 1 - - out = { - 'pred_logits': predictions_class[-1], - 'pred_masks': predictions_mask[-1], - 'aux_outputs': self._set_aux_loss( - predictions_class if self.mask_classification else None, predictions_mask - ) - } - return out - - def forward_prediction_heads(self, output, mask_features, attn_mask_target_size): - decoder_output = self.decoder_norm(output) - decoder_output = decoder_output.transpose(0, 1) - outputs_class = self.class_embed(decoder_output) - mask_embed = self.mask_embed(decoder_output) - outputs_mask = torch.einsum("bqc,btchw->bqthw", mask_embed, mask_features) - b, q, t, _, _ = outputs_mask.shape - - # NOTE: prediction is of higher-resolution - # [B, Q, T, H, W] -> [B, Q, T*H*W] -> [B, h, Q, T*H*W] -> [B*h, Q, T*HW] - attn_mask = F.interpolate(outputs_mask.flatten(0, 1), size=attn_mask_target_size, mode="bilinear", align_corners=False).view( - b, q, t, attn_mask_target_size[0], attn_mask_target_size[1]) - # must use bool type - # If a BoolTensor is provided, positions with ``True`` are not allowed to attend while ``False`` values will be unchanged. - attn_mask = (attn_mask.sigmoid().flatten(2).unsqueeze(1).repeat(1, self.num_heads, 1, 1).flatten(0, 1) < 0.5).bool() - attn_mask = attn_mask.detach() - - return outputs_class, outputs_mask, attn_mask - - @torch.jit.unused - def _set_aux_loss(self, outputs_class, outputs_seg_masks): - # this is a workaround to make torchscript happy, as torchscript - # doesn't support dictionary with non-homogeneous values, such - # as a dict having both a Tensor and a list. - if self.mask_classification: - return [ - {"pred_logits": a, "pred_masks": b} - for a, b in zip(outputs_class[:-1], outputs_seg_masks[:-1]) - ] - else: - return [{"pred_masks": b} for b in outputs_seg_masks[:-1]] diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/ProcessingInstruction.pod b/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/ProcessingInstruction.pod deleted file mode 100644 index 9bedf175ed9ceaf6f4c9d8ac748b2cab80af2e09..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/ProcessingInstruction.pod +++ /dev/null @@ -1,32 +0,0 @@ -=head1 NAME - -XML::DOM::ProcessingInstruction - An XML processing instruction in XML::DOM - -=head1 DESCRIPTION - -XML::DOM::ProcessingInstruction extends L. - -It represents a "processing instruction", used in XML as a way to keep -processor-specific information in the text of the document. An example: - - - -Here, "PI" is the target and "processing instruction" is the data. - -=head2 METHODS - -=over 4 - -=item getTarget - -The target of this processing instruction. XML defines this -as being the first token following the markup that begins the -processing instruction. 
-
-=item getData and setData (data)
-
-The content of this processing instruction. This is from the
-first non white space character after the target to the
-character immediately preceding the ?>.
-
-=back
diff --git a/spaces/akhaliq/deeplab2/g3doc/projects/axial_deeplab.md b/spaces/akhaliq/deeplab2/g3doc/projects/axial_deeplab.md
deleted file mode 100644
index c99a064659501e57e6b2e595236ea7299c73c9d5..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/g3doc/projects/axial_deeplab.md
+++ /dev/null
@@ -1,168 +0,0 @@
-# Axial-DeepLab
-
-Axial-DeepLab, improving over Panoptic-DeepLab, incorporates the powerful
-axial self-attention modules [1], also known as the encoder of Axial
-Transformers [2], for general dense prediction tasks. In this document,
-we demonstrate the effectiveness of Axial-DeepLab on the task of panoptic
-segmentation [6], unifying semantic segmentation and instance segmentation.
-
-To reduce the computational complexity of 2D self-attention (especially
-prominent for dense pixel prediction tasks), and further to allow us to
-perform attention within a larger or even global region, we factorize the 2D
-self-attention [1, 3, 4] into **two** 1D self-attentions [2, 5]. We then
-effectively integrate the **axial-attention** into a residual block [7], as
-illustrated in Fig. 1.

-
-Figure 1. An axial-attention (residual) block, which consists of two
-axial-attention layers operating along height- and width-axis
-sequentially.
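To make the factorization concrete, here is a minimal PyTorch sketch of the height-then-width attention pattern of Fig. 1. It is an illustration under simplifying assumptions, not the deeplab2 implementation: it uses `nn.MultiheadAttention` as the 1D attention primitive, picks arbitrary channel/head counts, and omits the positional terms, the 1x1 down/up projections, and the normalization layout used by the real Axial-ResNet block.

```python
import torch
from torch import nn


class AxialAttentionBlock(nn.Module):
    """Factorized 2D self-attention: one 1D attention along the height axis
    followed by one 1D attention along the width axis, each wrapped in a
    residual connection (cf. Fig. 1). Hyperparameters are illustrative."""

    def __init__(self, channels: int, heads: int = 8):
        super().__init__()
        self.height_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.width_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm_h = nn.LayerNorm(channels)
        self.norm_w = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: [B, C, H, W]
        b, c, h, w = x.shape

        # Height-axis attention: every column is treated as a length-H sequence.
        cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)      # [B*W, H, C]
        n = self.norm_h(cols)
        cols = cols + self.height_attn(n, n, n, need_weights=False)[0]
        x = cols.reshape(b, w, h, c).permute(0, 3, 2, 1)       # back to [B, C, H, W]

        # Width-axis attention: every row is treated as a length-W sequence.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)      # [B*H, W, C]
        n = self.norm_w(rows)
        rows = rows + self.width_attn(n, n, n, need_weights=False)[0]
        return rows.reshape(b, h, w, c).permute(0, 3, 1, 2)    # [B, C, H, W]


if __name__ == "__main__":
    feats = torch.randn(2, 256, 32, 64)           # e.g. a stride-16 feature map
    print(AxialAttentionBlock(256)(feats).shape)  # torch.Size([2, 256, 32, 64])
```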
        - -The backbone of Axial-DeepLab, called Axial-ResNet, is obtained by replacing -the residual blocks in any type of ResNets (e.g., Wide ResNets [8, 9]) with -our proposed axial-attention blocks. Optionally, one could stack only the -axial-attention blocks to form an **axial** stand-alone self-attention -backbone. However, considering a better speed-accuracy trade-off -(convolutions are typically well-optimized on modern accelerators), we -adopt the hybrid CNN-Transformer architecture, where we stack the effective -**axial-attention blocks** on top of the first few stages of ResNets (e.g., -Wide ResNets). In particular, in this document, we explore the case where -we stack the axial-attention blocks after the *conv3_x*, i.e., we apply -axial-attentions after (and *including*) stride 16 feature maps. This -hybrid CNN-Transformer architecture is very effective on panoptic -segmentation tasks as shown in the Model Zoo below. - -Additionally, we propose a position-sensitive self-attention design, -which captures long range interactions with precise positional information. -We illustrate the difference between our design and the popular non-local -block in Fig. 2. - -

-
-Figure 2. A non-local block (left) vs. our position-sensitive
-axial-attention applied along the width-axis (right). $$\otimes$$ denotes
-matrix multiplication, and $$\oplus$$ denotes elementwise sum. The softmax
-is performed on the last axis. Blue boxes denote 1 × 1 convolutions, and
-red boxes denote relative positional encoding.
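To spell out the positional terms in the caption above, the following single-head sketch implements one axis of position-sensitive attention. The class and variable names are illustrative assumptions, not from deeplab2; the real implementation additionally uses multi-head attention and its own normalization scheme, which are omitted here.

```python
import torch
from torch import nn
import torch.nn.functional as F


class PositionSensitiveAxialAttention(nn.Module):
    """Single-head sketch of position-sensitive attention along one axis.
    Learned relative encodings add a query-position term, a key-position
    term, and a value-position term, mirroring the right-hand diagram of
    Fig. 2."""

    def __init__(self, dim: int, length: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        # One learned vector per relative offset in [-(length-1), length-1].
        self.r_q = nn.Parameter(0.02 * torch.randn(2 * length - 1, dim))
        self.r_k = nn.Parameter(0.02 * torch.randn(2 * length - 1, dim))
        self.r_v = nn.Parameter(0.02 * torch.randn(2 * length - 1, dim))
        idx = torch.arange(length)
        # rel[o, p] = (p - o), shifted to be a non-negative table index.
        self.register_buffer("rel", idx[None, :] - idx[:, None] + length - 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: [B, L, C]
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        rq, rk, rv = self.r_q[self.rel], self.r_k[self.rel], self.r_v[self.rel]  # [L, L, C]
        logits = (torch.einsum("bod,bpd->bop", q, k)       # content term
                  + torch.einsum("bod,opd->bop", q, rq)    # query-position term
                  + torch.einsum("bpd,opd->bop", k, rk))   # key-position term
        attn = F.softmax(logits, dim=-1)
        out = torch.einsum("bop,bpd->bod", attn, v)        # aggregate values
        return out + torch.einsum("bop,opd->bod", attn, rv)  # value-position term


if __name__ == "__main__":
    rows = torch.randn(4 * 32, 32, 64)  # rows of a 32x32 feature map: [B*H, W, C]
    print(PositionSensitiveAxialAttention(dim=64, length=32)(rows).shape)
    # torch.Size([128, 32, 64])
```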
        - -## Prerequisite - -1. Make sure the software is properly [installed](../setup/installation.md). - -2. Make sure the target dataset is correctly prepared (e.g., -[Cityscapes](../setup/cityscapes.md)). - -3. Download the ImageNet pretrained -[checkpoints](./imagenet_pretrained_checkpoints.md), and update the -`initial_checkpoint` path in the config files. - -## Model Zoo - -In the Model Zoo, we explore building axial-attention blocks on top of -SWideRNet (Scaling Wide ResNets) and MaX-DeepLab backbones (i.e., only -the ImageNet pretrained backbone without any *Mask Transformers*). - -Herein, we highlight some of the employed backbones: - -1. **Axial-SWideRNet-(1, 1, x)**, where x = $$\{1, 3, 4.5\}$$, scaling the -backbone layers (excluding the stem) of Wide-ResNet-41 by a factor of x. This -backbone augments the naive SWideRNet (i.e., no Squeeze-and-Excitation -or Switchable Atrous Convolution) with axial-attention blocks in the last -two stages. - -2. **MaX-DeepLab-S-Backbone**: The ImageNet pretrained backbone of -MaX-DeepLab-S (i.e., without any *Mask Transformers*). This backbone augments -the ResNet-50-Beta (i.e., replacing the original stem with Inception stem) -with axial-attention blocks in the last two stages. - -3. **MaX-DeepLab-L-Backbone**: The ImageNet pretrained backbone of -MaX-DeepLab-L (i.e., without any *Mask Transformers*). This backbone adds a -stacked decoder on top of the Wide ResNet-41, and incorporates -axial-attention blocks to all feature maps with output stride 16 and larger. - -#### Cityscapes Panoptic Segmentation - -We provide checkpoints pretrained on Cityscapes train-fine set below. If you -would like to train those models by yourself, please find the corresponding -config files under this [directory](../../configs/cityscapes/axial_deeplab). - -All the reported results are obtained by *single-scale* inference and -*ImageNet-1K* pretrained checkpoints. 
- -Backbone | Output stride | Input resolution | PQ [*] | mIoU [*] | PQ [**] | mIoU [**] | APMask [**] --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------: | :---------------: | :----: | :------: | :-----: | :-------: | :--------------------: -Axial-SWideRNet-(1, 1, 1) ([config](../../configs/cityscapes/axial_deeplab/axial_swidernet_1_1_1_os16.textproto), [ckpt](https://storage.googleapis.com/gresearch/tf-deeplab/checkpoint/axial_swidernet_1_1_1_os16_axial_deeplab_cityscapes_trainfine.tar.gz)) | 16 | 1025 x 2049 | 66.1 | 82.8 | 66.63 | 83.43 | 37.18 -Axial-SWideRNet-(1, 1, 3) ([config](../../configs/cityscapes/axial_deeplab/axial_swidernet_1_1_3_os16.textproto), [ckpt](https://storage.googleapis.com/gresearch/tf-deeplab/checkpoint/axial_swidernet_1_1_3_os16_axial_deeplab_cityscapes_trainfine.tar.gz)) | 16 | 1025 x 2049 | 67.1 | 83.5 | 67.63 | 83.97 | 40.00 -Axial-SWideRNet-(1, 1, 4.5) ([config](../../configs/cityscapes/axial_deeplab/axial_swidernet_1_1_4.5_os16.textproto), [ckpt](https://storage.googleapis.com/gresearch/tf-deeplab/checkpoint/axial_swidernet_1_1_4.5_os16_axial_deeplab_cityscapes_trainfine.tar.gz)) | 16 | 1025 x 2049 | 68.0 | 83.0 | 68.53 | 83.49 | 39.51 -MaX-DeepLab-S-Backbone ([config](../../configs/cityscapes/axial_deeplab/max_deeplab_s_backbone_os16.textproto), [ckpt](https://storage.googleapis.com/gresearch/tf-deeplab/checkpoint/max_deeplab_s_backbone_os16_axial_deeplab_cityscapes_trainfine.tar.gz)) | 16 | 1025 x 2049 | 64.5 | 82.2 | 64.97 | 82.63 | 35.55 -MaX-DeepLab-L-Backbone ([config](../../configs/cityscapes/axial_deeplab/max_deeplab_l_backbone_os16.textproto), [ckpt](https://storage.googleapis.com/gresearch/tf-deeplab/checkpoint/max_deeplab_l_backbone_os16_axial_deeplab_cityscapes_trainfine.tar.gz)) | 16 | 1025 x 2049 | 66.3 | 83.1 | 66.77 | 83.67 | 38.09 - -[*]: Results evaluated by the official script. Instance segmentation evaluation -is not supported yet (need to convert our prediction format). - -[**]: Results evaluated by our pipeline. See Q4 in [FAQ](../faq.md). - - -## Citing Axial-DeepLab - -If you find this code helpful in your research or wish to refer to the baseline -results, please use the following BibTeX entry. 
- -* Axial-DeepLab: - -``` -@inproceedings{axial_deeplab_2020, - author={Huiyu Wang and Yukun Zhu and Bradley Green and Hartwig Adam and Alan Yuille and Liang-Chieh Chen}, - title={{Axial-DeepLab}: Stand-Alone Axial-Attention for Panoptic Segmentation}, - booktitle={ECCV}, - year={2020} -} - -``` - -* Panoptic-DeepLab: - -``` -@inproceedings{panoptic_deeplab_2020, - author={Bowen Cheng and Maxwell D Collins and Yukun Zhu and Ting Liu and Thomas S Huang and Hartwig Adam and Liang-Chieh Chen}, - title={{Panoptic-DeepLab}: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation}, - booktitle={CVPR}, - year={2020} -} - -``` - -If you use the SWideRNet backbone w/ axial attention, please consider -citing - -* SWideRNet: - -``` -@article{swidernet_2020, - title={Scaling Wide Residual Networks for Panoptic Segmentation}, - author={Chen, Liang-Chieh and Wang, Huiyu and Qiao, Siyuan}, - journal={arXiv:2011.11675}, - year={2020} -} - -``` - -If you use the MaX-DeepLab-{S,L} backbone, please consider -citing - -* MaX-DeepLab: - -``` -@inproceedings{max_deeplab_2021, - author={Huiyu Wang and Yukun Zhu and Hartwig Adam and Alan Yuille and Liang-Chieh Chen}, - title={{MaX-DeepLab}: End-to-End Panoptic Segmentation with Mask Transformers}, - booktitle={CVPR}, - year={2021} -} - -``` diff --git a/spaces/akhaliq/stylegan3_clip/training/augment.py b/spaces/akhaliq/stylegan3_clip/training/augment.py deleted file mode 100644 index c5a4b02ffd6079b089f8f0fe7f5d86703500215d..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/stylegan3_clip/training/augment.py +++ /dev/null @@ -1,436 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Augmentation pipeline from the paper -"Training Generative Adversarial Networks with Limited Data". -Matches the original implementation by Karras et al. at -https://github.com/NVlabs/stylegan2-ada/blob/main/training/augment.py""" - -import numpy as np -import scipy.signal -import torch -from torch_utils import persistence -from torch_utils import misc -from torch_utils.ops import upfirdn2d -from torch_utils.ops import grid_sample_gradfix -from torch_utils.ops import conv2d_gradfix - -#---------------------------------------------------------------------------- -# Coefficients of various wavelet decomposition low-pass filters. 
- -wavelets = { - 'haar': [0.7071067811865476, 0.7071067811865476], - 'db1': [0.7071067811865476, 0.7071067811865476], - 'db2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'db3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'db4': [-0.010597401784997278, 0.032883011666982945, 0.030841381835986965, -0.18703481171888114, -0.02798376941698385, 0.6308807679295904, 0.7148465705525415, 0.23037781330885523], - 'db5': [0.003335725285001549, -0.012580751999015526, -0.006241490213011705, 0.07757149384006515, -0.03224486958502952, -0.24229488706619015, 0.13842814590110342, 0.7243085284385744, 0.6038292697974729, 0.160102397974125], - 'db6': [-0.00107730108499558, 0.004777257511010651, 0.0005538422009938016, -0.031582039318031156, 0.02752286553001629, 0.09750160558707936, -0.12976686756709563, -0.22626469396516913, 0.3152503517092432, 0.7511339080215775, 0.4946238903983854, 0.11154074335008017], - 'db7': [0.0003537138000010399, -0.0018016407039998328, 0.00042957797300470274, 0.012550998556013784, -0.01657454163101562, -0.03802993693503463, 0.0806126091510659, 0.07130921926705004, -0.22403618499416572, -0.14390600392910627, 0.4697822874053586, 0.7291320908465551, 0.39653931948230575, 0.07785205408506236], - 'db8': [-0.00011747678400228192, 0.0006754494059985568, -0.0003917403729959771, -0.00487035299301066, 0.008746094047015655, 0.013981027917015516, -0.04408825393106472, -0.01736930100202211, 0.128747426620186, 0.00047248457399797254, -0.2840155429624281, -0.015829105256023893, 0.5853546836548691, 0.6756307362980128, 0.3128715909144659, 0.05441584224308161], - 'sym2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'sym3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'sym4': [-0.07576571478927333, -0.02963552764599851, 0.49761866763201545, 0.8037387518059161, 0.29785779560527736, -0.09921954357684722, -0.012603967262037833, 0.0322231006040427], - 'sym5': [0.027333068345077982, 0.029519490925774643, -0.039134249302383094, 0.1993975339773936, 0.7234076904024206, 0.6339789634582119, 0.01660210576452232, -0.17532808990845047, -0.021101834024758855, 0.019538882735286728], - 'sym6': [0.015404109327027373, 0.0034907120842174702, -0.11799011114819057, -0.048311742585633, 0.4910559419267466, 0.787641141030194, 0.3379294217276218, -0.07263752278646252, -0.021060292512300564, 0.04472490177066578, 0.0017677118642428036, -0.007800708325034148], - 'sym7': [0.002681814568257878, -0.0010473848886829163, -0.01263630340325193, 0.03051551316596357, 0.0678926935013727, -0.049552834937127255, 0.017441255086855827, 0.5361019170917628, 0.767764317003164, 0.2886296317515146, -0.14004724044296152, -0.10780823770381774, 0.004010244871533663, 0.010268176708511255], - 'sym8': [-0.0033824159510061256, -0.0005421323317911481, 0.03169508781149298, 0.007607487324917605, -0.1432942383508097, -0.061273359067658524, 0.4813596512583722, 0.7771857517005235, 0.3644418948353314, -0.05194583810770904, -0.027219029917056003, 0.049137179673607506, 0.003808752013890615, -0.01495225833704823, -0.0003029205147213668, 0.0018899503327594609], -} - -#---------------------------------------------------------------------------- -# Helpers for constructing transformation matrices. 
- -def matrix(*rows, device=None): - assert all(len(row) == len(rows[0]) for row in rows) - elems = [x for row in rows for x in row] - ref = [x for x in elems if isinstance(x, torch.Tensor)] - if len(ref) == 0: - return misc.constant(np.asarray(rows), device=device) - assert device is None or device == ref[0].device - elems = [x if isinstance(x, torch.Tensor) else misc.constant(x, shape=ref[0].shape, device=ref[0].device) for x in elems] - return torch.stack(elems, dim=-1).reshape(ref[0].shape + (len(rows), -1)) - -def translate2d(tx, ty, **kwargs): - return matrix( - [1, 0, tx], - [0, 1, ty], - [0, 0, 1], - **kwargs) - -def translate3d(tx, ty, tz, **kwargs): - return matrix( - [1, 0, 0, tx], - [0, 1, 0, ty], - [0, 0, 1, tz], - [0, 0, 0, 1], - **kwargs) - -def scale2d(sx, sy, **kwargs): - return matrix( - [sx, 0, 0], - [0, sy, 0], - [0, 0, 1], - **kwargs) - -def scale3d(sx, sy, sz, **kwargs): - return matrix( - [sx, 0, 0, 0], - [0, sy, 0, 0], - [0, 0, sz, 0], - [0, 0, 0, 1], - **kwargs) - -def rotate2d(theta, **kwargs): - return matrix( - [torch.cos(theta), torch.sin(-theta), 0], - [torch.sin(theta), torch.cos(theta), 0], - [0, 0, 1], - **kwargs) - -def rotate3d(v, theta, **kwargs): - vx = v[..., 0]; vy = v[..., 1]; vz = v[..., 2] - s = torch.sin(theta); c = torch.cos(theta); cc = 1 - c - return matrix( - [vx*vx*cc+c, vx*vy*cc-vz*s, vx*vz*cc+vy*s, 0], - [vy*vx*cc+vz*s, vy*vy*cc+c, vy*vz*cc-vx*s, 0], - [vz*vx*cc-vy*s, vz*vy*cc+vx*s, vz*vz*cc+c, 0], - [0, 0, 0, 1], - **kwargs) - -def translate2d_inv(tx, ty, **kwargs): - return translate2d(-tx, -ty, **kwargs) - -def scale2d_inv(sx, sy, **kwargs): - return scale2d(1 / sx, 1 / sy, **kwargs) - -def rotate2d_inv(theta, **kwargs): - return rotate2d(-theta, **kwargs) - -#---------------------------------------------------------------------------- -# Versatile image augmentation pipeline from the paper -# "Training Generative Adversarial Networks with Limited Data". -# -# All augmentations are disabled by default; individual augmentations can -# be enabled by setting their probability multipliers to 1. - -@persistence.persistent_class -class AugmentPipe(torch.nn.Module): - def __init__(self, - xflip=0, rotate90=0, xint=0, xint_max=0.125, - scale=0, rotate=0, aniso=0, xfrac=0, scale_std=0.2, rotate_max=1, aniso_std=0.2, xfrac_std=0.125, - brightness=0, contrast=0, lumaflip=0, hue=0, saturation=0, brightness_std=0.2, contrast_std=0.5, hue_max=1, saturation_std=1, - imgfilter=0, imgfilter_bands=[1,1,1,1], imgfilter_std=1, - noise=0, cutout=0, noise_std=0.1, cutout_size=0.5, - ): - super().__init__() - self.register_buffer('p', torch.ones([])) # Overall multiplier for augmentation probability. - - # Pixel blitting. - self.xflip = float(xflip) # Probability multiplier for x-flip. - self.rotate90 = float(rotate90) # Probability multiplier for 90 degree rotations. - self.xint = float(xint) # Probability multiplier for integer translation. - self.xint_max = float(xint_max) # Range of integer translation, relative to image dimensions. - - # General geometric transformations. - self.scale = float(scale) # Probability multiplier for isotropic scaling. - self.rotate = float(rotate) # Probability multiplier for arbitrary rotation. - self.aniso = float(aniso) # Probability multiplier for anisotropic scaling. - self.xfrac = float(xfrac) # Probability multiplier for fractional translation. - self.scale_std = float(scale_std) # Log2 standard deviation of isotropic scaling. - self.rotate_max = float(rotate_max) # Range of arbitrary rotation, 1 = full circle. 
- self.aniso_std = float(aniso_std) # Log2 standard deviation of anisotropic scaling. - self.xfrac_std = float(xfrac_std) # Standard deviation of frational translation, relative to image dimensions. - - # Color transformations. - self.brightness = float(brightness) # Probability multiplier for brightness. - self.contrast = float(contrast) # Probability multiplier for contrast. - self.lumaflip = float(lumaflip) # Probability multiplier for luma flip. - self.hue = float(hue) # Probability multiplier for hue rotation. - self.saturation = float(saturation) # Probability multiplier for saturation. - self.brightness_std = float(brightness_std) # Standard deviation of brightness. - self.contrast_std = float(contrast_std) # Log2 standard deviation of contrast. - self.hue_max = float(hue_max) # Range of hue rotation, 1 = full circle. - self.saturation_std = float(saturation_std) # Log2 standard deviation of saturation. - - # Image-space filtering. - self.imgfilter = float(imgfilter) # Probability multiplier for image-space filtering. - self.imgfilter_bands = list(imgfilter_bands) # Probability multipliers for individual frequency bands. - self.imgfilter_std = float(imgfilter_std) # Log2 standard deviation of image-space filter amplification. - - # Image-space corruptions. - self.noise = float(noise) # Probability multiplier for additive RGB noise. - self.cutout = float(cutout) # Probability multiplier for cutout. - self.noise_std = float(noise_std) # Standard deviation of additive RGB noise. - self.cutout_size = float(cutout_size) # Size of the cutout rectangle, relative to image dimensions. - - # Setup orthogonal lowpass filter for geometric augmentations. - self.register_buffer('Hz_geom', upfirdn2d.setup_filter(wavelets['sym6'])) - - # Construct filter bank for image-space filtering. - Hz_lo = np.asarray(wavelets['sym2']) # H(z) - Hz_hi = Hz_lo * ((-1) ** np.arange(Hz_lo.size)) # H(-z) - Hz_lo2 = np.convolve(Hz_lo, Hz_lo[::-1]) / 2 # H(z) * H(z^-1) / 2 - Hz_hi2 = np.convolve(Hz_hi, Hz_hi[::-1]) / 2 # H(-z) * H(-z^-1) / 2 - Hz_fbank = np.eye(4, 1) # Bandpass(H(z), b_i) - for i in range(1, Hz_fbank.shape[0]): - Hz_fbank = np.dstack([Hz_fbank, np.zeros_like(Hz_fbank)]).reshape(Hz_fbank.shape[0], -1)[:, :-1] - Hz_fbank = scipy.signal.convolve(Hz_fbank, [Hz_lo2]) - Hz_fbank[i, (Hz_fbank.shape[1] - Hz_hi2.size) // 2 : (Hz_fbank.shape[1] + Hz_hi2.size) // 2] += Hz_hi2 - self.register_buffer('Hz_fbank', torch.as_tensor(Hz_fbank, dtype=torch.float32)) - - def forward(self, images, debug_percentile=None): - assert isinstance(images, torch.Tensor) and images.ndim == 4 - batch_size, num_channels, height, width = images.shape - device = images.device - if debug_percentile is not None: - debug_percentile = torch.as_tensor(debug_percentile, dtype=torch.float32, device=device) - - # ------------------------------------- - # Select parameters for pixel blitting. - # ------------------------------------- - - # Initialize inverse homogeneous 2D transform: G_inv @ pixel_out ==> pixel_in - I_3 = torch.eye(3, device=device) - G_inv = I_3 - - # Apply x-flip with probability (xflip * strength). - if self.xflip > 0: - i = torch.floor(torch.rand([batch_size], device=device) * 2) - i = torch.where(torch.rand([batch_size], device=device) < self.xflip * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 2)) - G_inv = G_inv @ scale2d_inv(1 - 2 * i, 1) - - # Apply 90 degree rotations with probability (rotate90 * strength). 
- if self.rotate90 > 0: - i = torch.floor(torch.rand([batch_size], device=device) * 4) - i = torch.where(torch.rand([batch_size], device=device) < self.rotate90 * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 4)) - G_inv = G_inv @ rotate2d_inv(-np.pi / 2 * i) - - # Apply integer translation with probability (xint * strength). - if self.xint > 0: - t = (torch.rand([batch_size, 2], device=device) * 2 - 1) * self.xint_max - t = torch.where(torch.rand([batch_size, 1], device=device) < self.xint * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like(t, (debug_percentile * 2 - 1) * self.xint_max) - G_inv = G_inv @ translate2d_inv(torch.round(t[:,0] * width), torch.round(t[:,1] * height)) - - # -------------------------------------------------------- - # Select parameters for general geometric transformations. - # -------------------------------------------------------- - - # Apply isotropic scaling with probability (scale * strength). - if self.scale > 0: - s = torch.exp2(torch.randn([batch_size], device=device) * self.scale_std) - s = torch.where(torch.rand([batch_size], device=device) < self.scale * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.scale_std)) - G_inv = G_inv @ scale2d_inv(s, s) - - # Apply pre-rotation with probability p_rot. - p_rot = 1 - torch.sqrt((1 - self.rotate * self.p).clamp(0, 1)) # P(pre OR post) = p - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand([batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like(theta, (debug_percentile * 2 - 1) * np.pi * self.rotate_max) - G_inv = G_inv @ rotate2d_inv(-theta) # Before anisotropic scaling. - - # Apply anisotropic scaling with probability (aniso * strength). - if self.aniso > 0: - s = torch.exp2(torch.randn([batch_size], device=device) * self.aniso_std) - s = torch.where(torch.rand([batch_size], device=device) < self.aniso * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.aniso_std)) - G_inv = G_inv @ scale2d_inv(s, 1 / s) - - # Apply post-rotation with probability p_rot. - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand([batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.zeros_like(theta) - G_inv = G_inv @ rotate2d_inv(-theta) # After anisotropic scaling. - - # Apply fractional translation with probability (xfrac * strength). - if self.xfrac > 0: - t = torch.randn([batch_size, 2], device=device) * self.xfrac_std - t = torch.where(torch.rand([batch_size, 1], device=device) < self.xfrac * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like(t, torch.erfinv(debug_percentile * 2 - 1) * self.xfrac_std) - G_inv = G_inv @ translate2d_inv(t[:,0] * width, t[:,1] * height) - - # ---------------------------------- - # Execute geometric transformations. - # ---------------------------------- - - # Execute if the transform is not identity. - if G_inv is not I_3: - - # Calculate padding. 
- cx = (width - 1) / 2 - cy = (height - 1) / 2 - cp = matrix([-cx, -cy, 1], [cx, -cy, 1], [cx, cy, 1], [-cx, cy, 1], device=device) # [idx, xyz] - cp = G_inv @ cp.t() # [batch, xyz, idx] - Hz_pad = self.Hz_geom.shape[0] // 4 - margin = cp[:, :2, :].permute(1, 0, 2).flatten(1) # [xy, batch * idx] - margin = torch.cat([-margin, margin]).max(dim=1).values # [x0, y0, x1, y1] - margin = margin + misc.constant([Hz_pad * 2 - cx, Hz_pad * 2 - cy] * 2, device=device) - margin = margin.max(misc.constant([0, 0] * 2, device=device)) - margin = margin.min(misc.constant([width-1, height-1] * 2, device=device)) - mx0, my0, mx1, my1 = margin.ceil().to(torch.int32) - - # Pad image and adjust origin. - images = torch.nn.functional.pad(input=images, pad=[mx0,mx1,my0,my1], mode='reflect') - G_inv = translate2d((mx0 - mx1) / 2, (my0 - my1) / 2) @ G_inv - - # Upsample. - images = upfirdn2d.upsample2d(x=images, f=self.Hz_geom, up=2) - G_inv = scale2d(2, 2, device=device) @ G_inv @ scale2d_inv(2, 2, device=device) - G_inv = translate2d(-0.5, -0.5, device=device) @ G_inv @ translate2d_inv(-0.5, -0.5, device=device) - - # Execute transformation. - shape = [batch_size, num_channels, (height + Hz_pad * 2) * 2, (width + Hz_pad * 2) * 2] - G_inv = scale2d(2 / images.shape[3], 2 / images.shape[2], device=device) @ G_inv @ scale2d_inv(2 / shape[3], 2 / shape[2], device=device) - grid = torch.nn.functional.affine_grid(theta=G_inv[:,:2,:], size=shape, align_corners=False) - images = grid_sample_gradfix.grid_sample(images, grid) - - # Downsample and crop. - images = upfirdn2d.downsample2d(x=images, f=self.Hz_geom, down=2, padding=-Hz_pad*2, flip_filter=True) - - # -------------------------------------------- - # Select parameters for color transformations. - # -------------------------------------------- - - # Initialize homogeneous 3D transformation matrix: C @ color_in ==> color_out - I_4 = torch.eye(4, device=device) - C = I_4 - - # Apply brightness with probability (brightness * strength). - if self.brightness > 0: - b = torch.randn([batch_size], device=device) * self.brightness_std - b = torch.where(torch.rand([batch_size], device=device) < self.brightness * self.p, b, torch.zeros_like(b)) - if debug_percentile is not None: - b = torch.full_like(b, torch.erfinv(debug_percentile * 2 - 1) * self.brightness_std) - C = translate3d(b, b, b) @ C - - # Apply contrast with probability (contrast * strength). - if self.contrast > 0: - c = torch.exp2(torch.randn([batch_size], device=device) * self.contrast_std) - c = torch.where(torch.rand([batch_size], device=device) < self.contrast * self.p, c, torch.ones_like(c)) - if debug_percentile is not None: - c = torch.full_like(c, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.contrast_std)) - C = scale3d(c, c, c) @ C - - # Apply luma flip with probability (lumaflip * strength). - v = misc.constant(np.asarray([1, 1, 1, 0]) / np.sqrt(3), device=device) # Luma axis. - if self.lumaflip > 0: - i = torch.floor(torch.rand([batch_size, 1, 1], device=device) * 2) - i = torch.where(torch.rand([batch_size, 1, 1], device=device) < self.lumaflip * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 2)) - C = (I_4 - 2 * v.ger(v) * i) @ C # Householder reflection. - - # Apply hue rotation with probability (hue * strength). 
- if self.hue > 0 and num_channels > 1: - theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.hue_max - theta = torch.where(torch.rand([batch_size], device=device) < self.hue * self.p, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like(theta, (debug_percentile * 2 - 1) * np.pi * self.hue_max) - C = rotate3d(v, theta) @ C # Rotate around v. - - # Apply saturation with probability (saturation * strength). - if self.saturation > 0 and num_channels > 1: - s = torch.exp2(torch.randn([batch_size, 1, 1], device=device) * self.saturation_std) - s = torch.where(torch.rand([batch_size, 1, 1], device=device) < self.saturation * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.saturation_std)) - C = (v.ger(v) + (I_4 - v.ger(v)) * s) @ C - - # ------------------------------ - # Execute color transformations. - # ------------------------------ - - # Execute if the transform is not identity. - if C is not I_4: - images = images.reshape([batch_size, num_channels, height * width]) - if num_channels == 3: - images = C[:, :3, :3] @ images + C[:, :3, 3:] - elif num_channels == 1: - C = C[:, :3, :].mean(dim=1, keepdims=True) - images = images * C[:, :, :3].sum(dim=2, keepdims=True) + C[:, :, 3:] - else: - raise ValueError('Image must be RGB (3 channels) or L (1 channel)') - images = images.reshape([batch_size, num_channels, height, width]) - - # ---------------------- - # Image-space filtering. - # ---------------------- - - if self.imgfilter > 0: - num_bands = self.Hz_fbank.shape[0] - assert len(self.imgfilter_bands) == num_bands - expected_power = misc.constant(np.array([10, 1, 1, 1]) / 13, device=device) # Expected power spectrum (1/f). - - # Apply amplification for each band with probability (imgfilter * strength * band_strength). - g = torch.ones([batch_size, num_bands], device=device) # Global gain vector (identity). - for i, band_strength in enumerate(self.imgfilter_bands): - t_i = torch.exp2(torch.randn([batch_size], device=device) * self.imgfilter_std) - t_i = torch.where(torch.rand([batch_size], device=device) < self.imgfilter * self.p * band_strength, t_i, torch.ones_like(t_i)) - if debug_percentile is not None: - t_i = torch.full_like(t_i, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.imgfilter_std)) if band_strength > 0 else torch.ones_like(t_i) - t = torch.ones([batch_size, num_bands], device=device) # Temporary gain vector. - t[:, i] = t_i # Replace i'th element. - t = t / (expected_power * t.square()).sum(dim=-1, keepdims=True).sqrt() # Normalize power. - g = g * t # Accumulate into global gain. - - # Construct combined amplification filter. - Hz_prime = g @ self.Hz_fbank # [batch, tap] - Hz_prime = Hz_prime.unsqueeze(1).repeat([1, num_channels, 1]) # [batch, channels, tap] - Hz_prime = Hz_prime.reshape([batch_size * num_channels, 1, -1]) # [batch * channels, 1, tap] - - # Apply filter. - p = self.Hz_fbank.shape[1] // 2 - images = images.reshape([1, batch_size * num_channels, height, width]) - images = torch.nn.functional.pad(input=images, pad=[p,p,p,p], mode='reflect') - images = conv2d_gradfix.conv2d(input=images, weight=Hz_prime.unsqueeze(2), groups=batch_size*num_channels) - images = conv2d_gradfix.conv2d(input=images, weight=Hz_prime.unsqueeze(3), groups=batch_size*num_channels) - images = images.reshape([batch_size, num_channels, height, width]) - - # ------------------------ - # Image-space corruptions. 
- # ------------------------ - - # Apply additive RGB noise with probability (noise * strength). - if self.noise > 0: - sigma = torch.randn([batch_size, 1, 1, 1], device=device).abs() * self.noise_std - sigma = torch.where(torch.rand([batch_size, 1, 1, 1], device=device) < self.noise * self.p, sigma, torch.zeros_like(sigma)) - if debug_percentile is not None: - sigma = torch.full_like(sigma, torch.erfinv(debug_percentile) * self.noise_std) - images = images + torch.randn([batch_size, num_channels, height, width], device=device) * sigma - - # Apply cutout with probability (cutout * strength). - if self.cutout > 0: - size = torch.full([batch_size, 2, 1, 1, 1], self.cutout_size, device=device) - size = torch.where(torch.rand([batch_size, 1, 1, 1, 1], device=device) < self.cutout * self.p, size, torch.zeros_like(size)) - center = torch.rand([batch_size, 2, 1, 1, 1], device=device) - if debug_percentile is not None: - size = torch.full_like(size, self.cutout_size) - center = torch.full_like(center, debug_percentile) - coord_x = torch.arange(width, device=device).reshape([1, 1, 1, -1]) - coord_y = torch.arange(height, device=device).reshape([1, 1, -1, 1]) - mask_x = (((coord_x + 0.5) / width - center[:, 0]).abs() >= size[:, 0] / 2) - mask_y = (((coord_y + 0.5) / height - center[:, 1]).abs() >= size[:, 1] / 2) - mask = torch.logical_or(mask_x, mask_y).to(torch.float32) - images = images * mask - - return images - -#---------------------------------------------------------------------------- diff --git a/spaces/alamin655/websurfx/src/cache/mod.rs b/spaces/alamin655/websurfx/src/cache/mod.rs deleted file mode 100644 index 887f119070e57655d00c21b8711fa9e3f0a92002..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/src/cache/mod.rs +++ /dev/null @@ -1,7 +0,0 @@ -//! This module provides the modules which provide the functionality to cache the aggregated -//! results fetched and aggregated from the upstream search engines in a json format. - -pub mod cacher; -pub mod error; -#[cfg(feature = "redis-cache")] -pub mod redis_cacher; diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/packaging/specifiers.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/packaging/specifiers.py deleted file mode 100644 index 0e218a6f9f75ea2060a8b08d1f1a043fdad68df8..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/packaging/specifiers.py +++ /dev/null @@ -1,802 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import abc -import functools -import itertools -import re -import warnings -from typing import ( - Callable, - Dict, - Iterable, - Iterator, - List, - Optional, - Pattern, - Set, - Tuple, - TypeVar, - Union, -) - -from .utils import canonicalize_version -from .version import LegacyVersion, Version, parse - -ParsedVersion = Union[Version, LegacyVersion] -UnparsedVersion = Union[Version, LegacyVersion, str] -VersionTypeVar = TypeVar("VersionTypeVar", bound=UnparsedVersion) -CallableOperator = Callable[[ParsedVersion, str], bool] - - -class InvalidSpecifier(ValueError): - """ - An invalid specifier was found, users should refer to PEP 440. - """ - - -class BaseSpecifier(metaclass=abc.ABCMeta): - @abc.abstractmethod - def __str__(self) -> str: - """ - Returns the str representation of this Specifier like object. 
This - should be representative of the Specifier itself. - """ - - @abc.abstractmethod - def __hash__(self) -> int: - """ - Returns a hash value for this Specifier like object. - """ - - @abc.abstractmethod - def __eq__(self, other: object) -> bool: - """ - Returns a boolean representing whether or not the two Specifier like - objects are equal. - """ - - @abc.abstractproperty - def prereleases(self) -> Optional[bool]: - """ - Returns whether or not pre-releases as a whole are allowed by this - specifier. - """ - - @prereleases.setter - def prereleases(self, value: bool) -> None: - """ - Sets whether or not pre-releases as a whole are allowed by this - specifier. - """ - - @abc.abstractmethod - def contains(self, item: str, prereleases: Optional[bool] = None) -> bool: - """ - Determines if the given item is contained within this specifier. - """ - - @abc.abstractmethod - def filter( - self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None - ) -> Iterable[VersionTypeVar]: - """ - Takes an iterable of items and filters them so that only items which - are contained within this specifier are allowed in it. - """ - - -class _IndividualSpecifier(BaseSpecifier): - - _operators: Dict[str, str] = {} - _regex: Pattern[str] - - def __init__(self, spec: str = "", prereleases: Optional[bool] = None) -> None: - match = self._regex.search(spec) - if not match: - raise InvalidSpecifier(f"Invalid specifier: '{spec}'") - - self._spec: Tuple[str, str] = ( - match.group("operator").strip(), - match.group("version").strip(), - ) - - # Store whether or not this Specifier should accept prereleases - self._prereleases = prereleases - - def __repr__(self) -> str: - pre = ( - f", prereleases={self.prereleases!r}" - if self._prereleases is not None - else "" - ) - - return f"<{self.__class__.__name__}({str(self)!r}{pre})>" - - def __str__(self) -> str: - return "{}{}".format(*self._spec) - - @property - def _canonical_spec(self) -> Tuple[str, str]: - return self._spec[0], canonicalize_version(self._spec[1]) - - def __hash__(self) -> int: - return hash(self._canonical_spec) - - def __eq__(self, other: object) -> bool: - if isinstance(other, str): - try: - other = self.__class__(str(other)) - except InvalidSpecifier: - return NotImplemented - elif not isinstance(other, self.__class__): - return NotImplemented - - return self._canonical_spec == other._canonical_spec - - def _get_operator(self, op: str) -> CallableOperator: - operator_callable: CallableOperator = getattr( - self, f"_compare_{self._operators[op]}" - ) - return operator_callable - - def _coerce_version(self, version: UnparsedVersion) -> ParsedVersion: - if not isinstance(version, (LegacyVersion, Version)): - version = parse(version) - return version - - @property - def operator(self) -> str: - return self._spec[0] - - @property - def version(self) -> str: - return self._spec[1] - - @property - def prereleases(self) -> Optional[bool]: - return self._prereleases - - @prereleases.setter - def prereleases(self, value: bool) -> None: - self._prereleases = value - - def __contains__(self, item: str) -> bool: - return self.contains(item) - - def contains( - self, item: UnparsedVersion, prereleases: Optional[bool] = None - ) -> bool: - - # Determine if prereleases are to be allowed or not. 
- if prereleases is None: - prereleases = self.prereleases - - # Normalize item to a Version or LegacyVersion, this allows us to have - # a shortcut for ``"2.0" in Specifier(">=2") - normalized_item = self._coerce_version(item) - - # Determine if we should be supporting prereleases in this specifier - # or not, if we do not support prereleases than we can short circuit - # logic if this version is a prereleases. - if normalized_item.is_prerelease and not prereleases: - return False - - # Actually do the comparison to determine if this item is contained - # within this Specifier or not. - operator_callable: CallableOperator = self._get_operator(self.operator) - return operator_callable(normalized_item, self.version) - - def filter( - self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None - ) -> Iterable[VersionTypeVar]: - - yielded = False - found_prereleases = [] - - kw = {"prereleases": prereleases if prereleases is not None else True} - - # Attempt to iterate over all the values in the iterable and if any of - # them match, yield them. - for version in iterable: - parsed_version = self._coerce_version(version) - - if self.contains(parsed_version, **kw): - # If our version is a prerelease, and we were not set to allow - # prereleases, then we'll store it for later in case nothing - # else matches this specifier. - if parsed_version.is_prerelease and not ( - prereleases or self.prereleases - ): - found_prereleases.append(version) - # Either this is not a prerelease, or we should have been - # accepting prereleases from the beginning. - else: - yielded = True - yield version - - # Now that we've iterated over everything, determine if we've yielded - # any values, and if we have not and we have any prereleases stored up - # then we will go ahead and yield the prereleases. - if not yielded and found_prereleases: - for version in found_prereleases: - yield version - - -class LegacySpecifier(_IndividualSpecifier): - - _regex_str = r""" - (?P(==|!=|<=|>=|<|>)) - \s* - (?P - [^,;\s)]* # Since this is a "legacy" specifier, and the version - # string can be just about anything, we match everything - # except for whitespace, a semi-colon for marker support, - # a closing paren since versions can be enclosed in - # them, and a comma since it's a version separator. 
- ) - """ - - _regex = re.compile(r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE) - - _operators = { - "==": "equal", - "!=": "not_equal", - "<=": "less_than_equal", - ">=": "greater_than_equal", - "<": "less_than", - ">": "greater_than", - } - - def __init__(self, spec: str = "", prereleases: Optional[bool] = None) -> None: - super().__init__(spec, prereleases) - - warnings.warn( - "Creating a LegacyVersion has been deprecated and will be " - "removed in the next major release", - DeprecationWarning, - ) - - def _coerce_version(self, version: UnparsedVersion) -> LegacyVersion: - if not isinstance(version, LegacyVersion): - version = LegacyVersion(str(version)) - return version - - def _compare_equal(self, prospective: LegacyVersion, spec: str) -> bool: - return prospective == self._coerce_version(spec) - - def _compare_not_equal(self, prospective: LegacyVersion, spec: str) -> bool: - return prospective != self._coerce_version(spec) - - def _compare_less_than_equal(self, prospective: LegacyVersion, spec: str) -> bool: - return prospective <= self._coerce_version(spec) - - def _compare_greater_than_equal( - self, prospective: LegacyVersion, spec: str - ) -> bool: - return prospective >= self._coerce_version(spec) - - def _compare_less_than(self, prospective: LegacyVersion, spec: str) -> bool: - return prospective < self._coerce_version(spec) - - def _compare_greater_than(self, prospective: LegacyVersion, spec: str) -> bool: - return prospective > self._coerce_version(spec) - - -def _require_version_compare( - fn: Callable[["Specifier", ParsedVersion, str], bool] -) -> Callable[["Specifier", ParsedVersion, str], bool]: - @functools.wraps(fn) - def wrapped(self: "Specifier", prospective: ParsedVersion, spec: str) -> bool: - if not isinstance(prospective, Version): - return False - return fn(self, prospective, spec) - - return wrapped - - -class Specifier(_IndividualSpecifier): - - _regex_str = r""" - (?P(~=|==|!=|<=|>=|<|>|===)) - (?P - (?: - # The identity operators allow for an escape hatch that will - # do an exact string match of the version you wish to install. - # This will not be parsed by PEP 440 and we cannot determine - # any semantic meaning from it. This operator is discouraged - # but included entirely as an escape hatch. - (?<====) # Only match for the identity operator - \s* - [^\s]* # We just match everything, except for whitespace - # since we are only testing for strict identity. - ) - | - (?: - # The (non)equality operators allow for wild card and local - # versions to be specified so we have to define these two - # operators separately to enable that. - (?<===|!=) # Only match for equals and not equals - - \s* - v? - (?:[0-9]+!)? # epoch - [0-9]+(?:\.[0-9]+)* # release - (?: # pre release - [-_\.]? - (a|b|c|rc|alpha|beta|pre|preview) - [-_\.]? - [0-9]* - )? - (?: # post release - (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*) - )? - - # You cannot use a wild card and a dev or local version - # together so group them with a | and make them optional. - (?: - (?:[-_\.]?dev[-_\.]?[0-9]*)? # dev release - (?:\+[a-z0-9]+(?:[-_\.][a-z0-9]+)*)? # local - | - \.\* # Wild card syntax of .* - )? - ) - | - (?: - # The compatible operator requires at least two digits in the - # release segment. - (?<=~=) # Only match for the compatible operator - - \s* - v? - (?:[0-9]+!)? # epoch - [0-9]+(?:\.[0-9]+)+ # release (We have a + instead of a *) - (?: # pre release - [-_\.]? - (a|b|c|rc|alpha|beta|pre|preview) - [-_\.]? - [0-9]* - )? 
- (?: # post release - (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*) - )? - (?:[-_\.]?dev[-_\.]?[0-9]*)? # dev release - ) - | - (?: - # All other operators only allow a sub set of what the - # (non)equality operators do. Specifically they do not allow - # local versions to be specified nor do they allow the prefix - # matching wild cards. - (?=": "greater_than_equal", - "<": "less_than", - ">": "greater_than", - "===": "arbitrary", - } - - @_require_version_compare - def _compare_compatible(self, prospective: ParsedVersion, spec: str) -> bool: - - # Compatible releases have an equivalent combination of >= and ==. That - # is that ~=2.2 is equivalent to >=2.2,==2.*. This allows us to - # implement this in terms of the other specifiers instead of - # implementing it ourselves. The only thing we need to do is construct - # the other specifiers. - - # We want everything but the last item in the version, but we want to - # ignore suffix segments. - prefix = ".".join( - list(itertools.takewhile(_is_not_suffix, _version_split(spec)))[:-1] - ) - - # Add the prefix notation to the end of our string - prefix += ".*" - - return self._get_operator(">=")(prospective, spec) and self._get_operator("==")( - prospective, prefix - ) - - @_require_version_compare - def _compare_equal(self, prospective: ParsedVersion, spec: str) -> bool: - - # We need special logic to handle prefix matching - if spec.endswith(".*"): - # In the case of prefix matching we want to ignore local segment. - prospective = Version(prospective.public) - # Split the spec out by dots, and pretend that there is an implicit - # dot in between a release segment and a pre-release segment. - split_spec = _version_split(spec[:-2]) # Remove the trailing .* - - # Split the prospective version out by dots, and pretend that there - # is an implicit dot in between a release segment and a pre-release - # segment. - split_prospective = _version_split(str(prospective)) - - # Shorten the prospective version to be the same length as the spec - # so that we can determine if the specifier is a prefix of the - # prospective version or not. - shortened_prospective = split_prospective[: len(split_spec)] - - # Pad out our two sides with zeros so that they both equal the same - # length. - padded_spec, padded_prospective = _pad_version( - split_spec, shortened_prospective - ) - - return padded_prospective == padded_spec - else: - # Convert our spec string into a Version - spec_version = Version(spec) - - # If the specifier does not have a local segment, then we want to - # act as if the prospective version also does not have a local - # segment. - if not spec_version.local: - prospective = Version(prospective.public) - - return prospective == spec_version - - @_require_version_compare - def _compare_not_equal(self, prospective: ParsedVersion, spec: str) -> bool: - return not self._compare_equal(prospective, spec) - - @_require_version_compare - def _compare_less_than_equal(self, prospective: ParsedVersion, spec: str) -> bool: - - # NB: Local version identifiers are NOT permitted in the version - # specifier, so local version labels can be universally removed from - # the prospective version. - return Version(prospective.public) <= Version(spec) - - @_require_version_compare - def _compare_greater_than_equal( - self, prospective: ParsedVersion, spec: str - ) -> bool: - - # NB: Local version identifiers are NOT permitted in the version - # specifier, so local version labels can be universally removed from - # the prospective version. 
- return Version(prospective.public) >= Version(spec) - - @_require_version_compare - def _compare_less_than(self, prospective: ParsedVersion, spec_str: str) -> bool: - - # Convert our spec to a Version instance, since we'll want to work with - # it as a version. - spec = Version(spec_str) - - # Check to see if the prospective version is less than the spec - # version. If it's not we can short circuit and just return False now - # instead of doing extra unneeded work. - if not prospective < spec: - return False - - # This special case is here so that, unless the specifier itself - # includes is a pre-release version, that we do not accept pre-release - # versions for the version mentioned in the specifier (e.g. <3.1 should - # not match 3.1.dev0, but should match 3.0.dev0). - if not spec.is_prerelease and prospective.is_prerelease: - if Version(prospective.base_version) == Version(spec.base_version): - return False - - # If we've gotten to here, it means that prospective version is both - # less than the spec version *and* it's not a pre-release of the same - # version in the spec. - return True - - @_require_version_compare - def _compare_greater_than(self, prospective: ParsedVersion, spec_str: str) -> bool: - - # Convert our spec to a Version instance, since we'll want to work with - # it as a version. - spec = Version(spec_str) - - # Check to see if the prospective version is greater than the spec - # version. If it's not we can short circuit and just return False now - # instead of doing extra unneeded work. - if not prospective > spec: - return False - - # This special case is here so that, unless the specifier itself - # includes is a post-release version, that we do not accept - # post-release versions for the version mentioned in the specifier - # (e.g. >3.1 should not match 3.0.post0, but should match 3.2.post0). - if not spec.is_postrelease and prospective.is_postrelease: - if Version(prospective.base_version) == Version(spec.base_version): - return False - - # Ensure that we do not allow a local version of the version mentioned - # in the specifier, which is technically greater than, to match. - if prospective.local is not None: - if Version(prospective.base_version) == Version(spec.base_version): - return False - - # If we've gotten to here, it means that prospective version is both - # greater than the spec version *and* it's not a pre-release of the - # same version in the spec. - return True - - def _compare_arbitrary(self, prospective: Version, spec: str) -> bool: - return str(prospective).lower() == str(spec).lower() - - @property - def prereleases(self) -> bool: - - # If there is an explicit prereleases set for this, then we'll just - # blindly use that. - if self._prereleases is not None: - return self._prereleases - - # Look at all of our specifiers and determine if they are inclusive - # operators, and if they are if they are including an explicit - # prerelease. - operator, version = self._spec - if operator in ["==", ">=", "<=", "~=", "==="]: - # The == specifier can include a trailing .*, if it does we - # want to remove before parsing. - if operator == "==" and version.endswith(".*"): - version = version[:-2] - - # Parse the version, and if it is a pre-release than this - # specifier allows pre-releases. 
-            if parse(version).is_prerelease:
-                return True
-
-        return False
-
-    @prereleases.setter
-    def prereleases(self, value: bool) -> None:
-        self._prereleases = value
-
-
-_prefix_regex = re.compile(r"^([0-9]+)((?:a|b|c|rc)[0-9]+)$")
-
-
-def _version_split(version: str) -> List[str]:
-    result: List[str] = []
-    for item in version.split("."):
-        match = _prefix_regex.search(item)
-        if match:
-            result.extend(match.groups())
-        else:
-            result.append(item)
-    return result
-
-
-def _is_not_suffix(segment: str) -> bool:
-    return not any(
-        segment.startswith(prefix) for prefix in ("dev", "a", "b", "rc", "post")
-    )
-
-
-def _pad_version(left: List[str], right: List[str]) -> Tuple[List[str], List[str]]:
-    left_split, right_split = [], []
-
-    # Get the release segment of our versions
-    left_split.append(list(itertools.takewhile(lambda x: x.isdigit(), left)))
-    right_split.append(list(itertools.takewhile(lambda x: x.isdigit(), right)))
-
-    # Get the rest of our versions
-    left_split.append(left[len(left_split[0]) :])
-    right_split.append(right[len(right_split[0]) :])
-
-    # Insert our padding
-    left_split.insert(1, ["0"] * max(0, len(right_split[0]) - len(left_split[0])))
-    right_split.insert(1, ["0"] * max(0, len(left_split[0]) - len(right_split[0])))
-
-    return (list(itertools.chain(*left_split)), list(itertools.chain(*right_split)))
-
-
-class SpecifierSet(BaseSpecifier):
-    def __init__(
-        self, specifiers: str = "", prereleases: Optional[bool] = None
-    ) -> None:
-
-        # Split on , to break each individual specifier into it's own item, and
-        # strip each item to remove leading/trailing whitespace.
-        split_specifiers = [s.strip() for s in specifiers.split(",") if s.strip()]
-
-        # Parsed each individual specifier, attempting first to make it a
-        # Specifier and falling back to a LegacySpecifier.
-        parsed: Set[_IndividualSpecifier] = set()
-        for specifier in split_specifiers:
-            try:
-                parsed.add(Specifier(specifier))
-            except InvalidSpecifier:
-                parsed.add(LegacySpecifier(specifier))
-
-        # Turn our parsed specifiers into a frozen set and save them for later.
-        self._specs = frozenset(parsed)
-
-        # Store our prereleases value so we can use it later to determine if
-        # we accept prereleases or not.
-        self._prereleases = prereleases
-
-    def __repr__(self) -> str:
-        pre = (
-            f", prereleases={self.prereleases!r}"
-            if self._prereleases is not None
-            else ""
-        )
-
-        return f"<SpecifierSet({str(self)!r}{pre})>"
-
-    def __str__(self) -> str:
-        return ",".join(sorted(str(s) for s in self._specs))
-
-    def __hash__(self) -> int:
-        return hash(self._specs)
-
-    def __and__(self, other: Union["SpecifierSet", str]) -> "SpecifierSet":
-        if isinstance(other, str):
-            other = SpecifierSet(other)
-        elif not isinstance(other, SpecifierSet):
-            return NotImplemented
-
-        specifier = SpecifierSet()
-        specifier._specs = frozenset(self._specs | other._specs)
-
-        if self._prereleases is None and other._prereleases is not None:
-            specifier._prereleases = other._prereleases
-        elif self._prereleases is not None and other._prereleases is None:
-            specifier._prereleases = self._prereleases
-        elif self._prereleases == other._prereleases:
-            specifier._prereleases = self._prereleases
-        else:
-            raise ValueError(
-                "Cannot combine SpecifierSets with True and False prerelease "
-                "overrides."
- ) - - return specifier - - def __eq__(self, other: object) -> bool: - if isinstance(other, (str, _IndividualSpecifier)): - other = SpecifierSet(str(other)) - elif not isinstance(other, SpecifierSet): - return NotImplemented - - return self._specs == other._specs - - def __len__(self) -> int: - return len(self._specs) - - def __iter__(self) -> Iterator[_IndividualSpecifier]: - return iter(self._specs) - - @property - def prereleases(self) -> Optional[bool]: - - # If we have been given an explicit prerelease modifier, then we'll - # pass that through here. - if self._prereleases is not None: - return self._prereleases - - # If we don't have any specifiers, and we don't have a forced value, - # then we'll just return None since we don't know if this should have - # pre-releases or not. - if not self._specs: - return None - - # Otherwise we'll see if any of the given specifiers accept - # prereleases, if any of them do we'll return True, otherwise False. - return any(s.prereleases for s in self._specs) - - @prereleases.setter - def prereleases(self, value: bool) -> None: - self._prereleases = value - - def __contains__(self, item: UnparsedVersion) -> bool: - return self.contains(item) - - def contains( - self, item: UnparsedVersion, prereleases: Optional[bool] = None - ) -> bool: - - # Ensure that our item is a Version or LegacyVersion instance. - if not isinstance(item, (LegacyVersion, Version)): - item = parse(item) - - # Determine if we're forcing a prerelease or not, if we're not forcing - # one for this particular filter call, then we'll use whatever the - # SpecifierSet thinks for whether or not we should support prereleases. - if prereleases is None: - prereleases = self.prereleases - - # We can determine if we're going to allow pre-releases by looking to - # see if any of the underlying items supports them. If none of them do - # and this item is a pre-release then we do not allow it and we can - # short circuit that here. - # Note: This means that 1.0.dev1 would not be contained in something - # like >=1.0.devabc however it would be in >=1.0.debabc,>0.0.dev0 - if not prereleases and item.is_prerelease: - return False - - # We simply dispatch to the underlying specs here to make sure that the - # given version is contained within all of them. - # Note: This use of all() here means that an empty set of specifiers - # will always return True, this is an explicit design decision. - return all(s.contains(item, prereleases=prereleases) for s in self._specs) - - def filter( - self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None - ) -> Iterable[VersionTypeVar]: - - # Determine if we're forcing a prerelease or not, if we're not forcing - # one for this particular filter call, then we'll use whatever the - # SpecifierSet thinks for whether or not we should support prereleases. - if prereleases is None: - prereleases = self.prereleases - - # If we have any specifiers, then we want to wrap our iterable in the - # filter method for each one, this will act as a logical AND amongst - # each specifier. - if self._specs: - for spec in self._specs: - iterable = spec.filter(iterable, prereleases=bool(prereleases)) - return iterable - # If we do not have any specifiers, then we need to have a rough filter - # which will filter out any pre-releases, unless there are no final - # releases, and which will filter out LegacyVersion in general. 
- else: - filtered: List[VersionTypeVar] = [] - found_prereleases: List[VersionTypeVar] = [] - - item: UnparsedVersion - parsed_version: Union[Version, LegacyVersion] - - for item in iterable: - # Ensure that we some kind of Version class for this item. - if not isinstance(item, (LegacyVersion, Version)): - parsed_version = parse(item) - else: - parsed_version = item - - # Filter out any item which is parsed as a LegacyVersion - if isinstance(parsed_version, LegacyVersion): - continue - - # Store any item which is a pre-release for later unless we've - # already found a final version or we are accepting prereleases - if parsed_version.is_prerelease and not prereleases: - if not filtered: - found_prereleases.append(item) - else: - filtered.append(item) - - # If we've found no items except for pre-releases, then we'll go - # ahead and use the pre-releases - if not filtered and found_prereleases and prereleases is None: - return found_prereleases - - return filtered diff --git a/spaces/aliabid94/AutoGPT/autogpt/config/__init__.py b/spaces/aliabid94/AutoGPT/autogpt/config/__init__.py deleted file mode 100644 index 726b6dcf3da95968b948c4d897e97a9cdd0928ff..0000000000000000000000000000000000000000 --- a/spaces/aliabid94/AutoGPT/autogpt/config/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -""" -This module contains the configuration classes for AutoGPT. -""" -from autogpt.config.ai_config import AIConfig -from autogpt.config.config import Config, check_openai_api_key -from autogpt.config.singleton import AbstractSingleton, Singleton - -__all__ = [ - "check_openai_api_key", - "AbstractSingleton", - "AIConfig", - "Config", - "Singleton", -] diff --git a/spaces/allknowingroger/Image-Models-Test191/app.py b/spaces/allknowingroger/Image-Models-Test191/app.py deleted file mode 100644 index c424d2deb320116f5a6d5d316575ded7582bb28f..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test191/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "Yntec/OpenNijiRemix", - "Suchitha27/my-pet-cat", - "Yacong/allu-lora-trained-xl", - "Yntec/3DCute", - "frankmoire/nyjha-1-1", - "MirageML/lowpoly-game-building", - "freedeluxetrain/9angles", - "orca3315/lora-trained-xl", - "robert123231/coloringbookgenerator", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return 
gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/jportaudio/src/com/portaudio/HostApiInfo.java b/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/jportaudio/src/com/portaudio/HostApiInfo.java deleted file mode 100644 index dc2cdceb1bcc5a840d29ac97000e2ed26e4a8a06..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/jportaudio/src/com/portaudio/HostApiInfo.java +++ /dev/null @@ -1,61 +0,0 @@ -/* - * Portable Audio I/O Library - * Java Binding for PortAudio - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 2008 Ross Bencina - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. 
- * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup bindings_java - - @brief Information about a JPortAudio Host API. -*/ -package com.portaudio; - -/** - * Equivalent to PaHostApiInfo - * @see PortAudio - * @see DeviceInfo - * @author Phil Burk - * - */ -public class HostApiInfo -{ - public int version; - public int type; - public String name; - public int deviceCount; - public int defaultInputDevice; - public int defaultOutputDevice; -} diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/jack/pa_jack.c b/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/jack/pa_jack.c deleted file mode 100644 index 124c0f8b2e239f9d19c53daf67bb591a84dcd1b0..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/jack/pa_jack.c +++ /dev/null @@ -1,1826 +0,0 @@ -/* - * $Id$ - * PortAudio Portable Real-Time Audio Library - * Latest Version at: http://www.portaudio.com - * JACK Implementation by Joshua Haberman - * - * Copyright (c) 2004 Stefan Westerfeld - * Copyright (c) 2004 Arve Knudsen - * Copyright (c) 2002 Joshua Haberman - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 1999-2002 Ross Bencina, Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-/**
- @file
- @ingroup hostapi_src
-*/
-
-#include <string.h>
-#include <regex.h>
-#include <stdlib.h>
-#include <stdio.h>
-#include <assert.h>
-#include <sys/types.h>
-#include <unistd.h>
-#include <errno.h>  /* EBUSY */
-#include <signal.h> /* sig_atomic_t */
-#include <math.h>
-#include <semaphore.h>
-
-#include <jack/types.h>
-#include <jack/jack.h>
-
-#include "pa_util.h"
-#include "pa_hostapi.h"
-#include "pa_stream.h"
-#include "pa_process.h"
-#include "pa_allocation.h"
-#include "pa_cpuload.h"
-#include "pa_ringbuffer.h"
-#include "pa_debugprint.h"
-
-#include "pa_jack.h"
-
-static pthread_t mainThread_;
-static char *jackErr_ = NULL;
-static const char* clientName_ = "PortAudio";
-static const char* port_regex_suffix = ":.*";
-
-#define STRINGIZE_HELPER(expr) #expr
-#define STRINGIZE(expr) STRINGIZE_HELPER(expr)
-
-/* Check PaError */
-#define ENSURE_PA(expr) \
-    do { \
-        PaError paErr; \
-        if( (paErr = (expr)) < paNoError ) \
-        { \
-            if( (paErr) == paUnanticipatedHostError && pthread_self() == mainThread_ ) \
-            { \
-                const char *err = jackErr_; \
-                if (! err ) err = "unknown error"; \
-                PaUtil_SetLastHostErrorInfo( paJACK, -1, err ); \
-            } \
-            PaUtil_DebugPrint(( "Expression '" #expr "' failed in '" __FILE__ "', line: " STRINGIZE( __LINE__ ) "\n" )); \
-            result = paErr; \
-            goto error; \
-        } \
-    } while( 0 )
-
-#define UNLESS(expr, code) \
-    do { \
-        if( (expr) == 0 ) \
-        { \
-            if( (code) == paUnanticipatedHostError && pthread_self() == mainThread_ ) \
-            { \
-                const char *err = jackErr_; \
-                if (!err) err = "unknown error"; \
-                PaUtil_SetLastHostErrorInfo( paJACK, -1, err ); \
-            } \
-            PaUtil_DebugPrint(( "Expression '" #expr "' failed in '" __FILE__ "', line: " STRINGIZE( __LINE__ ) "\n" )); \
-            result = (code); \
-            goto error; \
-        } \
-    } while( 0 )
-
-#define ASSERT_CALL(expr, success) \
-    do { \
-        int err = (expr); \
-        assert( err == success ); \
-    } while( 0 )
-
-/*
- * Functions that directly map to the PortAudio stream interface
- */
-
-static void Terminate( struct PaUtilHostApiRepresentation *hostApi );
-static PaError IsFormatSupported( struct PaUtilHostApiRepresentation *hostApi,
-                                  const PaStreamParameters *inputParameters,
-                                  const PaStreamParameters *outputParameters,
-                                  double sampleRate );
-static PaError OpenStream( struct PaUtilHostApiRepresentation *hostApi,
-                           PaStream** s,
-                           const PaStreamParameters *inputParameters,
-                           const PaStreamParameters *outputParameters,
-                           double sampleRate,
-                           unsigned long framesPerBuffer,
-                           PaStreamFlags streamFlags,
-                           PaStreamCallback *streamCallback,
-                           void *userData );
-static PaError CloseStream( PaStream* stream );
-static PaError StartStream( PaStream *stream );
-static PaError StopStream( PaStream *stream );
-static PaError AbortStream( PaStream *stream );
-static PaError IsStreamStopped( PaStream *s );
-static PaError IsStreamActive( PaStream *stream );
-/*static PaTime GetStreamInputLatency( PaStream *stream );*/
-/*static PaTime GetStreamOutputLatency( PaStream *stream );*/
-static PaTime GetStreamTime( PaStream *stream );
-static double GetStreamCpuLoad( PaStream* stream );
-
-
-/*
- * Data specific to this API
- */
-
-struct PaJackStream;
-
-typedef struct
-{
-    PaUtilHostApiRepresentation commonHostApiRep;
-
PaUtilStreamInterface callbackStreamInterface; - PaUtilStreamInterface blockingStreamInterface; - - PaUtilAllocationGroup *deviceInfoMemory; - - jack_client_t *jack_client; - int jack_buffer_size; - PaHostApiIndex hostApiIndex; - - pthread_mutex_t mtx; - pthread_cond_t cond; - unsigned long inputBase, outputBase; - - /* For dealing with the process thread */ - volatile int xrun; /* Received xrun notification from JACK? */ - struct PaJackStream * volatile toAdd, * volatile toRemove; - struct PaJackStream *processQueue; - volatile sig_atomic_t jackIsDown; -} -PaJackHostApiRepresentation; - -/* PaJackStream - a stream data structure specifically for this implementation */ - -typedef struct PaJackStream -{ - PaUtilStreamRepresentation streamRepresentation; - PaUtilBufferProcessor bufferProcessor; - PaUtilCpuLoadMeasurer cpuLoadMeasurer; - PaJackHostApiRepresentation *hostApi; - - /* our input and output ports */ - jack_port_t **local_input_ports; - jack_port_t **local_output_ports; - - /* the input and output ports of the client we are connecting to */ - jack_port_t **remote_input_ports; - jack_port_t **remote_output_ports; - - int num_incoming_connections; - int num_outgoing_connections; - - jack_client_t *jack_client; - - /* The stream is running if it's still producing samples. - * The stream is active if samples it produced are still being heard. - */ - volatile sig_atomic_t is_running; - volatile sig_atomic_t is_active; - /* Used to signal processing thread that stream should start or stop, respectively */ - volatile sig_atomic_t doStart, doStop, doAbort; - - jack_nframes_t t0; - - PaUtilAllocationGroup *stream_memory; - - /* These are useful in the process callback */ - - int callbackResult; - int isSilenced; - int xrun; - - /* These are useful for the blocking API */ - - int isBlockingStream; - PaUtilRingBuffer inFIFO; - PaUtilRingBuffer outFIFO; - volatile sig_atomic_t data_available; - sem_t data_semaphore; - int bytesPerFrame; - int samplesPerFrame; - - struct PaJackStream *next; -} -PaJackStream; - -/* In calls to jack_get_ports() this filter expression is used instead of "" - * to prevent any other types (eg Midi ports etc) being listed */ -#define JACK_PORT_TYPE_FILTER "audio" - -#define TRUE 1 -#define FALSE 0 - -/* - * Functions specific to this API - */ - -static int JackCallback( jack_nframes_t frames, void *userData ); - - -/* - * - * Implementation - * - */ - -/* ---- blocking emulation layer ---- */ - -/* Allocate buffer. */ -static PaError BlockingInitFIFO( PaUtilRingBuffer *rbuf, long numFrames, long bytesPerFrame ) -{ - long numBytes = numFrames * bytesPerFrame; - char *buffer = (char *) malloc( numBytes ); - if( buffer == NULL ) return paInsufficientMemory; - memset( buffer, 0, numBytes ); - return (PaError) PaUtil_InitializeRingBuffer( rbuf, 1, numBytes, buffer ); -} - -/* Free buffer. */ -static PaError BlockingTermFIFO( PaUtilRingBuffer *rbuf ) -{ - if( rbuf->buffer ) free( rbuf->buffer ); - rbuf->buffer = NULL; - return paNoError; -} - -static int -BlockingCallback( const void *inputBuffer, - void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - struct PaJackStream *stream = (PaJackStream *)userData; - long numBytes = stream->bytesPerFrame * framesPerBuffer; - - /* This may get called with NULL inputBuffer during initial setup. 
*/ - if( inputBuffer != NULL ) - { - PaUtil_WriteRingBuffer( &stream->inFIFO, inputBuffer, numBytes ); - } - if( outputBuffer != NULL ) - { - int numRead = PaUtil_ReadRingBuffer( &stream->outFIFO, outputBuffer, numBytes ); - /* Zero out remainder of buffer if we run out of data. */ - memset( (char *)outputBuffer + numRead, 0, numBytes - numRead ); - } - - if( !stream->data_available ) - { - stream->data_available = 1; - sem_post( &stream->data_semaphore ); - } - return paContinue; -} - -static PaError -BlockingBegin( PaJackStream *stream, int minimum_buffer_size ) -{ - long doRead = 0; - long doWrite = 0; - PaError result = paNoError; - long numFrames; - - doRead = stream->local_input_ports != NULL; - doWrite = stream->local_output_ports != NULL; - /* */ - stream->samplesPerFrame = 2; - stream->bytesPerFrame = sizeof(float) * stream->samplesPerFrame; - /* */ - numFrames = 32; - while (numFrames < minimum_buffer_size) - numFrames *= 2; - - if( doRead ) - { - ENSURE_PA( BlockingInitFIFO( &stream->inFIFO, numFrames, stream->bytesPerFrame ) ); - } - if( doWrite ) - { - long numBytes; - - ENSURE_PA( BlockingInitFIFO( &stream->outFIFO, numFrames, stream->bytesPerFrame ) ); - - /* Make Write FIFO appear full initially. */ - numBytes = PaUtil_GetRingBufferWriteAvailable( &stream->outFIFO ); - PaUtil_AdvanceRingBufferWriteIndex( &stream->outFIFO, numBytes ); - } - - stream->data_available = 0; - sem_init( &stream->data_semaphore, 0, 0 ); - -error: - return result; -} - -static void -BlockingEnd( PaJackStream *stream ) -{ - BlockingTermFIFO( &stream->inFIFO ); - BlockingTermFIFO( &stream->outFIFO ); - - sem_destroy( &stream->data_semaphore ); -} - -static PaError BlockingReadStream( PaStream* s, void *data, unsigned long numFrames ) -{ - PaError result = paNoError; - PaJackStream *stream = (PaJackStream *)s; - - long bytesRead; - char *p = (char *) data; - long numBytes = stream->bytesPerFrame * numFrames; - while( numBytes > 0 ) - { - bytesRead = PaUtil_ReadRingBuffer( &stream->inFIFO, p, numBytes ); - numBytes -= bytesRead; - p += bytesRead; - if( numBytes > 0 ) - { - /* see write for an explanation */ - if( stream->data_available ) - stream->data_available = 0; - else - sem_wait( &stream->data_semaphore ); - } - } - - return result; -} - -static PaError BlockingWriteStream( PaStream* s, const void *data, unsigned long numFrames ) -{ - PaError result = paNoError; - PaJackStream *stream = (PaJackStream *)s; - long bytesWritten; - char *p = (char *) data; - long numBytes = stream->bytesPerFrame * numFrames; - while( numBytes > 0 ) - { - bytesWritten = PaUtil_WriteRingBuffer( &stream->outFIFO, p, numBytes ); - numBytes -= bytesWritten; - p += bytesWritten; - if( numBytes > 0 ) - { - /* we use the following algorithm: - * (1) write data - * (2) if some data didn't fit into the ringbuffer, set data_available to 0 - * to indicate to the audio that if space becomes available, we want to know - * (3) retry to write data (because it might be that between (1) and (2) - * new space in the buffer became available) - * (4) if this failed, we are sure that the buffer is really empty and - * we will definitely receive a notification when it becomes available - * thus we can safely sleep - * - * if the algorithm bailed out in step (3) before, it leaks a count of 1 - * on the semaphore; however, it doesn't matter, because if we block in (4), - * we also do it in a loop - */ - if( stream->data_available ) - stream->data_available = 0; - else - sem_wait( &stream->data_semaphore ); - } - } - - return result; -} - 
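
The blocking helpers above (`BlockingWriteStream` and its read counterpart) layer a ring buffer and a counting semaphore under PortAudio's callback model: the writer copies what fits, and it only sleeps after clearing `data_available` and retrying once, so a wake-up posted by the audio callback between a failed write and the wait can never be lost. The following is a minimal sketch of that handshake, not PortAudio code: `Fifo`, `fifo_try_write` and `blocking_write` are hypothetical stand-ins for `PaUtilRingBuffer`, `PaUtil_WriteRingBuffer` and `BlockingWriteStream`, and a production version would also need the memory barriers that PortAudio's ring buffer supplies.

```c
/* Sketch of the flag/semaphore handshake used by the blocking emulation layer.
 * Assumes sem_init( &f->data_semaphore, 0, 0 ) and zeroed indices at start-up. */
#include <semaphore.h>
#include <signal.h>
#include <stddef.h>

#define FIFO_SIZE 1024            /* sketch-sized capacity; keep it a power of two */

typedef struct
{
    char   buf[FIFO_SIZE];
    size_t readIdx;               /* advanced only by the consumer (audio callback) */
    size_t writeIdx;              /* advanced only by the producer (blocking writer) */
    volatile sig_atomic_t data_available;
    sem_t  data_semaphore;
} Fifo;

/* Copy as many bytes as currently fit; never blocks. */
static size_t fifo_try_write( Fifo *f, const char *p, size_t n )
{
    size_t used  = f->writeIdx - f->readIdx;   /* indices grow monotonically */
    size_t room  = FIFO_SIZE - used;
    size_t count = n < room ? n : room;
    for( size_t i = 0; i < count; ++i )
        f->buf[(f->writeIdx + i) % FIFO_SIZE] = p[i];
    f->writeIdx += count;
    return count;
}

/* Write-all loop: write, then either consume the wake-up flag and retry,
 * or sleep on the semaphore when the FIFO really is full. */
static void blocking_write( Fifo *f, const char *p, size_t n )
{
    while( n > 0 )
    {
        size_t written = fifo_try_write( f, p, n );
        p += written;
        n -= written;
        if( n > 0 )
        {
            if( f->data_available )
                f->data_available = 0;              /* consumer signalled: clear flag, retry */
            else
                sem_wait( &f->data_semaphore );     /* FIFO full: sleep until the consumer posts */
        }
    }
}

/* The consumer (the audio callback in pa_jack.c) drains bytes, advances
 * readIdx, and then wakes a possibly sleeping writer the same way the
 * real code does:
 *
 *     if( !f->data_available ) { f->data_available = 1; sem_post( &f->data_semaphore ); }
 */
```
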
-static signed long -BlockingGetStreamReadAvailable( PaStream* s ) -{ - PaJackStream *stream = (PaJackStream *)s; - - int bytesFull = PaUtil_GetRingBufferReadAvailable( &stream->inFIFO ); - return bytesFull / stream->bytesPerFrame; -} - -static signed long -BlockingGetStreamWriteAvailable( PaStream* s ) -{ - PaJackStream *stream = (PaJackStream *)s; - - int bytesEmpty = PaUtil_GetRingBufferWriteAvailable( &stream->outFIFO ); - return bytesEmpty / stream->bytesPerFrame; -} - -static PaError -BlockingWaitEmpty( PaStream *s ) -{ - PaJackStream *stream = (PaJackStream *)s; - - while( PaUtil_GetRingBufferReadAvailable( &stream->outFIFO ) > 0 ) - { - stream->data_available = 0; - sem_wait( &stream->data_semaphore ); - } - return 0; -} - -/* ---- jack driver ---- */ - -/* copy null terminated string source to destination, escaping regex characters with '\\' in the process */ -static void copy_string_and_escape_regex_chars( char *destination, const char *source, size_t destbuffersize ) -{ - assert( destination != source ); - assert( destbuffersize > 0 ); - - char *dest = destination; - /* dest_stop is the last location that we can null-terminate the string */ - char *dest_stop = destination + (destbuffersize - 1); - - const char *src = source; - - while ( *src != '\0' && dest != dest_stop ) - { - const char c = *src; - if ( strchr( "\\()[]{}*+?|$^.", c ) != NULL ) - { - if( (dest + 1) == dest_stop ) - break; /* only proceed if we can write both c and the escape */ - - *dest = '\\'; - dest++; - } - *dest = c; - dest++; - - src++; - } - - *dest = '\0'; -} - -/* BuildDeviceList(): - * - * The process of determining a list of PortAudio "devices" from - * JACK's client/port system is fairly involved, so it is separated - * into its own routine. - */ - -static PaError BuildDeviceList( PaJackHostApiRepresentation *jackApi ) -{ - /* Utility macros for the repetitive process of allocating memory */ - - /* JACK has no concept of a device. To JACK, there are clients - * which have an arbitrary number of ports. To make this - * intelligible to PortAudio clients, we will group each JACK client - * into a device, and make each port of that client a channel */ - - PaError result = paNoError; - PaUtilHostApiRepresentation *commonApi = &jackApi->commonHostApiRep; - - const char **jack_ports = NULL; - char **client_names = NULL; - char *port_regex_string = NULL; - // In the worst case scenario, every character would be escaped, doubling the string size. - // Add 1 for null terminator. 
- size_t device_name_regex_escaped_size = jack_client_name_size() * 2 + 1; - size_t port_regex_size = device_name_regex_escaped_size + strlen(port_regex_suffix); - int port_index, client_index, i; - double globalSampleRate; - regex_t port_regex; - unsigned long numClients = 0, numPorts = 0; - char *tmp_client_name = NULL; - - commonApi->info.defaultInputDevice = paNoDevice; - commonApi->info.defaultOutputDevice = paNoDevice; - commonApi->info.deviceCount = 0; - - /* Parse the list of ports, using a regex to grab the client names */ - ASSERT_CALL( regcomp( &port_regex, "^[^:]*", REG_EXTENDED ), 0 ); - - /* since we are rebuilding the list of devices, free all memory - * associated with the previous list */ - PaUtil_FreeAllAllocations( jackApi->deviceInfoMemory ); - - port_regex_string = PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, port_regex_size ); - tmp_client_name = PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, jack_client_name_size() ); - - /* We can only retrieve the list of clients indirectly, by first - * asking for a list of all ports, then parsing the port names - * according to the client_name:port_name convention (which is - * enforced by jackd) - * A: If jack_get_ports returns NULL, there's nothing for us to do */ - UNLESS( (jack_ports = jack_get_ports( jackApi->jack_client, "", JACK_PORT_TYPE_FILTER, 0 )) && jack_ports[0], paNoError ); - /* Find number of ports */ - while( jack_ports[numPorts] ) - ++numPorts; - /* At least there will be one port per client :) */ - UNLESS( client_names = PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, numPorts * - sizeof (char *) ), paInsufficientMemory ); - - /* Build a list of clients from the list of ports */ - for( numClients = 0, port_index = 0; jack_ports[port_index] != NULL; port_index++ ) - { - int client_seen = FALSE; - regmatch_t match_info; - const char *port = jack_ports[port_index]; - PA_DEBUG(( "JACK port found: %s\n", port )); - - /* extract the client name from the port name, using a regex - * that parses the clientname:portname syntax */ - UNLESS( !regexec( &port_regex, port, 1, &match_info, 0 ), paInternalError ); - assert(match_info.rm_eo - match_info.rm_so < jack_client_name_size()); - memcpy( tmp_client_name, port + match_info.rm_so, - match_info.rm_eo - match_info.rm_so ); - tmp_client_name[match_info.rm_eo - match_info.rm_so] = '\0'; - - /* do we know about this port's client yet? */ - for( i = 0; i < numClients; i++ ) - { - if( strcmp( tmp_client_name, client_names[i] ) == 0 ) - client_seen = TRUE; - } - - if (client_seen) - continue; /* A: Nothing to see here, move along */ - - UNLESS( client_names[numClients] = (char*)PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, - strlen(tmp_client_name) + 1), paInsufficientMemory ); - - /* The alsa_pcm client should go in spot 0. If this - * is the alsa_pcm client AND we are NOT about to put - * it in spot 0 put it in spot 0 and move whatever - * was already in spot 0 to the end. */ - if( strcmp( "alsa_pcm", tmp_client_name ) == 0 && numClients > 0 ) - { - /* alsa_pcm goes in spot 0 */ - strcpy( client_names[ numClients ], client_names[0] ); - strcpy( client_names[0], tmp_client_name ); - } - else - { - /* put the new client at the end of the client list */ - strcpy( client_names[ numClients ], tmp_client_name ); - } - ++numClients; - } - - /* Now we have a list of clients, which will become the list of - * PortAudio devices. 
*/ - - /* there is one global sample rate all clients must conform to */ - - globalSampleRate = jack_get_sample_rate( jackApi->jack_client ); - UNLESS( commonApi->deviceInfos = (PaDeviceInfo**)PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, - sizeof(PaDeviceInfo*) * numClients ), paInsufficientMemory ); - - assert( commonApi->info.deviceCount == 0 ); - - /* Create a PaDeviceInfo structure for every client */ - for( client_index = 0; client_index < numClients; client_index++ ) - { - PaDeviceInfo *curDevInfo; - const char **clientPorts = NULL; - - UNLESS( curDevInfo = (PaDeviceInfo*)PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, - sizeof(PaDeviceInfo) ), paInsufficientMemory ); - UNLESS( curDevInfo->name = (char*)PaUtil_GroupAllocateMemory( jackApi->deviceInfoMemory, - strlen(client_names[client_index]) + 1 ), paInsufficientMemory ); - strcpy( (char *)curDevInfo->name, client_names[client_index] ); - - curDevInfo->structVersion = 2; - curDevInfo->hostApi = jackApi->hostApiIndex; - - /* JACK is very inflexible: there is one sample rate the whole - * system must run at, and all clients must speak IEEE float. */ - curDevInfo->defaultSampleRate = globalSampleRate; - - /* To determine how many input and output channels are available, - * we re-query jackd with more specific parameters. */ - copy_string_and_escape_regex_chars( port_regex_string, - client_names[client_index], - device_name_regex_escaped_size ); - strncat( port_regex_string, port_regex_suffix, port_regex_size ); - - /* ... what are your output ports (that we could input from)? */ - clientPorts = jack_get_ports( jackApi->jack_client, port_regex_string, - JACK_PORT_TYPE_FILTER, JackPortIsOutput); - curDevInfo->maxInputChannels = 0; - curDevInfo->defaultLowInputLatency = 0.; - curDevInfo->defaultHighInputLatency = 0.; - if( clientPorts ) - { - jack_port_t *p = jack_port_by_name( jackApi->jack_client, clientPorts[0] ); - curDevInfo->defaultLowInputLatency = curDevInfo->defaultHighInputLatency = - jack_port_get_latency( p ) / globalSampleRate; - - for( i = 0; clientPorts[i] != NULL; i++) - { - /* The number of ports returned is the number of output channels. - * We don't care what they are, we just care how many */ - curDevInfo->maxInputChannels++; - } - free(clientPorts); - } - - /* ... what are your input ports (that we could output to)? */ - clientPorts = jack_get_ports( jackApi->jack_client, port_regex_string, - JACK_PORT_TYPE_FILTER, JackPortIsInput); - curDevInfo->maxOutputChannels = 0; - curDevInfo->defaultLowOutputLatency = 0.; - curDevInfo->defaultHighOutputLatency = 0.; - if( clientPorts ) - { - jack_port_t *p = jack_port_by_name( jackApi->jack_client, clientPorts[0] ); - curDevInfo->defaultLowOutputLatency = curDevInfo->defaultHighOutputLatency = - jack_port_get_latency( p ) / globalSampleRate; - - for( i = 0; clientPorts[i] != NULL; i++) - { - /* The number of ports returned is the number of input channels. 
- * We don't care what they are, we just care how many */ - curDevInfo->maxOutputChannels++; - } - free(clientPorts); - } - - PA_DEBUG(( "Adding JACK device %s with %d input channels and %d output channels\n", - client_names[client_index], - curDevInfo->maxInputChannels, - curDevInfo->maxOutputChannels )); - - /* Add this client to the list of devices */ - commonApi->deviceInfos[client_index] = curDevInfo; - ++commonApi->info.deviceCount; - if( commonApi->info.defaultInputDevice == paNoDevice && curDevInfo->maxInputChannels > 0 ) - commonApi->info.defaultInputDevice = client_index; - if( commonApi->info.defaultOutputDevice == paNoDevice && curDevInfo->maxOutputChannels > 0 ) - commonApi->info.defaultOutputDevice = client_index; - } - -error: - regfree( &port_regex ); - free( jack_ports ); - return result; -} - -static void UpdateSampleRate( PaJackStream *stream, double sampleRate ) -{ - /* XXX: Maybe not the cleanest way of going about this? */ - stream->cpuLoadMeasurer.samplingPeriod = stream->bufferProcessor.samplePeriod = 1. / sampleRate; - stream->streamRepresentation.streamInfo.sampleRate = sampleRate; -} - -static void JackErrorCallback( const char *msg ) -{ - if( pthread_self() == mainThread_ ) - { - assert( msg ); - jackErr_ = realloc( jackErr_, strlen( msg ) + 1 ); - strcpy( jackErr_, msg ); - } -} - -static void JackOnShutdown( void *arg ) -{ - PaJackHostApiRepresentation *jackApi = (PaJackHostApiRepresentation *)arg; - PaJackStream *stream = jackApi->processQueue; - - PA_DEBUG(( "%s: JACK server is shutting down\n", __FUNCTION__ )); - for( ; stream; stream = stream->next ) - { - stream->is_active = 0; - } - - /* Make sure that the main thread doesn't get stuck waiting on the condition */ - ASSERT_CALL( pthread_mutex_lock( &jackApi->mtx ), 0 ); - jackApi->jackIsDown = 1; - ASSERT_CALL( pthread_cond_signal( &jackApi->cond ), 0 ); - ASSERT_CALL( pthread_mutex_unlock( &jackApi->mtx ), 0 ); - -} - -static int JackSrCb( jack_nframes_t nframes, void *arg ) -{ - PaJackHostApiRepresentation *jackApi = (PaJackHostApiRepresentation *)arg; - double sampleRate = (double)nframes; - PaJackStream *stream = jackApi->processQueue; - - /* Update all streams in process queue */ - PA_DEBUG(( "%s: Acting on change in JACK samplerate: %f\n", __FUNCTION__, sampleRate )); - for( ; stream; stream = stream->next ) - { - if( stream->streamRepresentation.streamInfo.sampleRate != sampleRate ) - { - PA_DEBUG(( "%s: Updating samplerate\n", __FUNCTION__ )); - UpdateSampleRate( stream, sampleRate ); - } - } - - return 0; -} - -static int JackXRunCb(void *arg) { - PaJackHostApiRepresentation *hostApi = (PaJackHostApiRepresentation *)arg; - assert( hostApi ); - hostApi->xrun = TRUE; - PA_DEBUG(( "%s: JACK signalled xrun\n", __FUNCTION__ )); - return 0; -} - -PaError PaJack_Initialize( PaUtilHostApiRepresentation **hostApi, - PaHostApiIndex hostApiIndex ) -{ - PaError result = paNoError; - PaJackHostApiRepresentation *jackHostApi; - int activated = 0; - jack_status_t jackStatus = 0; - *hostApi = NULL; /* Initialize to NULL */ - - UNLESS( jackHostApi = (PaJackHostApiRepresentation*) - PaUtil_AllocateMemory( sizeof(PaJackHostApiRepresentation) ), paInsufficientMemory ); - UNLESS( jackHostApi->deviceInfoMemory = PaUtil_CreateAllocationGroup(), paInsufficientMemory ); - - mainThread_ = pthread_self(); - ASSERT_CALL( pthread_mutex_init( &jackHostApi->mtx, NULL ), 0 ); - ASSERT_CALL( pthread_cond_init( &jackHostApi->cond, NULL ), 0 ); - - /* Try to become a client of the JACK server. 
If we cannot do - * this, then this API cannot be used. - * - * Without the JackNoStartServer option, the jackd server is started - * automatically which we do not want. - */ - - jackHostApi->jack_client = jack_client_open( clientName_, JackNoStartServer, &jackStatus ); - if( !jackHostApi->jack_client ) - { - /* the V19 development docs say that if an implementation - * detects that it cannot be used, it should return a NULL - * interface and paNoError */ - PA_DEBUG(( "%s: Couldn't connect to JACK, status: %d\n", __FUNCTION__, jackStatus )); - result = paNoError; - goto error; - } - - jackHostApi->hostApiIndex = hostApiIndex; - - *hostApi = &jackHostApi->commonHostApiRep; - (*hostApi)->info.structVersion = 1; - (*hostApi)->info.type = paJACK; - (*hostApi)->info.name = "JACK Audio Connection Kit"; - - /* Build a device list by querying the JACK server */ - ENSURE_PA( BuildDeviceList( jackHostApi ) ); - - /* Register functions */ - - (*hostApi)->Terminate = Terminate; - (*hostApi)->OpenStream = OpenStream; - (*hostApi)->IsFormatSupported = IsFormatSupported; - - PaUtil_InitializeStreamInterface( &jackHostApi->callbackStreamInterface, - CloseStream, StartStream, - StopStream, AbortStream, - IsStreamStopped, IsStreamActive, - GetStreamTime, GetStreamCpuLoad, - PaUtil_DummyRead, PaUtil_DummyWrite, - PaUtil_DummyGetReadAvailable, - PaUtil_DummyGetWriteAvailable ); - - PaUtil_InitializeStreamInterface( &jackHostApi->blockingStreamInterface, CloseStream, StartStream, - StopStream, AbortStream, IsStreamStopped, IsStreamActive, - GetStreamTime, PaUtil_DummyGetCpuLoad, - BlockingReadStream, BlockingWriteStream, - BlockingGetStreamReadAvailable, BlockingGetStreamWriteAvailable ); - - jackHostApi->inputBase = jackHostApi->outputBase = 0; - jackHostApi->xrun = 0; - jackHostApi->toAdd = jackHostApi->toRemove = NULL; - jackHostApi->processQueue = NULL; - jackHostApi->jackIsDown = 0; - - jack_on_shutdown( jackHostApi->jack_client, JackOnShutdown, jackHostApi ); - jack_set_error_function( JackErrorCallback ); - jackHostApi->jack_buffer_size = jack_get_buffer_size ( jackHostApi->jack_client ); - /* Don't check for error, may not be supported (deprecated in at least jackdmp) */ - jack_set_sample_rate_callback( jackHostApi->jack_client, JackSrCb, jackHostApi ); - UNLESS( !jack_set_xrun_callback( jackHostApi->jack_client, JackXRunCb, jackHostApi ), paUnanticipatedHostError ); - UNLESS( !jack_set_process_callback( jackHostApi->jack_client, JackCallback, jackHostApi ), paUnanticipatedHostError ); - UNLESS( !jack_activate( jackHostApi->jack_client ), paUnanticipatedHostError ); - activated = 1; - - return result; - -error: - if( activated ) - ASSERT_CALL( jack_deactivate( jackHostApi->jack_client ), 0 ); - - if( jackHostApi ) - { - if( jackHostApi->jack_client ) - ASSERT_CALL( jack_client_close( jackHostApi->jack_client ), 0 ); - - if( jackHostApi->deviceInfoMemory ) - { - PaUtil_FreeAllAllocations( jackHostApi->deviceInfoMemory ); - PaUtil_DestroyAllocationGroup( jackHostApi->deviceInfoMemory ); - } - - PaUtil_FreeMemory( jackHostApi ); - } - return result; -} - - -static void Terminate( struct PaUtilHostApiRepresentation *hostApi ) -{ - PaJackHostApiRepresentation *jackHostApi = (PaJackHostApiRepresentation*)hostApi; - - /* note: this automatically disconnects all ports, since a deactivated - * client is not allowed to have any ports connected */ - ASSERT_CALL( jack_deactivate( jackHostApi->jack_client ), 0 ); - - ASSERT_CALL( pthread_mutex_destroy( &jackHostApi->mtx ), 0 ); - ASSERT_CALL( pthread_cond_destroy( 
&jackHostApi->cond ), 0 ); - - ASSERT_CALL( jack_client_close( jackHostApi->jack_client ), 0 ); - - if( jackHostApi->deviceInfoMemory ) - { - PaUtil_FreeAllAllocations( jackHostApi->deviceInfoMemory ); - PaUtil_DestroyAllocationGroup( jackHostApi->deviceInfoMemory ); - } - - PaUtil_FreeMemory( jackHostApi ); - - free( jackErr_ ); - jackErr_ = NULL; -} - -static PaError IsFormatSupported( struct PaUtilHostApiRepresentation *hostApi, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate ) -{ - int inputChannelCount = 0, outputChannelCount = 0; - PaSampleFormat inputSampleFormat, outputSampleFormat; - - if( inputParameters ) - { - inputChannelCount = inputParameters->channelCount; - inputSampleFormat = inputParameters->sampleFormat; - - /* unless alternate device specification is supported, reject the use of - paUseHostApiSpecificDeviceSpecification */ - - if( inputParameters->device == paUseHostApiSpecificDeviceSpecification ) - return paInvalidDevice; - - /* check that input device can support inputChannelCount */ - if( inputChannelCount > hostApi->deviceInfos[ inputParameters->device ]->maxInputChannels ) - return paInvalidChannelCount; - - /* validate inputStreamInfo */ - if( inputParameters->hostApiSpecificStreamInfo ) - return paIncompatibleHostApiSpecificStreamInfo; /* this implementation doesn't use custom stream info */ - } - else - { - inputChannelCount = 0; - } - - if( outputParameters ) - { - outputChannelCount = outputParameters->channelCount; - outputSampleFormat = outputParameters->sampleFormat; - - /* unless alternate device specification is supported, reject the use of - paUseHostApiSpecificDeviceSpecification */ - - if( outputParameters->device == paUseHostApiSpecificDeviceSpecification ) - return paInvalidDevice; - - /* check that output device can support inputChannelCount */ - if( outputChannelCount > hostApi->deviceInfos[ outputParameters->device ]->maxOutputChannels ) - return paInvalidChannelCount; - - /* validate outputStreamInfo */ - if( outputParameters->hostApiSpecificStreamInfo ) - return paIncompatibleHostApiSpecificStreamInfo; /* this implementation doesn't use custom stream info */ - } - else - { - outputChannelCount = 0; - } - - /* - The following check is not necessary for JACK. - - - if a full duplex stream is requested, check that the combination - of input and output parameters is supported - - - Because the buffer adapter handles conversion between all standard - sample formats, the following checks are only required if paCustomFormat - is implemented, or under some other unusual conditions. - - - check that input device can support inputSampleFormat, or that - we have the capability to convert from outputSampleFormat to - a native format - - - check that output device can support outputSampleFormat, or that - we have the capability to convert from outputSampleFormat to - a native format - */ - - /* check that the device supports sampleRate */ - -#define ABS(x) ( (x) > 0 ? 
(x) : -(x) ) - if( ABS(sampleRate - jack_get_sample_rate(((PaJackHostApiRepresentation *) hostApi)->jack_client )) > 1 ) - return paInvalidSampleRate; -#undef ABS - - return paFormatIsSupported; -} - -/* Basic stream initialization */ -static PaError InitializeStream( PaJackStream *stream, PaJackHostApiRepresentation *hostApi, int numInputChannels, - int numOutputChannels ) -{ - PaError result = paNoError; - assert( stream ); - - memset( stream, 0, sizeof (PaJackStream) ); - UNLESS( stream->stream_memory = PaUtil_CreateAllocationGroup(), paInsufficientMemory ); - stream->jack_client = hostApi->jack_client; - stream->hostApi = hostApi; - - if( numInputChannels > 0 ) - { - UNLESS( stream->local_input_ports = - (jack_port_t**) PaUtil_GroupAllocateMemory( stream->stream_memory, sizeof(jack_port_t*) * numInputChannels ), - paInsufficientMemory ); - memset( stream->local_input_ports, 0, sizeof(jack_port_t*) * numInputChannels ); - UNLESS( stream->remote_output_ports = - (jack_port_t**) PaUtil_GroupAllocateMemory( stream->stream_memory, sizeof(jack_port_t*) * numInputChannels ), - paInsufficientMemory ); - memset( stream->remote_output_ports, 0, sizeof(jack_port_t*) * numInputChannels ); - } - if( numOutputChannels > 0 ) - { - UNLESS( stream->local_output_ports = - (jack_port_t**) PaUtil_GroupAllocateMemory( stream->stream_memory, sizeof(jack_port_t*) * numOutputChannels ), - paInsufficientMemory ); - memset( stream->local_output_ports, 0, sizeof(jack_port_t*) * numOutputChannels ); - UNLESS( stream->remote_input_ports = - (jack_port_t**) PaUtil_GroupAllocateMemory( stream->stream_memory, sizeof(jack_port_t*) * numOutputChannels ), - paInsufficientMemory ); - memset( stream->remote_input_ports, 0, sizeof(jack_port_t*) * numOutputChannels ); - } - - stream->num_incoming_connections = numInputChannels; - stream->num_outgoing_connections = numOutputChannels; - -error: - return result; -} - -/*! - * Free resources associated with stream, and eventually stream itself. - * - * Frees allocated memory, and closes opened pcms. - */ -static void CleanUpStream( PaJackStream *stream, int terminateStreamRepresentation, int terminateBufferProcessor ) -{ - int i; - assert( stream ); - - if( stream->isBlockingStream ) - BlockingEnd( stream ); - - for( i = 0; i < stream->num_incoming_connections; ++i ) - { - if( stream->local_input_ports[i] ) - ASSERT_CALL( jack_port_unregister( stream->jack_client, stream->local_input_ports[i] ), 0 ); - } - for( i = 0; i < stream->num_outgoing_connections; ++i ) - { - if( stream->local_output_ports[i] ) - ASSERT_CALL( jack_port_unregister( stream->jack_client, stream->local_output_ports[i] ), 0 ); - } - - if( terminateStreamRepresentation ) - PaUtil_TerminateStreamRepresentation( &stream->streamRepresentation ); - if( terminateBufferProcessor ) - PaUtil_TerminateBufferProcessor( &stream->bufferProcessor ); - - if( stream->stream_memory ) - { - PaUtil_FreeAllAllocations( stream->stream_memory ); - PaUtil_DestroyAllocationGroup( stream->stream_memory ); - } - PaUtil_FreeMemory( stream ); -} - -static PaError WaitCondition( PaJackHostApiRepresentation *hostApi ) -{ - PaError result = paNoError; - int err = 0; - PaTime pt = PaUtil_GetTime(); - struct timespec ts; - - ts.tv_sec = (time_t) floor( pt + 10 * 60 /* 10 minutes */ ); - ts.tv_nsec = (long) ((pt - floor( pt )) * 1000000000); - /* XXX: Best enclose in loop, in case of spurious wakeups? 
*/ - err = pthread_cond_timedwait( &hostApi->cond, &hostApi->mtx, &ts ); - - /* Make sure we didn't time out */ - UNLESS( err != ETIMEDOUT, paTimedOut ); - UNLESS( !err, paInternalError ); - -error: - return result; -} - -static PaError AddStream( PaJackStream *stream ) -{ - PaError result = paNoError; - PaJackHostApiRepresentation *hostApi = stream->hostApi; - /* Add to queue of streams that should be processed */ - ASSERT_CALL( pthread_mutex_lock( &hostApi->mtx ), 0 ); - if( !hostApi->jackIsDown ) - { - hostApi->toAdd = stream; - /* Unlock mutex and await signal from processing thread */ - result = WaitCondition( stream->hostApi ); - } - ASSERT_CALL( pthread_mutex_unlock( &hostApi->mtx ), 0 ); - ENSURE_PA( result ); - - UNLESS( !hostApi->jackIsDown, paDeviceUnavailable ); - -error: - return result; -} - -/* Remove stream from processing queue */ -static PaError RemoveStream( PaJackStream *stream ) -{ - PaError result = paNoError; - PaJackHostApiRepresentation *hostApi = stream->hostApi; - - /* Add to queue over streams that should be processed */ - ASSERT_CALL( pthread_mutex_lock( &hostApi->mtx ), 0 ); - if( !hostApi->jackIsDown ) - { - hostApi->toRemove = stream; - /* Unlock mutex and await signal from processing thread */ - result = WaitCondition( stream->hostApi ); - } - ASSERT_CALL( pthread_mutex_unlock( &hostApi->mtx ), 0 ); - ENSURE_PA( result ); - -error: - return result; -} - -/* Add stream to JACK callback processing queue */ -static PaError OpenStream( struct PaUtilHostApiRepresentation *hostApi, - PaStream** s, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate, - unsigned long framesPerBuffer, - PaStreamFlags streamFlags, - PaStreamCallback *streamCallback, - void *userData ) -{ - PaError result = paNoError; - PaJackHostApiRepresentation *jackHostApi = (PaJackHostApiRepresentation*)hostApi; - PaJackStream *stream = NULL; - char *port_string = PaUtil_GroupAllocateMemory( jackHostApi->deviceInfoMemory, jack_port_name_size() ); - // In the worst case every character would be escaped which would double the string length. - // Add 1 for null terminator - size_t regex_escaped_client_name_size = jack_client_name_size() * 2 + 1; - unsigned long regex_size = regex_escaped_client_name_size + strlen(port_regex_suffix); - char *regex_pattern = PaUtil_GroupAllocateMemory( jackHostApi->deviceInfoMemory, regex_size ); - const char **jack_ports = NULL; - /* int jack_max_buffer_size = jack_get_buffer_size( jackHostApi->jack_client ); */ - int i; - int inputChannelCount, outputChannelCount; - const double jackSr = jack_get_sample_rate( jackHostApi->jack_client ); - PaSampleFormat inputSampleFormat = 0, outputSampleFormat = 0; - int bpInitialized = 0, srInitialized = 0; /* Initialized buffer processor and stream representation? */ - unsigned long ofs; - - /* validate platform specific flags */ - if( (streamFlags & paPlatformSpecificFlags) != 0 ) - return paInvalidFlag; /* unexpected platform specific flag */ - if( (streamFlags & paPrimeOutputBuffersUsingStreamCallback) != 0 ) - { - streamFlags &= ~paPrimeOutputBuffersUsingStreamCallback; - /*return paInvalidFlag;*/ /* This implementation does not support buffer priming */ - } - - if( framesPerBuffer != paFramesPerBufferUnspecified ) - { - /* Jack operates with power of two buffers, and we don't support non-integer buffer adaption (yet) */ - /*UNLESS( !(framesPerBuffer & (framesPerBuffer - 1)), paBufferTooBig );*/ /* TODO: Add descriptive error code? 
*/ - } - - /* Preliminary checks */ - - if( inputParameters ) - { - inputChannelCount = inputParameters->channelCount; - inputSampleFormat = inputParameters->sampleFormat; - - /* unless alternate device specification is supported, reject the use of - paUseHostApiSpecificDeviceSpecification */ - - if( inputParameters->device == paUseHostApiSpecificDeviceSpecification ) - return paInvalidDevice; - - /* check that input device can support inputChannelCount */ - if( inputChannelCount > hostApi->deviceInfos[ inputParameters->device ]->maxInputChannels ) - return paInvalidChannelCount; - - /* validate inputStreamInfo */ - if( inputParameters->hostApiSpecificStreamInfo ) - return paIncompatibleHostApiSpecificStreamInfo; /* this implementation doesn't use custom stream info */ - } - else - { - inputChannelCount = 0; - } - - if( outputParameters ) - { - outputChannelCount = outputParameters->channelCount; - outputSampleFormat = outputParameters->sampleFormat; - - /* unless alternate device specification is supported, reject the use of - paUseHostApiSpecificDeviceSpecification */ - - if( outputParameters->device == paUseHostApiSpecificDeviceSpecification ) - return paInvalidDevice; - - /* check that output device can support inputChannelCount */ - if( outputChannelCount > hostApi->deviceInfos[ outputParameters->device ]->maxOutputChannels ) - return paInvalidChannelCount; - - /* validate outputStreamInfo */ - if( outputParameters->hostApiSpecificStreamInfo ) - return paIncompatibleHostApiSpecificStreamInfo; /* this implementation doesn't use custom stream info */ - } - else - { - outputChannelCount = 0; - } - - /* ... check that the sample rate exactly matches the ONE acceptable rate - * A: This rate isn't necessarily constant though? */ - -#define ABS(x) ( (x) > 0 ? 
(x) : -(x) ) - if( ABS(sampleRate - jackSr) > 1 ) - return paInvalidSampleRate; -#undef ABS - - UNLESS( stream = (PaJackStream*)PaUtil_AllocateMemory( sizeof(PaJackStream) ), paInsufficientMemory ); - ENSURE_PA( InitializeStream( stream, jackHostApi, inputChannelCount, outputChannelCount ) ); - - /* the blocking emulation, if necessary */ - stream->isBlockingStream = !streamCallback; - if( stream->isBlockingStream ) - { - float latency = 0.001; /* 1ms is the absolute minimum we support */ - int minimum_buffer_frames = 0; - - if( inputParameters && inputParameters->suggestedLatency > latency ) - latency = inputParameters->suggestedLatency; - else if( outputParameters && outputParameters->suggestedLatency > latency ) - latency = outputParameters->suggestedLatency; - - /* the latency the user asked for indicates the minimum buffer size in frames */ - minimum_buffer_frames = (int) (latency * jack_get_sample_rate( jackHostApi->jack_client )); - - /* we also need to be able to store at least three full jack buffers to avoid dropouts */ - if( jackHostApi->jack_buffer_size * 3 > minimum_buffer_frames ) - minimum_buffer_frames = jackHostApi->jack_buffer_size * 3; - - /* setup blocking API data structures (FIXME: can fail) */ - BlockingBegin( stream, minimum_buffer_frames ); - - /* install our own callback for the blocking API */ - streamCallback = BlockingCallback; - userData = stream; - - PaUtil_InitializeStreamRepresentation( &stream->streamRepresentation, - &jackHostApi->blockingStreamInterface, streamCallback, userData ); - } - else - { - PaUtil_InitializeStreamRepresentation( &stream->streamRepresentation, - &jackHostApi->callbackStreamInterface, streamCallback, userData ); - } - srInitialized = 1; - PaUtil_InitializeCpuLoadMeasurer( &stream->cpuLoadMeasurer, jackSr ); - - /* create the JACK ports. We cannot connect them until audio - * processing begins */ - - /* Register a unique set of ports for this stream - * TODO: Robust allocation of new port names */ - - ofs = jackHostApi->inputBase; - for( i = 0; i < inputChannelCount; i++ ) - { - snprintf( port_string, jack_port_name_size(), "in_%lu", ofs + i ); - UNLESS( stream->local_input_ports[i] = jack_port_register( - jackHostApi->jack_client, port_string, - JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput, 0 ), paInsufficientMemory ); - } - jackHostApi->inputBase += inputChannelCount; - - ofs = jackHostApi->outputBase; - for( i = 0; i < outputChannelCount; i++ ) - { - snprintf( port_string, jack_port_name_size(), "out_%lu", ofs + i ); - UNLESS( stream->local_output_ports[i] = jack_port_register( - jackHostApi->jack_client, port_string, - JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0 ), paInsufficientMemory ); - } - jackHostApi->outputBase += outputChannelCount; - - /* look up the jack_port_t's for the remote ports. We could do - * this at stream start time, but doing it here ensures the - * name lookup only happens once. 
*/ - - if( inputChannelCount > 0 ) - { - int err = 0; - - /* Get output ports of our capture device */ - copy_string_and_escape_regex_chars( regex_pattern, - hostApi->deviceInfos[ inputParameters->device ]->name, - regex_escaped_client_name_size ); - strncat( regex_pattern, port_regex_suffix, regex_size ); - UNLESS( jack_ports = jack_get_ports( jackHostApi->jack_client, regex_pattern, - JACK_PORT_TYPE_FILTER, JackPortIsOutput ), paUnanticipatedHostError ); - for( i = 0; i < inputChannelCount && jack_ports[i]; i++ ) - { - if( (stream->remote_output_ports[i] = jack_port_by_name( - jackHostApi->jack_client, jack_ports[i] )) == NULL ) - { - err = 1; - break; - } - } - free( jack_ports ); - UNLESS( !err, paInsufficientMemory ); - - /* Fewer ports than expected? */ - UNLESS( i == inputChannelCount, paInternalError ); - } - - if( outputChannelCount > 0 ) - { - int err = 0; - - /* Get input ports of our playback device */ - copy_string_and_escape_regex_chars( regex_pattern, - hostApi->deviceInfos[ outputParameters->device ]->name, - regex_escaped_client_name_size ); - strncat( regex_pattern, port_regex_suffix, regex_size ); - UNLESS( jack_ports = jack_get_ports( jackHostApi->jack_client, regex_pattern, - JACK_PORT_TYPE_FILTER, JackPortIsInput ), paUnanticipatedHostError ); - for( i = 0; i < outputChannelCount && jack_ports[i]; i++ ) - { - if( (stream->remote_input_ports[i] = jack_port_by_name( - jackHostApi->jack_client, jack_ports[i] )) == 0 ) - { - err = 1; - break; - } - } - free( jack_ports ); - UNLESS( !err , paInsufficientMemory ); - - /* Fewer ports than expected? */ - UNLESS( i == outputChannelCount, paInternalError ); - } - - ENSURE_PA( PaUtil_InitializeBufferProcessor( - &stream->bufferProcessor, - inputChannelCount, - inputSampleFormat, - paFloat32 | paNonInterleaved, /* hostInputSampleFormat */ - outputChannelCount, - outputSampleFormat, - paFloat32 | paNonInterleaved, /* hostOutputSampleFormat */ - jackSr, - streamFlags, - framesPerBuffer, - 0, /* Ignored */ - paUtilUnknownHostBufferSize, /* Buffer size may vary on JACK's discretion */ - streamCallback, - userData ) ); - bpInitialized = 1; - - if( stream->num_incoming_connections > 0 ) - stream->streamRepresentation.streamInfo.inputLatency = (jack_port_get_latency( stream->remote_output_ports[0] ) - - jack_get_buffer_size( jackHostApi->jack_client ) /* One buffer is not counted as latency */ - + PaUtil_GetBufferProcessorInputLatencyFrames( &stream->bufferProcessor )) / sampleRate; - if( stream->num_outgoing_connections > 0 ) - stream->streamRepresentation.streamInfo.outputLatency = (jack_port_get_latency( stream->remote_input_ports[0] ) - - jack_get_buffer_size( jackHostApi->jack_client ) /* One buffer is not counted as latency */ - + PaUtil_GetBufferProcessorOutputLatencyFrames( &stream->bufferProcessor )) / sampleRate; - - stream->streamRepresentation.streamInfo.sampleRate = jackSr; - stream->t0 = jack_frame_time( jackHostApi->jack_client ); /* A: Time should run from Pa_OpenStream */ - - /* Add to queue of opened streams */ - ENSURE_PA( AddStream( stream ) ); - - *s = (PaStream*)stream; - - return result; - -error: - if( stream ) - CleanUpStream( stream, srInitialized, bpInitialized ); - - return result; -} - -/* - When CloseStream() is called, the multi-api layer ensures that - the stream has already been stopped or aborted. 
-*/ -static PaError CloseStream( PaStream* s ) -{ - PaError result = paNoError; - PaJackStream *stream = (PaJackStream*)s; - - /* Remove this stream from the processing queue */ - ENSURE_PA( RemoveStream( stream ) ); - -error: - CleanUpStream( stream, 1, 1 ); - return result; -} - -static PaError RealProcess( PaJackStream *stream, jack_nframes_t frames ) -{ - PaError result = paNoError; - PaStreamCallbackTimeInfo timeInfo = {0,0,0}; - int chn; - int framesProcessed; - const double sr = jack_get_sample_rate( stream->jack_client ); /* Shouldn't change during the process callback */ - PaStreamCallbackFlags cbFlags = 0; - - /* If the user has returned !paContinue from the callback we'll want to flush the internal buffers, - * when these are empty we can finally mark the stream as inactive */ - if( stream->callbackResult != paContinue && - PaUtil_IsBufferProcessorOutputEmpty( &stream->bufferProcessor ) ) - { - stream->is_active = 0; - if( stream->streamRepresentation.streamFinishedCallback ) - stream->streamRepresentation.streamFinishedCallback( stream->streamRepresentation.userData ); - PA_DEBUG(( "%s: Callback finished\n", __FUNCTION__ )); - - goto end; - } - - timeInfo.currentTime = (jack_frame_time( stream->jack_client ) - stream->t0) / sr; - if( stream->num_incoming_connections > 0 ) - timeInfo.inputBufferAdcTime = timeInfo.currentTime - jack_port_get_latency( stream->remote_output_ports[0] ) - / sr; - if( stream->num_outgoing_connections > 0 ) - timeInfo.outputBufferDacTime = timeInfo.currentTime + jack_port_get_latency( stream->remote_input_ports[0] ) - / sr; - - PaUtil_BeginCpuLoadMeasurement( &stream->cpuLoadMeasurer ); - - if( stream->xrun ) - { - /* XXX: Any way to tell which of these occurred? */ - cbFlags = paOutputUnderflow | paInputOverflow; - stream->xrun = FALSE; - } - PaUtil_BeginBufferProcessing( &stream->bufferProcessor, &timeInfo, - cbFlags ); - - if( stream->num_incoming_connections > 0 ) - PaUtil_SetInputFrameCount( &stream->bufferProcessor, frames ); - if( stream->num_outgoing_connections > 0 ) - PaUtil_SetOutputFrameCount( &stream->bufferProcessor, frames ); - - for( chn = 0; chn < stream->num_incoming_connections; chn++ ) - { - jack_default_audio_sample_t *channel_buf = (jack_default_audio_sample_t*) - jack_port_get_buffer( stream->local_input_ports[chn], - frames ); - - PaUtil_SetNonInterleavedInputChannel( &stream->bufferProcessor, - chn, - channel_buf ); - } - - for( chn = 0; chn < stream->num_outgoing_connections; chn++ ) - { - jack_default_audio_sample_t *channel_buf = (jack_default_audio_sample_t*) - jack_port_get_buffer( stream->local_output_ports[chn], - frames ); - - PaUtil_SetNonInterleavedOutputChannel( &stream->bufferProcessor, - chn, - channel_buf ); - } - - framesProcessed = PaUtil_EndBufferProcessing( &stream->bufferProcessor, - &stream->callbackResult ); - /* We've specified a host buffer size mode where every frame should be consumed by the buffer processor */ - assert( framesProcessed == frames ); - - PaUtil_EndCpuLoadMeasurement( &stream->cpuLoadMeasurer, framesProcessed ); - -end: - return result; -} - -/* Update the JACK callback's stream processing queue. 
*/ -static PaError UpdateQueue( PaJackHostApiRepresentation *hostApi ) -{ - PaError result = paNoError; - int queueModified = 0; - const double jackSr = jack_get_sample_rate( hostApi->jack_client ); - int err; - - if( (err = pthread_mutex_trylock( &hostApi->mtx )) != 0 ) - { - assert( err == EBUSY ); - return paNoError; - } - - if( hostApi->toAdd ) - { - if( hostApi->processQueue ) - { - PaJackStream *node = hostApi->processQueue; - /* Advance to end of queue */ - while( node->next ) - node = node->next; - - node->next = hostApi->toAdd; - } - else - { - /* The only queue entry. */ - hostApi->processQueue = (PaJackStream *)hostApi->toAdd; - } - - /* If necessary, update stream state */ - if( hostApi->toAdd->streamRepresentation.streamInfo.sampleRate != jackSr ) - UpdateSampleRate( hostApi->toAdd, jackSr ); - - hostApi->toAdd = NULL; - queueModified = 1; - } - if( hostApi->toRemove ) - { - int removed = 0; - PaJackStream *node = hostApi->processQueue, *prev = NULL; - assert( hostApi->processQueue ); - - while( node ) - { - if( node == hostApi->toRemove ) - { - if( prev ) - prev->next = node->next; - else - hostApi->processQueue = (PaJackStream *)node->next; - - removed = 1; - break; - } - - prev = node; - node = node->next; - } - UNLESS( removed, paInternalError ); - hostApi->toRemove = NULL; - PA_DEBUG(( "%s: Removed stream from processing queue\n", __FUNCTION__ )); - queueModified = 1; - } - - if( queueModified ) - { - /* Signal that we've done what was asked of us */ - ASSERT_CALL( pthread_cond_signal( &hostApi->cond ), 0 ); - } - -error: - ASSERT_CALL( pthread_mutex_unlock( &hostApi->mtx ), 0 ); - - return result; -} - -/* Audio processing callback invoked periodically from JACK. */ -static int JackCallback( jack_nframes_t frames, void *userData ) -{ - PaError result = paNoError; - PaJackHostApiRepresentation *hostApi = (PaJackHostApiRepresentation *)userData; - PaJackStream *stream = NULL; - int xrun = hostApi->xrun; - hostApi->xrun = 0; - - assert( hostApi ); - - ENSURE_PA( UpdateQueue( hostApi ) ); - - /* Process each stream */ - stream = hostApi->processQueue; - for( ; stream; stream = stream->next ) - { - if( xrun ) /* Don't override if already set */ - stream->xrun = 1; - - /* See if this stream is to be started */ - if( stream->doStart ) - { - /* If we can't obtain a lock, we'll try next time */ - int err = pthread_mutex_trylock( &stream->hostApi->mtx ); - if( !err ) - { - if( stream->doStart ) /* Could potentially change before obtaining the lock */ - { - stream->is_active = 1; - stream->doStart = 0; - PA_DEBUG(( "%s: Starting stream\n", __FUNCTION__ )); - ASSERT_CALL( pthread_cond_signal( &stream->hostApi->cond ), 0 ); - stream->callbackResult = paContinue; - stream->isSilenced = 0; - } - - ASSERT_CALL( pthread_mutex_unlock( &stream->hostApi->mtx ), 0 ); - } - else - assert( err == EBUSY ); - } - else if( stream->doStop || stream->doAbort ) /* Should we stop/abort stream? */ - { - if( stream->callbackResult == paContinue ) /* Ok, make it stop */ - { - PA_DEBUG(( "%s: Stopping stream\n", __FUNCTION__ )); - stream->callbackResult = stream->doStop ? 
paComplete : paAbort; - } - } - - if( stream->is_active ) - ENSURE_PA( RealProcess( stream, frames ) ); - /* If we have just entered inactive state, silence output */ - if( !stream->is_active && !stream->isSilenced ) - { - int i; - - /* Silence buffer after entering inactive state */ - PA_DEBUG(( "Silencing the output\n" )); - for( i = 0; i < stream->num_outgoing_connections; ++i ) - { - jack_default_audio_sample_t *buffer = jack_port_get_buffer( stream->local_output_ports[i], frames ); - memset( buffer, 0, sizeof (jack_default_audio_sample_t) * frames ); - } - - stream->isSilenced = 1; - } - - if( stream->doStop || stream->doAbort ) - { - /* See if RealProcess has acted on the request */ - if( !stream->is_active ) /* Ok, signal to the main thread that we've carried out the operation */ - { - /* If we can't obtain a lock, we'll try next time */ - int err = pthread_mutex_trylock( &stream->hostApi->mtx ); - if( !err ) - { - stream->doStop = stream->doAbort = 0; - ASSERT_CALL( pthread_cond_signal( &stream->hostApi->cond ), 0 ); - ASSERT_CALL( pthread_mutex_unlock( &stream->hostApi->mtx ), 0 ); - } - else - assert( err == EBUSY ); - } - } - } - - return 0; -error: - return -1; -} - -static PaError StartStream( PaStream *s ) -{ - PaError result = paNoError; - PaJackStream *stream = (PaJackStream*)s; - int i; - - /* Ready the processor */ - PaUtil_ResetBufferProcessor( &stream->bufferProcessor ); - - /* Connect the ports. Note that the ports may already have been connected by someone else in - * the meantime, in which case JACK returns EEXIST. */ - - if( stream->num_incoming_connections > 0 ) - { - for( i = 0; i < stream->num_incoming_connections; i++ ) - { - int r = jack_connect( stream->jack_client, jack_port_name( stream->remote_output_ports[i] ), - jack_port_name( stream->local_input_ports[i] ) ); - UNLESS( 0 == r || EEXIST == r, paUnanticipatedHostError ); - } - } - - if( stream->num_outgoing_connections > 0 ) - { - for( i = 0; i < stream->num_outgoing_connections; i++ ) - { - int r = jack_connect( stream->jack_client, jack_port_name( stream->local_output_ports[i] ), - jack_port_name( stream->remote_input_ports[i] ) ); - UNLESS( 0 == r || EEXIST == r, paUnanticipatedHostError ); - } - } - - stream->xrun = FALSE; - - /* Enable processing */ - - ASSERT_CALL( pthread_mutex_lock( &stream->hostApi->mtx ), 0 ); - stream->doStart = 1; - - /* Wait for stream to be started */ - result = WaitCondition( stream->hostApi ); - /* - do - { - err = pthread_cond_timedwait( &stream->hostApi->cond, &stream->hostApi->mtx, &ts ); - } while( !stream->is_active && !err ); - */ - if( result != paNoError ) /* Something went wrong, call off the stream start */ - { - stream->doStart = 0; - stream->is_active = 0; /* Cancel any processing */ - } - ASSERT_CALL( pthread_mutex_unlock( &stream->hostApi->mtx ), 0 ); - - ENSURE_PA( result ); - - stream->is_running = TRUE; - PA_DEBUG(( "%s: Stream started\n", __FUNCTION__ )); - -error: - return result; -} - -static PaError RealStop( PaJackStream *stream, int abort ) -{ - PaError result = paNoError; - int i; - - if( stream->isBlockingStream ) - BlockingWaitEmpty ( stream ); - - ASSERT_CALL( pthread_mutex_lock( &stream->hostApi->mtx ), 0 ); - if( abort ) - stream->doAbort = 1; - else - stream->doStop = 1; - - /* Wait for stream to be stopped */ - result = WaitCondition( stream->hostApi ); - ASSERT_CALL( pthread_mutex_unlock( &stream->hostApi->mtx ), 0 ); - ENSURE_PA( result ); - - UNLESS( !stream->is_active, paInternalError ); - - PA_DEBUG(( "%s: Stream stopped\n", __FUNCTION__ 
)); - -error: - stream->is_running = FALSE; - - /* Disconnect ports belonging to this stream */ - - if( !stream->hostApi->jackIsDown ) /* XXX: Well? */ - { - for( i = 0; i < stream->num_incoming_connections; i++ ) - { - if( jack_port_connected( stream->local_input_ports[i] ) ) - { - UNLESS( !jack_port_disconnect( stream->jack_client, stream->local_input_ports[i] ), - paUnanticipatedHostError ); - } - } - for( i = 0; i < stream->num_outgoing_connections; i++ ) - { - if( jack_port_connected( stream->local_output_ports[i] ) ) - { - UNLESS( !jack_port_disconnect( stream->jack_client, stream->local_output_ports[i] ), - paUnanticipatedHostError ); - } - } - } - - return result; -} - -static PaError StopStream( PaStream *s ) -{ - assert(s); - return RealStop( (PaJackStream *)s, 0 ); -} - -static PaError AbortStream( PaStream *s ) -{ - assert(s); - return RealStop( (PaJackStream *)s, 1 ); -} - -static PaError IsStreamStopped( PaStream *s ) -{ - PaJackStream *stream = (PaJackStream*)s; - return !stream->is_running; -} - - -static PaError IsStreamActive( PaStream *s ) -{ - PaJackStream *stream = (PaJackStream*)s; - return stream->is_active; -} - - -static PaTime GetStreamTime( PaStream *s ) -{ - PaJackStream *stream = (PaJackStream*)s; - - /* A: Is this relevant?? --> TODO: what if we're recording-only? */ - return (jack_frame_time( stream->jack_client ) - stream->t0) / (PaTime)jack_get_sample_rate( stream->jack_client ); -} - - -static double GetStreamCpuLoad( PaStream* s ) -{ - PaJackStream *stream = (PaJackStream*)s; - return PaUtil_GetCpuLoad( &stream->cpuLoadMeasurer ); -} - -PaError PaJack_SetClientName( const char* name ) -{ - if( strlen( name ) > jack_client_name_size() ) - { - /* OK, I don't know any better error code */ - return paInvalidFlag; - } - clientName_ = name; - return paNoError; -} - -PaError PaJack_GetClientName(const char** clientName) -{ - PaError result = paNoError; - PaJackHostApiRepresentation* jackHostApi = NULL; - PaJackHostApiRepresentation** ref = &jackHostApi; - ENSURE_PA( PaUtil_GetHostApiRepresentation( (PaUtilHostApiRepresentation**)ref, paJACK ) ); - *clientName = jack_get_client_name( jackHostApi->jack_client ); - -error: - return result; -} - diff --git a/spaces/anhnv125/FRN/main.py b/spaces/anhnv125/FRN/main.py deleted file mode 100644 index db9e7cc6ec3c4165751af115b6ded6578f5784f9..0000000000000000000000000000000000000000 --- a/spaces/anhnv125/FRN/main.py +++ /dev/null @@ -1,131 +0,0 @@ -import argparse -import os - -import pytorch_lightning as pl -import soundfile as sf -import torch -from pytorch_lightning.callbacks import ModelCheckpoint -from pytorch_lightning.utilities.model_summary import summarize -from torch.utils.data import DataLoader - -from config import CONFIG -from dataset import TrainDataset, TestLoader, BlindTestLoader -from models.frn import PLCModel, OnnxWrapper -from utils.tblogger import TensorBoardLoggerExpanded -from utils.utils import mkdir_p - -parser = argparse.ArgumentParser() - -parser.add_argument('--version', default=None, - help='version to resume') -parser.add_argument('--mode', default='train', - help='training or testing mode') - -args = parser.parse_args() -os.environ["CUDA_VISIBLE_DEVICES"] = str(CONFIG.gpus) -assert args.mode in ['train', 'eval', 'test', 'onnx'], "--mode should be 'train', 'eval', 'test' or 'onnx'" - - -def resume(train_dataset, val_dataset, version): - print("Version", version) - model_path = os.path.join(CONFIG.LOG.log_dir, 'version_{}/checkpoints/'.format(str(version))) - config_path = 
os.path.join(CONFIG.LOG.log_dir, 'version_{}/'.format(str(version)) + 'hparams.yaml') - model_name = [x for x in os.listdir(model_path) if x.endswith(".ckpt")][0] - ckpt_path = model_path + model_name - checkpoint = PLCModel.load_from_checkpoint(ckpt_path, - strict=True, - hparams_file=config_path, - train_dataset=train_dataset, - val_dataset=val_dataset, - window_size=CONFIG.DATA.window_size) - - return checkpoint - - -def train(): - train_dataset = TrainDataset('train') - val_dataset = TrainDataset('val') - checkpoint_callback = ModelCheckpoint(monitor='val_loss', mode='min', verbose=True, - filename='frn-{epoch:02d}-{val_loss:.4f}', save_weights_only=False) - gpus = CONFIG.gpus.split(',') - logger = TensorBoardLoggerExpanded(CONFIG.DATA.sr) - if args.version is not None: - model = resume(train_dataset, val_dataset, args.version) - else: - model = PLCModel(train_dataset, - val_dataset, - window_size=CONFIG.DATA.window_size, - enc_layers=CONFIG.MODEL.enc_layers, - enc_in_dim=CONFIG.MODEL.enc_in_dim, - enc_dim=CONFIG.MODEL.enc_dim, - pred_dim=CONFIG.MODEL.pred_dim, - pred_layers=CONFIG.MODEL.pred_layers) - - trainer = pl.Trainer(logger=logger, - gradient_clip_val=CONFIG.TRAIN.clipping_val, - gpus=len(gpus), - max_epochs=CONFIG.TRAIN.epochs, - accelerator="gpu" if len(gpus) > 1 else None, - callbacks=[checkpoint_callback] - ) - - print(model.hparams) - print( - 'Dataset: {}, Train files: {}, Val files {}'.format(CONFIG.DATA.dataset, len(train_dataset), len(val_dataset))) - trainer.fit(model) - - -def to_onnx(model, onnx_path): - model.eval() - - model = OnnxWrapper(model) - - torch.onnx.export(model, - model.sample, - onnx_path, - export_params=True, - opset_version=12, - input_names=model.input_names, - output_names=model.output_names, - do_constant_folding=True, - verbose=False) - - -if __name__ == '__main__': - - if args.mode == 'train': - train() - else: - model = resume(None, None, args.version) - print(model.hparams) - print(summarize(model)) - - model.eval() - model.freeze() - if args.mode == 'eval': - model.cuda(device=0) - trainer = pl.Trainer(accelerator='gpu', devices=1, enable_checkpointing=False, logger=False) - testset = TestLoader() - test_loader = DataLoader(testset, batch_size=1, num_workers=4) - trainer.test(model, test_loader) - print('Version', args.version) - masking = CONFIG.DATA.EVAL.masking - prob = CONFIG.DATA.EVAL.transition_probs[0] - loss_percent = (1 - prob[0]) / (2 - prob[0] - prob[1]) * 100 - print('Evaluate with real trace' if masking == 'real' else - 'Evaluate with generated trace with {:.2f}% packet loss'.format(loss_percent)) - elif args.mode == 'test': - model.cuda(device=0) - testset = BlindTestLoader(test_dir=CONFIG.TEST.in_dir) - test_loader = DataLoader(testset, batch_size=1, num_workers=4) - trainer = pl.Trainer(accelerator='gpu', devices=1, enable_checkpointing=False, logger=False) - preds = trainer.predict(model, test_loader, return_predictions=True) - mkdir_p(CONFIG.TEST.out_dir) - for idx, path in enumerate(test_loader.dataset.data_list): - out_path = os.path.join(CONFIG.TEST.out_dir, os.path.basename(path)) - sf.write(out_path, preds[idx], samplerate=CONFIG.DATA.sr, subtype='PCM_16') - - else: - onnx_path = 'lightning_logs/version_{}/checkpoints/frn.onnx'.format(str(args.version)) - to_onnx(model, onnx_path) - print('ONNX model saved to', onnx_path) diff --git a/spaces/ardha27/rvc_TTS/lib/infer_pack/attentions.py b/spaces/ardha27/rvc_TTS/lib/infer_pack/attentions.py deleted file mode 100644 index 
05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000 --- a/spaces/ardha27/rvc_TTS/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from lib.infer_pack import commons -from lib.infer_pack import modules -from lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x 
* x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." 
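-                # Note on the band mask: triu(-block_length) combined with tril(block_length)
-                # keeps a band of width 2*block_length + 1 around the diagonal, so each query
-                # only attends to keys within block_length positions of itself; scores outside
-                # the band are then filled with -1e4 before the softmax.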
- block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tacotron/__init__.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tacotron/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tacotron/tacotron.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tacotron/tacotron.py deleted file mode 100644 index 7a47c35ef67852456d7211f32502ffb84509d61f..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tacotron/tacotron.py +++ /dev/null @@ -1,503 +0,0 @@ -# coding: utf-8 -# adapted from https://github.com/r9y9/tacotron_pytorch - -import torch -from torch import nn - -from .attentions import init_attn -from .common_layers import Prenet - - -class BatchNormConv1d(nn.Module): - r"""A wrapper for Conv1d with BatchNorm. It sets the activation - function between Conv and BatchNorm layers. BatchNorm layer - is initialized with the TF default values for momentum and eps. 
- - Args: - in_channels: size of each input sample - out_channels: size of each output samples - kernel_size: kernel size of conv filters - stride: stride of conv filters - padding: padding of conv filters - activation: activation function set b/w Conv1d and BatchNorm - - Shapes: - - input: (B, D) - - output: (B, D) - """ - - def __init__(self, in_channels, out_channels, kernel_size, stride, padding, activation=None): - super().__init__() - self.padding = padding - self.padder = nn.ConstantPad1d(padding, 0) - self.conv1d = nn.Conv1d( - in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=0, bias=False - ) - # Following tensorflow's default parameters - self.bn = nn.BatchNorm1d(out_channels, momentum=0.99, eps=1e-3) - self.activation = activation - # self.init_layers() - - def init_layers(self): - if isinstance(self.activation, torch.nn.ReLU): - w_gain = "relu" - elif isinstance(self.activation, torch.nn.Tanh): - w_gain = "tanh" - elif self.activation is None: - w_gain = "linear" - else: - raise RuntimeError("Unknown activation function") - torch.nn.init.xavier_uniform_(self.conv1d.weight, gain=torch.nn.init.calculate_gain(w_gain)) - - def forward(self, x): - x = self.padder(x) - x = self.conv1d(x) - x = self.bn(x) - if self.activation is not None: - x = self.activation(x) - return x - - -class Highway(nn.Module): - r"""Highway layers as explained in https://arxiv.org/abs/1505.00387 - - Args: - in_features (int): size of each input sample - out_feature (int): size of each output sample - - Shapes: - - input: (B, *, H_in) - - output: (B, *, H_out) - """ - - # TODO: Try GLU layer - def __init__(self, in_features, out_feature): - super().__init__() - self.H = nn.Linear(in_features, out_feature) - self.H.bias.data.zero_() - self.T = nn.Linear(in_features, out_feature) - self.T.bias.data.fill_(-1) - self.relu = nn.ReLU() - self.sigmoid = nn.Sigmoid() - # self.init_layers() - - def init_layers(self): - torch.nn.init.xavier_uniform_(self.H.weight, gain=torch.nn.init.calculate_gain("relu")) - torch.nn.init.xavier_uniform_(self.T.weight, gain=torch.nn.init.calculate_gain("sigmoid")) - - def forward(self, inputs): - H = self.relu(self.H(inputs)) - T = self.sigmoid(self.T(inputs)) - return H * T + inputs * (1.0 - T) - - -class CBHG(nn.Module): - """CBHG module: a recurrent neural network composed of: - - 1-d convolution banks - - Highway networks + residual connections - - Bidirectional gated recurrent units - - Args: - in_features (int): sample size - K (int): max filter size in conv bank - projections (list): conv channel sizes for conv projections - num_highways (int): number of highways layers - - Shapes: - - input: (B, C, T_in) - - output: (B, T_in, C*2) - """ - - # pylint: disable=dangerous-default-value - def __init__( - self, - in_features, - K=16, - conv_bank_features=128, - conv_projections=[128, 128], - highway_features=128, - gru_features=128, - num_highways=4, - ): - super().__init__() - self.in_features = in_features - self.conv_bank_features = conv_bank_features - self.highway_features = highway_features - self.gru_features = gru_features - self.conv_projections = conv_projections - self.relu = nn.ReLU() - # list of conv1d bank with filter size k=1...K - # TODO: try dilational layers instead - self.conv1d_banks = nn.ModuleList( - [ - BatchNormConv1d( - in_features, - conv_bank_features, - kernel_size=k, - stride=1, - padding=[(k - 1) // 2, k // 2], - activation=self.relu, - ) - for k in range(1, K + 1) - ] - ) - # max pooling of conv bank, with padding - # TODO: 
try average pooling OR larger kernel size - out_features = [K * conv_bank_features] + conv_projections[:-1] - activations = [self.relu] * (len(conv_projections) - 1) - activations += [None] - # setup conv1d projection layers - layer_set = [] - for in_size, out_size, ac in zip(out_features, conv_projections, activations): - layer = BatchNormConv1d(in_size, out_size, kernel_size=3, stride=1, padding=[1, 1], activation=ac) - layer_set.append(layer) - self.conv1d_projections = nn.ModuleList(layer_set) - # setup Highway layers - if self.highway_features != conv_projections[-1]: - self.pre_highway = nn.Linear(conv_projections[-1], highway_features, bias=False) - self.highways = nn.ModuleList([Highway(highway_features, highway_features) for _ in range(num_highways)]) - # bi-directional GPU layer - self.gru = nn.GRU(gru_features, gru_features, 1, batch_first=True, bidirectional=True) - - def forward(self, inputs): - # (B, in_features, T_in) - x = inputs - # (B, hid_features*K, T_in) - # Concat conv1d bank outputs - outs = [] - for conv1d in self.conv1d_banks: - out = conv1d(x) - outs.append(out) - x = torch.cat(outs, dim=1) - assert x.size(1) == self.conv_bank_features * len(self.conv1d_banks) - for conv1d in self.conv1d_projections: - x = conv1d(x) - x += inputs - x = x.transpose(1, 2) - if self.highway_features != self.conv_projections[-1]: - x = self.pre_highway(x) - # Residual connection - # TODO: try residual scaling as in Deep Voice 3 - # TODO: try plain residual layers - for highway in self.highways: - x = highway(x) - # (B, T_in, hid_features*2) - # TODO: replace GRU with convolution as in Deep Voice 3 - self.gru.flatten_parameters() - outputs, _ = self.gru(x) - return outputs - - -class EncoderCBHG(nn.Module): - r"""CBHG module with Encoder specific arguments""" - - def __init__(self): - super().__init__() - self.cbhg = CBHG( - 128, - K=16, - conv_bank_features=128, - conv_projections=[128, 128], - highway_features=128, - gru_features=128, - num_highways=4, - ) - - def forward(self, x): - return self.cbhg(x) - - -class Encoder(nn.Module): - r"""Stack Prenet and CBHG module for encoder - Args: - inputs (FloatTensor): embedding features - - Shapes: - - inputs: (B, T, D_in) - - outputs: (B, T, 128 * 2) - """ - - def __init__(self, in_features): - super().__init__() - self.prenet = Prenet(in_features, out_features=[256, 128]) - self.cbhg = EncoderCBHG() - - def forward(self, inputs): - # B x T x prenet_dim - outputs = self.prenet(inputs) - outputs = self.cbhg(outputs.transpose(1, 2)) - return outputs - - -class PostCBHG(nn.Module): - def __init__(self, mel_dim): - super().__init__() - self.cbhg = CBHG( - mel_dim, - K=8, - conv_bank_features=128, - conv_projections=[256, mel_dim], - highway_features=128, - gru_features=128, - num_highways=4, - ) - - def forward(self, x): - return self.cbhg(x) - - -class Decoder(nn.Module): - """Tacotron decoder. - - Args: - in_channels (int): number of input channels. - frame_channels (int): number of feature frame channels. - r (int): number of outputs per time step (reduction rate). - memory_size (int): size of the past window. if <= 0 memory_size = r - attn_type (string): type of attention used in decoder. - attn_windowing (bool): if true, define an attention window centered to maximum - attention response. It provides more robust attention alignment especially - at interence time. - attn_norm (string): attention normalization function. 'sigmoid' or 'softmax'. - prenet_type (string): 'original' or 'bn'. - prenet_dropout (float): prenet dropout rate. 
- forward_attn (bool): if true, use forward attention method. https://arxiv.org/abs/1807.06736 - trans_agent (bool): if true, use transition agent. https://arxiv.org/abs/1807.06736 - forward_attn_mask (bool): if true, mask attention values smaller than a threshold. - location_attn (bool): if true, use location sensitive attention. - attn_K (int): number of attention heads for GravesAttention. - separate_stopnet (bool): if true, detach stopnet input to prevent gradient flow. - d_vector_dim (int): size of speaker embedding vector, for multi-speaker training. - max_decoder_steps (int): Maximum number of steps allowed for the decoder. Defaults to 500. - """ - - # Pylint gets confused by PyTorch conventions here - # pylint: disable=attribute-defined-outside-init - - def __init__( - self, - in_channels, - frame_channels, - r, - memory_size, - attn_type, - attn_windowing, - attn_norm, - prenet_type, - prenet_dropout, - forward_attn, - trans_agent, - forward_attn_mask, - location_attn, - attn_K, - separate_stopnet, - max_decoder_steps, - ): - super().__init__() - self.r_init = r - self.r = r - self.in_channels = in_channels - self.max_decoder_steps = max_decoder_steps - self.use_memory_queue = memory_size > 0 - self.memory_size = memory_size if memory_size > 0 else r - self.frame_channels = frame_channels - self.separate_stopnet = separate_stopnet - self.query_dim = 256 - # memory -> |Prenet| -> processed_memory - prenet_dim = frame_channels * self.memory_size if self.use_memory_queue else frame_channels - self.prenet = Prenet(prenet_dim, prenet_type, prenet_dropout, out_features=[256, 128]) - # processed_inputs, processed_memory -> |Attention| -> Attention, attention, RNN_State - # attention_rnn generates queries for the attention mechanism - self.attention_rnn = nn.GRUCell(in_channels + 128, self.query_dim) - self.attention = init_attn( - attn_type=attn_type, - query_dim=self.query_dim, - embedding_dim=in_channels, - attention_dim=128, - location_attention=location_attn, - attention_location_n_filters=32, - attention_location_kernel_size=31, - windowing=attn_windowing, - norm=attn_norm, - forward_attn=forward_attn, - trans_agent=trans_agent, - forward_attn_mask=forward_attn_mask, - attn_K=attn_K, - ) - # (processed_memory | attention context) -> |Linear| -> decoder_RNN_input - self.project_to_decoder_in = nn.Linear(256 + in_channels, 256) - # decoder_RNN_input -> |RNN| -> RNN_state - self.decoder_rnns = nn.ModuleList([nn.GRUCell(256, 256) for _ in range(2)]) - # RNN_state -> |Linear| -> mel_spec - self.proj_to_mel = nn.Linear(256, frame_channels * self.r_init) - # learn init values instead of zero init. 
- self.stopnet = StopNet(256 + frame_channels * self.r_init) - - def set_r(self, new_r): - self.r = new_r - - def _reshape_memory(self, memory): - """ - Reshape the spectrograms for given 'r' - """ - # Grouping multiple frames if necessary - if memory.size(-1) == self.frame_channels: - memory = memory.view(memory.shape[0], memory.size(1) // self.r, -1) - # Time first (T_decoder, B, frame_channels) - memory = memory.transpose(0, 1) - return memory - - def _init_states(self, inputs): - """ - Initialization of decoder states - """ - B = inputs.size(0) - # go frame as zeros matrix - if self.use_memory_queue: - self.memory_input = torch.zeros(1, device=inputs.device).repeat(B, self.frame_channels * self.memory_size) - else: - self.memory_input = torch.zeros(1, device=inputs.device).repeat(B, self.frame_channels) - # decoder states - self.attention_rnn_hidden = torch.zeros(1, device=inputs.device).repeat(B, 256) - self.decoder_rnn_hiddens = [ - torch.zeros(1, device=inputs.device).repeat(B, 256) for idx in range(len(self.decoder_rnns)) - ] - self.context_vec = inputs.data.new(B, self.in_channels).zero_() - # cache attention inputs - self.processed_inputs = self.attention.preprocess_inputs(inputs) - - def _parse_outputs(self, outputs, attentions, stop_tokens): - # Back to batch first - attentions = torch.stack(attentions).transpose(0, 1) - stop_tokens = torch.stack(stop_tokens).transpose(0, 1) - outputs = torch.stack(outputs).transpose(0, 1).contiguous() - outputs = outputs.view(outputs.size(0), -1, self.frame_channels) - outputs = outputs.transpose(1, 2) - return outputs, attentions, stop_tokens - - def decode(self, inputs, mask=None): - # Prenet - processed_memory = self.prenet(self.memory_input) - # Attention RNN - self.attention_rnn_hidden = self.attention_rnn( - torch.cat((processed_memory, self.context_vec), -1), self.attention_rnn_hidden - ) - self.context_vec = self.attention(self.attention_rnn_hidden, inputs, self.processed_inputs, mask) - # Concat RNN output and attention context vector - decoder_input = self.project_to_decoder_in(torch.cat((self.attention_rnn_hidden, self.context_vec), -1)) - - # Pass through the decoder RNNs - for idx, decoder_rnn in enumerate(self.decoder_rnns): - self.decoder_rnn_hiddens[idx] = decoder_rnn(decoder_input, self.decoder_rnn_hiddens[idx]) - # Residual connection - decoder_input = self.decoder_rnn_hiddens[idx] + decoder_input - decoder_output = decoder_input - - # predict mel vectors from decoder vectors - output = self.proj_to_mel(decoder_output) - # output = torch.sigmoid(output) - # predict stop token - stopnet_input = torch.cat([decoder_output, output], -1) - if self.separate_stopnet: - stop_token = self.stopnet(stopnet_input.detach()) - else: - stop_token = self.stopnet(stopnet_input) - output = output[:, : self.r * self.frame_channels] - return output, stop_token, self.attention.attention_weights - - def _update_memory_input(self, new_memory): - if self.use_memory_queue: - if self.memory_size > self.r: - # memory queue size is larger than number of frames per decoder iter - self.memory_input = torch.cat( - [new_memory, self.memory_input[:, : (self.memory_size - self.r) * self.frame_channels].clone()], - dim=-1, - ) - else: - # memory queue size smaller than number of frames per decoder iter - self.memory_input = new_memory[:, : self.memory_size * self.frame_channels] - else: - # use only the last frame prediction - # assert new_memory.shape[-1] == self.r * self.frame_channels - self.memory_input = new_memory[:, self.frame_channels * (self.r - 1) :] 
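The queue branch of `_update_memory_input` above is easiest to follow with concrete shapes. The short sketch below is an illustration only: the sizes (`B=2`, `frame_channels=80`, `r=2`, `memory_size=5`) and the standalone helper name `update_memory_input` are invented for the example and are not part of this file. It mimics the same concatenation step: the newest `r` frames go to the front and the oldest `(memory_size - r)` frames are kept behind them, so the prenet always conditions on the last `memory_size` frames rather than only the most recent reduction group.

```python
# Illustrative sketch with hypothetical sizes, mirroring the memory-queue update above.
import torch

B, frame_channels, r, memory_size = 2, 80, 2, 5

# The queue holds the last `memory_size` frames, flattened per batch item.
memory_input = torch.zeros(B, frame_channels * memory_size)


def update_memory_input(memory_input, new_memory):
    # new_memory: (B, r * frame_channels); it is pushed to the front of the queue
    # and the oldest r frames fall off the back.
    kept = memory_input[:, : (memory_size - r) * frame_channels].clone()
    return torch.cat([new_memory, kept], dim=-1)


new_frames = torch.randn(B, r * frame_channels)  # one decoder step of output
memory_input = update_memory_input(memory_input, new_frames)
print(memory_input.shape)  # torch.Size([2, 400]) == (B, memory_size * frame_channels)
```

Keeping a window wider than `r` appears to be the point of `memory_size`: the prenet input grows to `frame_channels * memory_size`, but the decoder gets to condition on several past frames instead of just the last reduction group.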
- - def forward(self, inputs, memory, mask): - """ - Args: - inputs: Encoder outputs. - memory: Decoder memory (autoregression. If None (at eval-time), - decoder outputs are used as decoder inputs. If None, it uses the last - output as the input. - mask: Attention mask for sequence padding. - - Shapes: - - inputs: (B, T, D_out_enc) - - memory: (B, T_mel, D_mel) - """ - # Run greedy decoding if memory is None - memory = self._reshape_memory(memory) - outputs = [] - attentions = [] - stop_tokens = [] - t = 0 - self._init_states(inputs) - self.attention.init_states(inputs) - while len(outputs) < memory.size(0): - if t > 0: - new_memory = memory[t - 1] - self._update_memory_input(new_memory) - - output, stop_token, attention = self.decode(inputs, mask) - outputs += [output] - attentions += [attention] - stop_tokens += [stop_token.squeeze(1)] - t += 1 - return self._parse_outputs(outputs, attentions, stop_tokens) - - def inference(self, inputs): - """ - Args: - inputs: encoder outputs. - Shapes: - - inputs: batch x time x encoder_out_dim - """ - outputs = [] - attentions = [] - stop_tokens = [] - t = 0 - self._init_states(inputs) - self.attention.init_states(inputs) - while True: - if t > 0: - new_memory = outputs[-1] - self._update_memory_input(new_memory) - output, stop_token, attention = self.decode(inputs, None) - stop_token = torch.sigmoid(stop_token.data) - outputs += [output] - attentions += [attention] - stop_tokens += [stop_token] - t += 1 - if t > inputs.shape[1] / 4 and (stop_token > 0.6 or attention[:, -1].item() > 0.6): - break - if t > self.max_decoder_steps: - print(" | > Decoder stopped with 'max_decoder_steps") - break - return self._parse_outputs(outputs, attentions, stop_tokens) - - -class StopNet(nn.Module): - r"""Stopnet signalling decoder to stop inference. - Args: - in_features (int): feature dimension of input. 
- """ - - def __init__(self, in_features): - super().__init__() - self.dropout = nn.Dropout(0.1) - self.linear = nn.Linear(in_features, 1) - torch.nn.init.xavier_uniform_(self.linear.weight, gain=torch.nn.init.calculate_gain("linear")) - - def forward(self, inputs): - outputs = self.dropout(inputs) - outputs = self.linear(outputs) - return outputs diff --git a/spaces/arxify/RVC-beta-v2-0618/extract_feature_print.py b/spaces/arxify/RVC-beta-v2-0618/extract_feature_print.py deleted file mode 100644 index 7a9ca9f8c22905f8c31a16459cdb0958391403cb..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/extract_feature_print.py +++ /dev/null @@ -1,123 +0,0 @@ -import os, sys, traceback - -# device=sys.argv[1] -n_part = int(sys.argv[2]) -i_part = int(sys.argv[3]) -if len(sys.argv) == 5: - exp_dir = sys.argv[4] - version = sys.argv[5] -else: - i_gpu = sys.argv[4] - exp_dir = sys.argv[5] - os.environ["CUDA_VISIBLE_DEVICES"] = str(i_gpu) - version = sys.argv[6] -import torch -import torch.nn.functional as F -import soundfile as sf -import numpy as np -from fairseq import checkpoint_utils - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -if torch.cuda.is_available(): - device = "cuda" -elif torch.backends.mps.is_available(): - device = "mps" -else: - device = "cpu" - -f = open("%s/extract_f0_feature.log" % exp_dir, "a+") - - -def printt(strr): - print(strr) - f.write("%s\n" % strr) - f.flush() - - -printt(sys.argv) -model_path = "hubert_base.pt" - -printt(exp_dir) -wavPath = "%s/1_16k_wavs" % exp_dir -outPath = ( - "%s/3_feature256" % exp_dir if version == "v1" else "%s/3_feature768" % exp_dir -) -os.makedirs(outPath, exist_ok=True) - - -# wave must be 16k, hop_size=320 -def readwave(wav_path, normalize=False): - wav, sr = sf.read(wav_path) - assert sr == 16000 - feats = torch.from_numpy(wav).float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - if normalize: - with torch.no_grad(): - feats = F.layer_norm(feats, feats.shape) - feats = feats.view(1, -1) - return feats - - -# HuBERT model -printt("load model(s) from {}".format(model_path)) -# if hubert model is exist -if os.access(model_path, os.F_OK) == False: - printt( - "Error: Extracting is shut down because %s does not exist, you may download it from https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main" - % model_path - ) - exit(0) -models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", -) -model = models[0] -model = model.to(device) -printt("move model to %s" % device) -if device not in ["mps", "cpu"]: - model = model.half() -model.eval() - -todo = sorted(list(os.listdir(wavPath)))[i_part::n_part] -n = max(1, len(todo) // 10) # 最多打印十条 -if len(todo) == 0: - printt("no-feature-todo") -else: - printt("all-feature-%s" % len(todo)) - for idx, file in enumerate(todo): - try: - if file.endswith(".wav"): - wav_path = "%s/%s" % (wavPath, file) - out_path = "%s/%s" % (outPath, file.replace("wav", "npy")) - - if os.path.exists(out_path): - continue - - feats = readwave(wav_path, normalize=saved_cfg.task.normalize) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.half().to(device) - if device not in ["mps", "cpu"] - else feats.to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9 if version == "v1" else 12, # layer 9 - } - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = ( - model.final_proj(logits[0]) if 
version == "v1" else logits[0] - ) - - feats = feats.squeeze(0).float().cpu().numpy() - if np.isnan(feats).sum() == 0: - np.save(out_path, feats, allow_pickle=False) - else: - printt("%s-contains nan" % file) - if idx % n == 0: - printt("now-%s,all-%s,%s,%s" % (len(todo), idx, file, feats.shape)) - except: - printt(traceback.format_exc()) - printt("all-feature-done") diff --git a/spaces/arxify/RVC-beta-v2-0618/infer_uvr5.py b/spaces/arxify/RVC-beta-v2-0618/infer_uvr5.py deleted file mode 100644 index 884c841dd6179677bd0a6d5f5f639954a206a77e..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/infer_uvr5.py +++ /dev/null @@ -1,363 +0,0 @@ -import os, sys, torch, warnings, pdb - -now_dir = os.getcwd() -sys.path.append(now_dir) -from json import load as ll - -warnings.filterwarnings("ignore") -import librosa -import importlib -import numpy as np -import hashlib, math -from tqdm import tqdm -from uvr5_pack.lib_v5 import spec_utils -from uvr5_pack.utils import _get_name_params, inference -from uvr5_pack.lib_v5.model_param_init import ModelParameters -import soundfile as sf -from uvr5_pack.lib_v5.nets_new import CascadedNet -from uvr5_pack.lib_v5 import nets_61968KB as nets - - -class _audio_pre_: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("uvr5_pack/lib_v5/modelparams/4band_v2.json") - model = nets.CascadedASPPNet(mp.param["bins"] * 2) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_(self, music_file, ins_root=None, vocal_root=None, format="flac"): - if ins_root is None and vocal_root is None: - return "No save root." 
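-        # The per-band loop below mirrors the band layout from ModelParameters:
-        # the highest band is loaded at that band's sample rate, each lower band is
-        # resampled from the band above it, every band is converted to a spectrogram,
-        # and the band spectrograms are combined before running the separation model.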
- name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑 - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - print("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - print("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, 
"vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - - -class _audio_pre_new: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("uvr5_pack/lib_v5/modelparams/4band_v3.json") - nout = 64 if "DeReverb" in model_path else 48 - model = CascadedNet(mp.param["bins"] * 2, nout) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_( - self, music_file, vocal_root=None, ins_root=None, format="flac" - ): # 3个VR模型vocal和ins是反的 - if ins_root is None and vocal_root is None: - return "No save root." - name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑 - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - print("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - 
"instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - print("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - - -if __name__ == "__main__": - device = "cuda" - is_half = True - # model_path = "uvr5_weights/2_HP-UVR.pth" - # model_path = "uvr5_weights/VR-DeEchoDeReverb.pth" - # model_path = "uvr5_weights/VR-DeEchoNormal.pth" - model_path = "uvr5_weights/DeEchoNormal.pth" - # pre_fun = _audio_pre_(model_path=model_path, device=device, is_half=True,agg=10) - pre_fun = _audio_pre_new(model_path=model_path, device=device, is_half=True, agg=10) - audio_path = "雪雪伴奏对消HP5.wav" - save_path = "opt" - pre_fun._path_audio_(audio_path, save_path, save_path) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/__init__.py deleted file mode 100644 index 05fc139ade46ec8af9c0b747d8fab1cc25cac090..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Cipher/__init__.py +++ /dev/null @@ -1,60 +0,0 @@ -# -*- coding: utf-8 -*- -# -# SelfTest/Cipher/__init__.py: Self-test for cipher modules -# -# Written in 2008 by Dwayne C. Litzenberger -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
-# =================================================================== - -"""Self-test for cipher modules""" - -__revision__ = "$Id$" - -def get_tests(config={}): - tests = [] - from Crypto.SelfTest.Cipher import test_AES; tests += test_AES.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_ARC2; tests += test_ARC2.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_ARC4; tests += test_ARC4.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_Blowfish; tests += test_Blowfish.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_CAST; tests += test_CAST.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_DES3; tests += test_DES3.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_DES; tests += test_DES.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_Salsa20; tests += test_Salsa20.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_ChaCha20; tests += test_ChaCha20.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_ChaCha20_Poly1305; tests += test_ChaCha20_Poly1305.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_pkcs1_15; tests += test_pkcs1_15.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_pkcs1_oaep; tests += test_pkcs1_oaep.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_OCB; tests += test_OCB.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_CBC; tests += test_CBC.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_CFB; tests += test_CFB.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_OpenPGP; tests += test_OpenPGP.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_OFB; tests += test_OFB.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_CTR; tests += test_CTR.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_CCM; tests += test_CCM.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_EAX; tests += test_EAX.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_GCM; tests += test_GCM.get_tests(config=config) - from Crypto.SelfTest.Cipher import test_SIV; tests += test_SIV.get_tests(config=config) - return tests - -if __name__ == '__main__': - import unittest - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') - -# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/scatter_with_loess.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/scatter_with_loess.py deleted file mode 100644 index 5e76fdac651a96af3cb10851bc9f7fd3ed36bed2..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/scatter_with_loess.py +++ /dev/null @@ -1,31 +0,0 @@ -""" -Scatter Plot with LOESS Lines ------------------------------ -This example shows how to add a trend line to a scatter plot using -the LOESS transform (LOcally Estimated Scatterplot Smoothing). 
-""" -# category: scatter plots - -import altair as alt -import pandas as pd -import numpy as np - -np.random.seed(1) - -source = pd.DataFrame({ - 'x': np.arange(100), - 'A': np.random.randn(100).cumsum(), - 'B': np.random.randn(100).cumsum(), - 'C': np.random.randn(100).cumsum(), -}) - -base = alt.Chart(source).mark_circle(opacity=0.5).transform_fold( - fold=['A', 'B', 'C'], - as_=['category', 'y'] -).encode( - alt.X('x:Q'), - alt.Y('y:Q'), - alt.Color('category:N') -) - -base + base.transform_loess('x', 'y', groupby=['category']).mark_line(size=4) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/slope_graph.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/slope_graph.py deleted file mode 100644 index 422cf64b46fe4fc0bd45508b1180437d3bae7b45..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/slope_graph.py +++ /dev/null @@ -1,16 +0,0 @@ -""" -Slope Graph ------------------------ -This example shows how to make Slope Graph. -""" -# category: line charts -import altair as alt -from vega_datasets import data - -source = data.barley() - -alt.Chart(source).mark_line().encode( - x='year:O', - y='median(yield)', - color='site' -) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/world_map.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/world_map.py deleted file mode 100644 index 34263937bb66c450bb5cde1eeb2c9bff42ec49a9..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/world_map.py +++ /dev/null @@ -1,27 +0,0 @@ -""" -World Map ---------- - -This example shows how to create a world map using data generators for -different background layers. 
-""" -# category: maps - -import altair as alt -from vega_datasets import data - -# Data generators for the background -sphere = alt.sphere() -graticule = alt.graticule() - -# Source of land data -source = alt.topo_feature(data.world_110m.url, 'countries') - -# Layering and configuring the components -alt.layer( - alt.Chart(sphere).mark_geoshape(fill='lightblue'), - alt.Chart(graticule).mark_geoshape(stroke='white', strokeWidth=0.5), - alt.Chart(source).mark_geoshape(fill='ForestGreen', stroke='black') -).project( - 'naturalEarth1' -).properties(width=600, height=400).configure_view(stroke=None) diff --git a/spaces/arxnov/anotest/text/shanghainese.py b/spaces/arxnov/anotest/text/shanghainese.py deleted file mode 100644 index cb29c24a08d2e406e8399cf7bc9fe5cb43cb9c61..0000000000000000000000000000000000000000 --- a/spaces/arxnov/anotest/text/shanghainese.py +++ /dev/null @@ -1,64 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('zaonhe') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ᴇ'), - ('B', 'bi'), - ('C', 'si'), - ('D', 'di'), - ('E', 'i'), - ('F', 'ᴇf'), - ('G', 'dʑi'), - ('H', 'ᴇtɕʰ'), - ('I', 'ᴀi'), - ('J', 'dʑᴇ'), - ('K', 'kʰᴇ'), - ('L', 'ᴇl'), - ('M', 'ᴇm'), - ('N', 'ᴇn'), - ('O', 'o'), - ('P', 'pʰi'), - ('Q', 'kʰiu'), - ('R', 'ᴀl'), - ('S', 'ᴇs'), - ('T', 'tʰi'), - ('U', 'ɦiu'), - ('V', 'vi'), - ('W', 'dᴀbɤliu'), - ('X', 'ᴇks'), - ('Y', 'uᴀi'), - ('Z', 'zᴇ') -]] - - -def _number_to_shanghainese(num): - num = cn2an.an2cn(num).replace('一十','十').replace('二十', '廿').replace('二', '两') - return re.sub(r'((?:^|[^三四五六七八九])十|廿)两', r'\1二', num) - - -def number_to_shanghainese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def shanghainese_to_ipa(text): - text = number_to_shanghainese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! 
', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/asciicorp/Legal-ai/policy.py b/spaces/asciicorp/Legal-ai/policy.py deleted file mode 100644 index ad7479aa3580b2bc953bc0679f7a32a1c2367967..0000000000000000000000000000000000000000 --- a/spaces/asciicorp/Legal-ai/policy.py +++ /dev/null @@ -1,20 +0,0 @@ -import streamlit as st -from langchain.llms import OpenAI -from langchain.prompts import PromptTemplate -import openai -import os -os.environ["OPENAI_API_KEY"] = "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" - -# Define function to check if agreement follows policy -def check_agreement(policy_text, agreement_text): - prompt = f"Does the following agreement follow the company policy?\n\nPolicy:\n{policy_text}\n\nAgreement:\n{agreement_text}\n\nAnswer:\nReasons:" - response = openai.Completion.create( - engine="text-davinci-002", - prompt=prompt, - temperature=0.5, - max_tokens=1024, - n=1, - stop=None, - timeout=15, - ) - return response.choices[0].text.strip() diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Jack Pun.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Jack Pun.html deleted file mode 100644 index 98a1175af98f37ca07bc5954f69a4e3ea87496c4..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Jack Pun.html +++ /dev/null @@ -1,136 +0,0 @@ - - - - Jack Pun - - - - -
        -

        Jack Pun

        - -
        -
        How did you hear about SM?
        • LinkedIn, saw me post. Heard of it before

        Career summary
        • DS manager at TD wealth (2 years)
        • leads team of 5
        • predictive analysis, causal analysis
        • Previously did insight ds
        • before that, PhD in Physics at Boston U. (ML for physical systems)

        Mentorship exp?
        • As a manager, yes, mentoring my juniors
        • technical and otherwise (communication, stakeholder management)
        • During PhD, was TA

        What do beginners need the most help with? How can you help?
        • these days, usually technically competent, at least on the surface
        • under the hood, they might not have good depth on some concepts
        • more important - business acumen! engage with stakeholders, manage expectations
        can help
        • talk about my exp, interview prep, cold email, cold reach outs, and I want to give back to the community
        • technical background

        career coach, life coach, tutor?
        • tutoring is how they get through the interview
        • career coach is more long-term, I can speak to what I've seen, but don't have a lot of exp
        • life coach - not sure, that is very long-term, but can speak to my experience as an immigrant
        -
        -

        Questions about SM?
        • edge cases of ISA?
          • e.g. what if the mentee already has an interview?
        • do I have to craft the ISA?
        • what happens if the mentee gets a job within two weeks?
        • How many mentors do you have?
        • How do you think about the relationship btw SM and mentors? Partnership?
        -
        -


        -
        - -
        - - - \ No newline at end of file diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/modules/diffusionmodules/__init__.py b/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/modules/diffusionmodules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/optimizedSD/samplers.py b/spaces/awaawawawa/iurf7irfuyytruyyugb/optimizedSD/samplers.py deleted file mode 100644 index 6a68e8e1a1b3d8340b44b59fc6c3994c46de982a..0000000000000000000000000000000000000000 --- a/spaces/awaawawawa/iurf7irfuyytruyyugb/optimizedSD/samplers.py +++ /dev/null @@ -1,252 +0,0 @@ -from scipy import integrate -import torch -from tqdm.auto import trange, tqdm -import torch.nn as nn - - -def append_zero(x): - return torch.cat([x, x.new_zeros([1])]) - - -def append_dims(x, target_dims): - """Appends dimensions to the end of a tensor until it has target_dims dimensions.""" - dims_to_append = target_dims - x.ndim - if dims_to_append < 0: - raise ValueError(f'input has {x.ndim} dims but target_dims is {target_dims}, which is less') - return x[(...,) + (None,) * dims_to_append] - -def get_ancestral_step(sigma_from, sigma_to): - """Calculates the noise level (sigma_down) to step down to and the amount - of noise to add (sigma_up) when doing an ancestral sampling step.""" - sigma_up = (sigma_to ** 2 * (sigma_from ** 2 - sigma_to ** 2) / sigma_from ** 2) ** 0.5 - sigma_down = (sigma_to ** 2 - sigma_up ** 2) ** 0.5 - return sigma_down, sigma_up - - -class DiscreteSchedule(nn.Module): - """A mapping between continuous noise levels (sigmas) and a list of discrete noise - levels.""" - - def __init__(self, sigmas, quantize): - super().__init__() - self.register_buffer('sigmas', sigmas) - self.quantize = quantize - - def get_sigmas(self, n=None): - if n is None: - return append_zero(self.sigmas.flip(0)) - t_max = len(self.sigmas) - 1 - t = torch.linspace(t_max, 0, n, device=self.sigmas.device) - return append_zero(self.t_to_sigma(t)) - - def sigma_to_t(self, sigma, quantize=None): - quantize = self.quantize if quantize is None else quantize - dists = torch.abs(sigma - self.sigmas[:, None]) - if quantize: - return torch.argmin(dists, dim=0).view(sigma.shape) - low_idx, high_idx = torch.sort(torch.topk(dists, dim=0, k=2, largest=False).indices, dim=0)[0] - low, high = self.sigmas[low_idx], self.sigmas[high_idx] - w = (low - sigma) / (low - high) - w = w.clamp(0, 1) - t = (1 - w) * low_idx + w * high_idx - return t.view(sigma.shape) - - def t_to_sigma(self, t): - t = t.float() - low_idx, high_idx, w = t.floor().long(), t.ceil().long(), t.frac() - # print(low_idx, high_idx, w ) - return (1 - w) * self.sigmas[low_idx] + w * self.sigmas[high_idx] - - -class DiscreteEpsDDPMDenoiser(DiscreteSchedule): - """A wrapper for discrete schedule DDPM models that output eps (the predicted - noise).""" - - def __init__(self, alphas_cumprod, quantize): - super().__init__(((1 - alphas_cumprod) / alphas_cumprod) ** 0.5, quantize) - self.sigma_data = 1. 
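-    # These scalings turn an eps (predicted-noise) model into a denoiser:
-    # the input is multiplied by c_in = 1 / sqrt(sigma^2 + sigma_data^2)
-    # before the model call, and forward() returns input + eps * c_out,
-    # where c_out = -sigma.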
- - def get_scalings(self, sigma): - c_out = -sigma - c_in = 1 / (sigma ** 2 + self.sigma_data ** 2) ** 0.5 - return c_out, c_in - - def get_eps(self, *args, **kwargs): - return self.inner_model(*args, **kwargs) - - def forward(self, input, sigma, **kwargs): - c_out, c_in = [append_dims(x, input.ndim) for x in self.get_scalings(sigma)] - eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs) - return input + eps * c_out - -class CompVisDenoiser(DiscreteEpsDDPMDenoiser): - """A wrapper for CompVis diffusion models.""" - - def __init__(self, alphas_cumprod, quantize=False, device='cpu'): - super().__init__(alphas_cumprod, quantize=quantize) - - def get_eps(self, *args, **kwargs): - return self.inner_model.apply_model(*args, **kwargs) - - -def to_d(x, sigma, denoised): - """Converts a denoiser output to a Karras ODE derivative.""" - return (x - denoised) / append_dims(sigma, x.ndim) - - -def get_ancestral_step(sigma_from, sigma_to): - """Calculates the noise level (sigma_down) to step down to and the amount - of noise to add (sigma_up) when doing an ancestral sampling step.""" - sigma_up = (sigma_to ** 2 * (sigma_from ** 2 - sigma_to ** 2) / sigma_from ** 2) ** 0.5 - sigma_down = (sigma_to ** 2 - sigma_up ** 2) ** 0.5 - return sigma_down, sigma_up - - -@torch.no_grad() -def sample_euler(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.): - """Implements Algorithm 2 (Euler steps) from Karras et al. (2022).""" - extra_args = {} if extra_args is None else extra_args - s_in = x.new_ones([x.shape[0]]) - for i in trange(len(sigmas) - 1, disable=disable): - gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0. - eps = torch.randn_like(x) * s_noise - sigma_hat = sigmas[i] * (gamma + 1) - if gamma > 0: - x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5 - denoised = model(x, sigma_hat * s_in, **extra_args) - d = to_d(x, sigma_hat, denoised) - if callback is not None: - callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised}) - dt = sigmas[i + 1] - sigma_hat - # Euler method - x = x + d * dt - return x - - - -@torch.no_grad() -def sample_euler_ancestral(model, x, sigmas, extra_args=None, callback=None, disable=None): - """Ancestral sampling with Euler method steps.""" - extra_args = {} if extra_args is None else extra_args - s_in = x.new_ones([x.shape[0]]) - for i in trange(len(sigmas) - 1, disable=disable): - denoised = model(x, sigmas[i] * s_in, **extra_args) - sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1]) - if callback is not None: - callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised}) - d = to_d(x, sigmas[i], denoised) - # Euler method - dt = sigma_down - sigmas[i] - x = x + d * dt - x = x + torch.randn_like(x) * sigma_up - return x - - -@torch.no_grad() -def sample_heun(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.): - """Implements Algorithm 2 (Heun steps) from Karras et al. (2022).""" - extra_args = {} if extra_args is None else extra_args - s_in = x.new_ones([x.shape[0]]) - for i in trange(len(sigmas) - 1, disable=disable): - gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0. 
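-        # "Churn" step from Karras et al.: when gamma > 0, fresh noise is added
-        # so the sample is temporarily lifted to the higher noise level
-        # sigma_hat before the deterministic update below.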
- eps = torch.randn_like(x) * s_noise - sigma_hat = sigmas[i] * (gamma + 1) - if gamma > 0: - x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5 - denoised = model(x, sigma_hat * s_in, **extra_args) - d = to_d(x, sigma_hat, denoised) - if callback is not None: - callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised}) - dt = sigmas[i + 1] - sigma_hat - if sigmas[i + 1] == 0: - # Euler method - x = x + d * dt - else: - # Heun's method - x_2 = x + d * dt - denoised_2 = model(x_2, sigmas[i + 1] * s_in, **extra_args) - d_2 = to_d(x_2, sigmas[i + 1], denoised_2) - d_prime = (d + d_2) / 2 - x = x + d_prime * dt - return x - - -@torch.no_grad() -def sample_dpm_2(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.): - """A sampler inspired by DPM-Solver-2 and Algorithm 2 from Karras et al. (2022).""" - extra_args = {} if extra_args is None else extra_args - s_in = x.new_ones([x.shape[0]]) - for i in trange(len(sigmas) - 1, disable=disable): - gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0. - eps = torch.randn_like(x) * s_noise - sigma_hat = sigmas[i] * (gamma + 1) - if gamma > 0: - x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5 - denoised = model(x, sigma_hat * s_in, **extra_args) - d = to_d(x, sigma_hat, denoised) - if callback is not None: - callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised}) - # Midpoint method, where the midpoint is chosen according to a rho=3 Karras schedule - sigma_mid = ((sigma_hat ** (1 / 3) + sigmas[i + 1] ** (1 / 3)) / 2) ** 3 - dt_1 = sigma_mid - sigma_hat - dt_2 = sigmas[i + 1] - sigma_hat - x_2 = x + d * dt_1 - denoised_2 = model(x_2, sigma_mid * s_in, **extra_args) - d_2 = to_d(x_2, sigma_mid, denoised_2) - x = x + d_2 * dt_2 - return x - - -@torch.no_grad() -def sample_dpm_2_ancestral(model, x, sigmas, extra_args=None, callback=None, disable=None): - """Ancestral sampling with DPM-Solver inspired second-order steps.""" - extra_args = {} if extra_args is None else extra_args - s_in = x.new_ones([x.shape[0]]) - for i in trange(len(sigmas) - 1, disable=disable): - denoised = model(x, sigmas[i] * s_in, **extra_args) - sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1]) - if callback is not None: - callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised}) - d = to_d(x, sigmas[i], denoised) - # Midpoint method, where the midpoint is chosen according to a rho=3 Karras schedule - sigma_mid = ((sigmas[i] ** (1 / 3) + sigma_down ** (1 / 3)) / 2) ** 3 - dt_1 = sigma_mid - sigmas[i] - dt_2 = sigma_down - sigmas[i] - x_2 = x + d * dt_1 - denoised_2 = model(x_2, sigma_mid * s_in, **extra_args) - d_2 = to_d(x_2, sigma_mid, denoised_2) - x = x + d_2 * dt_2 - x = x + torch.randn_like(x) * sigma_up - return x - - -def linear_multistep_coeff(order, t, i, j): - if order - 1 > i: - raise ValueError(f'Order {order} too high for step {i}') - def fn(tau): - prod = 1. 
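-        # Lagrange basis polynomial for node j: the product over k != j of
-        # (tau - t[i - k]) / (t[i - j] - t[i - k]); the LMS coefficient is its
-        # integral over [t[i], t[i + 1]], computed numerically below.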
- for k in range(order): - if j == k: - continue - prod *= (tau - t[i - k]) / (t[i - j] - t[i - k]) - return prod - return integrate.quad(fn, t[i], t[i + 1], epsrel=1e-4)[0] - - -@torch.no_grad() -def sample_lms(model, x, sigmas, extra_args=None, callback=None, disable=None, order=4): - extra_args = {} if extra_args is None else extra_args - s_in = x.new_ones([x.shape[0]]) - ds = [] - for i in trange(len(sigmas) - 1, disable=disable): - denoised = model(x, sigmas[i] * s_in, **extra_args) - d = to_d(x, sigmas[i], denoised) - ds.append(d) - if len(ds) > order: - ds.pop(0) - if callback is not None: - callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised}) - cur_order = min(i + 1, order) - coeffs = [linear_multistep_coeff(cur_order, sigmas.cpu(), i, j) for j in range(cur_order)] - x = x + sum(coeff * d for coeff, d in zip(coeffs, reversed(ds))) - return x diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/ui/config/start.sh b/spaces/awaawawawa/iurf7irfuyytruyyugb/ui/config/start.sh deleted file mode 100644 index d6c75ad865f76f5e56dae23639a890231cf24b1b..0000000000000000000000000000000000000000 --- a/spaces/awaawawawa/iurf7irfuyytruyyugb/ui/config/start.sh +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash - -source installer/bin/activate - -conda-unpack - -scripts/on_env_start.sh diff --git a/spaces/awacke1/Joke-Book-No-Pun-Intended/README.md b/spaces/awacke1/Joke-Book-No-Pun-Intended/README.md deleted file mode 100644 index 9852b20bd6c39e018caf685d13f6f57f621b32ff..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Joke-Book-No-Pun-Intended/README.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Joke Book App - No Pun Intended -emoji: 🤪🤣😎 -colorFrom: green -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -📣 Press Release: "No Pun Intended" Joke App on HuggingFace! -🎉 We're excited to announce the launch of our new program, the "No Pun Intended" Jokes to make you laugh! 🎉 - -🤪 Are you tired of boring jokes that make you yawn? Do you want to laugh until your tummy hurts? - -Look no further than "No Pun Intended" Joke App for All! 🤣 - -😎 With 20 hilarious jokes to choose from, you'll never run out of funny puns to tell your friends and family. - -Whether you're in the classroom, on the playground, or at home, our joke app is the perfect way to brighten up your day. 😃 - -📚 And the best part? You can even add your own jokes to the joke book! Use the file IO elements to load and save jokes to the program. 📝 - -👦👧 So what are you waiting for? Bookmark and like the "No Pun Intended" Joke App today and start laughing out loud! 
😂 \ No newline at end of file diff --git a/spaces/awacke1/Slot-Machine-HTML5/README.md b/spaces/awacke1/Slot-Machine-HTML5/README.md deleted file mode 100644 index 3484ef5619c7987f150a7a247b482ffe2c8999c9..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Slot-Machine-HTML5/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Slot Machine HTML5 -emoji: 🍇🍒🍉 -colorFrom: purple -colorTo: purple -sdk: static -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Streamlit.Data.Editor/app.py b/spaces/awacke1/Streamlit.Data.Editor/app.py deleted file mode 100644 index 69156637e706d9603805e354f1c399812baa9e64..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Streamlit.Data.Editor/app.py +++ /dev/null @@ -1,47 +0,0 @@ -import streamlit as st -import pandas as pd -from PIL import Image -import os - -# Define the function signature -def display_data(data: pd.DataFrame): - st.experimental_data_editor(data) - #st.experimental_data_editor - -# Create the Streamlit app -def main(): - st.title("Upload Files and Display Data in Function Signature") - - # Create the "uploads" directory if it does not exist - if not os.path.exists("uploads"): - os.makedirs("uploads") - - # Create an upload form for CSV files - st.subheader("Upload CSV Files") - csv_file = st.file_uploader("Choose a CSV file", type=["csv"]) - - # Create an upload form for image files - st.subheader("Upload Image Files") - image_file = st.file_uploader("Choose an image file", type=["jpg", "jpeg", "png"]) - - # Add OS file listing of uploaded files - st.subheader("Uploaded Files") - files = os.listdir("uploads") - for file in files: - st.markdown(f"[{file}]({os.path.join('uploads', file)})") - - # Check if files are uploaded and display data in function signature - if csv_file is not None: - with open(os.path.join("uploads", csv_file.name), "wb") as f: - f.write(csv_file.getbuffer()) - data = pd.read_csv(os.path.join("uploads", csv_file.name)) - display_data(data) - - if image_file is not None: - with open(os.path.join("uploads", image_file.name), "wb") as f: - f.write(image_file.getbuffer()) - image = Image.open(os.path.join("uploads", image_file.name)) - st.image(image, caption="Uploaded Image") - -if __name__ == "__main__": - main() diff --git a/spaces/awacke1/StreamlitMultiplayerTicTacToe/app.py b/spaces/awacke1/StreamlitMultiplayerTicTacToe/app.py deleted file mode 100644 index 93ad70a131f67b52f2eb7aebaf0fbe39398e1268..0000000000000000000000000000000000000000 --- a/spaces/awacke1/StreamlitMultiplayerTicTacToe/app.py +++ /dev/null @@ -1,64 +0,0 @@ -import streamlit as st - -def multiplayer_game(): - st.title("Tic Tac Toe") - - game_state = [[' ' for i in range(3)] for j in range(3)] - player = "X" - winner = None - - def check_winner(): - # Check rows - for row in game_state: - if row == ["X", "X", "X"]: - return "X" - elif row == ["O", "O", "O"]: - return "O" - - # Check columns - for col in range(3): - if game_state[0][col] == game_state[1][col] == game_state[2][col] == "X": - return "X" - elif game_state[0][col] == game_state[1][col] == game_state[2][col] == "O": - return "O" - - # Check diagonals - if (game_state[0][0] == game_state[1][1] == game_state[2][2] == "X" or - game_state[0][2] == game_state[1][1] == game_state[2][0] == "X"): - return "X" - elif (game_state[0][0] == game_state[1][1] == game_state[2][2] == "O" or - game_state[0][2] == game_state[1][1] == game_state[2][0] == "O"): - return "O" - - 
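-        # no completed row, column or diagonal yet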
return None - - def render_game_state(): - for row_num, row in enumerate(game_state): - row_string = "|".join(row) - st.write(row_string) - if row_num != 2: - st.write("-+-+-") - - for i in range(9): - render_game_state() - st.write("Player ", player, " turn. Choose a square (1-9).") - chosen_square = st.number_input("", min_value=1, max_value=9, key=f"number_input_{i}") - row = (chosen_square - 1) // 3 - col = (chosen_square - 1) % 3 - if game_state[row][col] != " ": - st.write("Square already taken. Please choose another.") - continue - game_state[row][col] = player - winner = check_winner() - if winner: - st.write("Player ", winner, " wins!") - break - if player == "X": - player = "O" - else: - player = "X" - if not winner: - st.write("It's a draw!") - -if __name__ == "__main__": - multiplayer_game() \ No newline at end of file diff --git a/spaces/awen666/web-ui/_next/static/chunks/ff48af57.9bd46c4f54ef29df.js b/spaces/awen666/web-ui/_next/static/chunks/ff48af57.9bd46c4f54ef29df.js deleted file mode 100644 index 8a73871624398bc94d1c8bfbed822e88ae459e11..0000000000000000000000000000000000000000 --- a/spaces/awen666/web-ui/_next/static/chunks/ff48af57.9bd46c4f54ef29df.js +++ /dev/null @@ -1 +0,0 @@ -"use strict";(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[680],{48136:function(a,t,h){h.d(t,{etG:function(){return n}});var r=h(83270);function n(a){return(0,r.w_)({tag:"svg",attr:{fill:"currentColor",viewBox:"0 0 16 16"},child:[{tag:"path",attr:{d:"M4 1.5H3a2 2 0 0 0-2 2V14a2 2 0 0 0 2 2h10a2 2 0 0 0 2-2V3.5a2 2 0 0 0-2-2h-1v1h1a1 1 0 0 1 1 1V14a1 1 0 0 1-1 1H3a1 1 0 0 1-1-1V3.5a1 1 0 0 1 1-1h1v-1z"}},{tag:"path",attr:{d:"M9.5 1a.5.5 0 0 1 .5.5v1a.5.5 0 0 1-.5.5h-3a.5.5 0 0 1-.5-.5v-1a.5.5 0 0 1 .5-.5h3zm-3-1A1.5 1.5 0 0 0 5 1.5v1A1.5 1.5 0 0 0 6.5 4h3A1.5 1.5 0 0 0 11 2.5v-1A1.5 1.5 0 0 0 9.5 0h-3z"}}]})(a)}}}]); \ No newline at end of file diff --git a/spaces/awinml/dl-optimizers/app.py b/spaces/awinml/dl-optimizers/app.py deleted file mode 100644 index 38b8bfc94bf2f0664293003afb4e05a089909547..0000000000000000000000000000000000000000 --- a/spaces/awinml/dl-optimizers/app.py +++ /dev/null @@ -1,204 +0,0 @@ -import pandas as pd -import numpy as np -import tensorflow as tf -from keras.optimizers import SGD, Adagrad, RMSprop, Adadelta, Adam -from keras.models import Sequential -from keras.layers import Dense, Dropout -from keras.callbacks import EarlyStopping - -import matplotlib.pyplot as plt -from matplotlib.backends.backend_agg import RendererAgg -_lock = RendererAgg.lock -from sklearn import linear_model, datasets -from sklearn.model_selection import train_test_split -from sklearn.metrics import accuracy_score -from sklearn.tree import DecisionTreeClassifier - -tf.random.set_seed(0) - -import streamlit as st - -data = datasets.load_breast_cancer() -df = pd.DataFrame(data["data"], columns=data["feature_names"]) -df["target"] = data["target"] - -X = df.drop("target", axis=1) -y = df["target"] -X_train, X_test, y_train, y_test = train_test_split( - X, y, train_size=0.7, random_state=0 -) - - -def plot_loss(history): - fig, ax = plt.subplots() - ax.plot(history.history["loss"], label="loss") - ax.plot(history.history["val_loss"], label="val_loss") - ax.set_xlabel("Epoch") - ax.set_ylabel("Error") - ax.set_title("Train Loss vs Validation Loss") - ax.legend() - ax.grid(True) - return fig - - -def create_model(): - model = Sequential() - model.add( - Dense(32, kernel_initializer="normal", input_dim=30, activation="leaky_relu") - ) - model.add(Dense(16, kernel_initializer="uniform", 
activation="leaky_relu")) - model.add(Dropout(rate=0.3)) - model.add(Dense(16, kernel_initializer="uniform", activation="sigmoid")) - model.add(Dropout(rate=0.4)) - model.add(Dense(1, activation="sigmoid")) - return model - - -def fit_model(model, optmizer, X_train, X_test, y_test, batch_size=32): - model.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["accuracy"]) - callback = EarlyStopping(monitor="loss", patience=10) - history = model.fit( - X_train, - y_train, - # Setting Batch Size to number of samples for Vanilla GD - batch_size=X_train.shape[0], - validation_data=(X_test, y_test), - epochs=150, - callbacks=[callback], - verbose=0, - ) - return history - - -gd_types = [ - "Gradient Descent", - "Stochastic Gradient Descent", - "Mini-Batch Gradient Descent", - "Adagrad", - "RMSProp", - "Adam", -] - -with st.sidebar: - choice = st.selectbox("Optimizer:", options=gd_types) - - if ( - choice == "Gradient Descent" - or choice == "Stochastic Gradient Descent" - or choice == "Mini-Batch Gradient Descent" - ): - lr = st.slider( - "Learning Rate:", min_value=0.01, max_value=1.00, value=0.01, step=0.01 - ) - if choice == "Mini-Batch Gradient Descent": - batch_size = st.slider( - "Batch Size:", min_value=1, max_value=100, value=50, step=10 - ) - else: - batch_size = st.select_slider( - "Batch Size:", [1, 2, 4, 8, 16, 32, 64], disabled=True - ) - momentum = st.slider( - "Momentum Factor:", min_value=0.01, max_value=1.00, value=0.01, step=0.01 - ) - nag = st.checkbox("Nesterov Accelerated Momentum") - - elif choice == "Adagrad": - lr = st.slider( - "Learning Rate:", min_value=0.01, max_value=1.00, value=0.1, step=0.01 - ) - batch_size = st.slider( - "Batch Size:", min_value=1, max_value=100, value=50, step=10 - ) - - elif choice == "RMSProp": - lr = st.slider( - "Learning Rate:", min_value=0.01, max_value=1.00, value=0.01, step=0.01 - ) - batch_size = st.slider( - "Batch Size:", min_value=1, max_value=100, value=50, step=10 - ) - rho = st.slider( - "Exponential Decay Rate:", min_value=0.1, max_value=1.0, value=0.9, step=0.1 - ) - - elif choice == "Adam": - lr = st.slider( - "Learning Rate:", min_value=0.01, max_value=1.00, value=0.01, step=0.01 - ) - batch_size = st.slider( - "Batch Size:", min_value=1, max_value=100, value=50, step=10 - ) - beta1 = st.slider( - "Exponential Decay Rate for Moments:", - min_value=0.1, - max_value=1.0, - value=0.9, - step=0.1, - ) - beta2 = st.slider( - "Exponential Decay Rate for Variance:", - min_value=0.01, - max_value=1.00, - value=0.99, - step=0.01, - ) - -st.title("Optimizers in Deep Learning") - -st.write( - "A Neural Network has been trained on the Breast Cancer Dataset. We monitor the convergence of different optimizers during the training." 
-) - -st.subheader(choice) - -if choice == "Gradient Descent": - model = create_model() - optimizer = SGD(learning_rate=lr, momentum=momentum, nesterov=nag) - history = fit_model( - model, optimizer, X_train, X_test, y_test, batch_size=X_train.shape[0] - ) - st.pyplot(plot_loss(history)) - -elif choice == "Stochastic Gradient Descent": - model = create_model() - optimizer = SGD(learning_rate=lr, momentum=momentum, nesterov=nag) - history = fit_model(model, optimizer, X_train, X_test, y_test, batch_size=1) - st.pyplot(plot_loss(history)) - -elif choice == "Mini-Batch Gradient Descent": - model = create_model() - optimizer = SGD(learning_rate=lr, momentum=momentum, nesterov=nag) - history = fit_model( - model, optimizer, X_train, X_test, y_test, batch_size=batch_size - ) - st.pyplot(plot_loss(history)) - -elif choice == "Adagrad": - model = create_model() - optimizer = Adagrad(learning_rate=lr) - history = fit_model( - model, optimizer, X_train, X_test, y_test, batch_size=batch_size - ) - st.pyplot(plot_loss(history)) - -elif choice == "RMSProp": - model = create_model() - optimizer = RMSprop(learning_rate=lr, rho=rho) - history = fit_model( - model, optimizer, X_train, X_test, y_test, batch_size=batch_size - ) - st.pyplot(plot_loss(history)) - -elif choice == "Adam": - model = create_model() - optimizer = Adam(learning_rate=lr, beta_1=beta1, beta_2=beta2) - history = fit_model( - model, optimizer, X_train, X_test, y_test, batch_size=batch_size - ) - st.pyplot(plot_loss(history)) - - -st.write("The dataset can be viewed below:") - -st.dataframe(data=df, width=1000, height=200) diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/LineCurve3.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/LineCurve3.d.ts deleted file mode 100644 index a3d975f5ab2b7008447702e0f1b6f4c5a1f79c74..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/extras/curves/LineCurve3.d.ts +++ /dev/null @@ -1,11 +0,0 @@ -import { Vector3 } from './../../math/Vector3'; -import { Curve } from './../core/Curve'; - -export class LineCurve3 extends Curve { - constructor(v1: Vector3, v2: Vector3); - - v1: Vector3; - v2: Vector3; - - getPoint(t: number): Vector3; -} diff --git a/spaces/banana-projects/web3d/supervisor.sh b/spaces/banana-projects/web3d/supervisor.sh deleted file mode 100644 index f948c47551919dd05a7a625ba7af378122ecc0f8..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/supervisor.sh +++ /dev/null @@ -1,5 +0,0 @@ -#!/bin/bash - -./post-compile.sh -fswatch -o dist/*.js | xargs -n 1 './post-compile.sh' & -php -S localhost:8000 diff --git a/spaces/bguberfain/Detic/detic/modeling/meta_arch/d2_deformable_detr.py b/spaces/bguberfain/Detic/detic/modeling/meta_arch/d2_deformable_detr.py deleted file mode 100644 index 47ff220fc3946d1bf68fad87076589e46b274ef3..0000000000000000000000000000000000000000 --- a/spaces/bguberfain/Detic/detic/modeling/meta_arch/d2_deformable_detr.py +++ /dev/null @@ -1,308 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import torch -import torch.nn.functional as F -from torch import nn -import math - -from detectron2.modeling import META_ARCH_REGISTRY, build_backbone -from detectron2.structures import Boxes, Instances -from ..utils import load_class_freq, get_fed_loss_inds - -from models.backbone import Joiner -from models.deformable_detr import DeformableDETR, SetCriterion, MLP -from models.deformable_detr import _get_clones -from models.matcher import HungarianMatcher -from models.position_encoding import PositionEmbeddingSine -from models.deformable_transformer import DeformableTransformer -from models.segmentation import sigmoid_focal_loss -from util.box_ops import box_cxcywh_to_xyxy, box_xyxy_to_cxcywh -from util.misc import NestedTensor, accuracy - - -__all__ = ["DeformableDetr"] - -class CustomSetCriterion(SetCriterion): - def __init__(self, num_classes, matcher, weight_dict, losses, \ - focal_alpha=0.25, use_fed_loss=False): - super().__init__(num_classes, matcher, weight_dict, losses, focal_alpha) - self.use_fed_loss = use_fed_loss - if self.use_fed_loss: - self.register_buffer( - 'fed_loss_weight', load_class_freq(freq_weight=0.5)) - - def loss_labels(self, outputs, targets, indices, num_boxes, log=True): - """Classification loss (NLL) - targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes] - """ - assert 'pred_logits' in outputs - src_logits = outputs['pred_logits'] - - idx = self._get_src_permutation_idx(indices) - target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)]) - target_classes = torch.full(src_logits.shape[:2], self.num_classes, - dtype=torch.int64, device=src_logits.device) - target_classes[idx] = target_classes_o - - target_classes_onehot = torch.zeros( - [src_logits.shape[0], src_logits.shape[1], src_logits.shape[2] + 1], - dtype=src_logits.dtype, layout=src_logits.layout, - device=src_logits.device) - target_classes_onehot.scatter_(2, target_classes.unsqueeze(-1), 1) - - target_classes_onehot = target_classes_onehot[:,:,:-1] # B x N x C - if self.use_fed_loss: - inds = get_fed_loss_inds( - gt_classes=target_classes_o, - num_sample_cats=50, - weight=self.fed_loss_weight, - C=target_classes_onehot.shape[2]) - loss_ce = sigmoid_focal_loss( - src_logits[:, :, inds], - target_classes_onehot[:, :, inds], - num_boxes, - alpha=self.focal_alpha, - gamma=2) * src_logits.shape[1] - else: - loss_ce = sigmoid_focal_loss( - src_logits, target_classes_onehot, num_boxes, - alpha=self.focal_alpha, - gamma=2) * src_logits.shape[1] - losses = {'loss_ce': loss_ce} - - if log: - # TODO this should probably be a separate loss, not hacked in this one here - losses['class_error'] = 100 - accuracy(src_logits[idx], target_classes_o)[0] - return losses - - -class MaskedBackbone(nn.Module): - """ This is a thin wrapper around D2's backbone to provide padding masking""" - - def __init__(self, cfg): - super().__init__() - self.backbone = build_backbone(cfg) - backbone_shape = self.backbone.output_shape() - self.feature_strides = [backbone_shape[f].stride for f in backbone_shape.keys()] - self.strides = [backbone_shape[f].stride for f in backbone_shape.keys()] - self.num_channels = [backbone_shape[x].channels for x in backbone_shape.keys()] - - def forward(self, tensor_list: NestedTensor): - xs = self.backbone(tensor_list.tensors) - out = {} - for name, x in xs.items(): - m = tensor_list.mask - assert m is not None - mask = F.interpolate(m[None].float(), size=x.shape[-2:]).to(torch.bool)[0] - out[name] = NestedTensor(x, mask) - return out - 
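-# Detectron2 meta-architecture wrapping Deformable DETR: __init__ assembles the
-# backbone, transformer and Hungarian-matching criterion from the config, and
-# forward() returns the weighted loss dict in training or post-processed
-# Instances at inference time.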
-@META_ARCH_REGISTRY.register() -class DeformableDetr(nn.Module): - """ - Implement Deformable Detr - """ - - def __init__(self, cfg): - super().__init__() - self.with_image_labels = cfg.WITH_IMAGE_LABELS - self.weak_weight = cfg.MODEL.DETR.WEAK_WEIGHT - - self.device = torch.device(cfg.MODEL.DEVICE) - self.test_topk = cfg.TEST.DETECTIONS_PER_IMAGE - self.num_classes = cfg.MODEL.DETR.NUM_CLASSES - self.mask_on = cfg.MODEL.MASK_ON - hidden_dim = cfg.MODEL.DETR.HIDDEN_DIM - num_queries = cfg.MODEL.DETR.NUM_OBJECT_QUERIES - - # Transformer parameters: - nheads = cfg.MODEL.DETR.NHEADS - dropout = cfg.MODEL.DETR.DROPOUT - dim_feedforward = cfg.MODEL.DETR.DIM_FEEDFORWARD - enc_layers = cfg.MODEL.DETR.ENC_LAYERS - dec_layers = cfg.MODEL.DETR.DEC_LAYERS - num_feature_levels = cfg.MODEL.DETR.NUM_FEATURE_LEVELS - two_stage = cfg.MODEL.DETR.TWO_STAGE - with_box_refine = cfg.MODEL.DETR.WITH_BOX_REFINE - - # Loss parameters: - giou_weight = cfg.MODEL.DETR.GIOU_WEIGHT - l1_weight = cfg.MODEL.DETR.L1_WEIGHT - deep_supervision = cfg.MODEL.DETR.DEEP_SUPERVISION - cls_weight = cfg.MODEL.DETR.CLS_WEIGHT - focal_alpha = cfg.MODEL.DETR.FOCAL_ALPHA - - N_steps = hidden_dim // 2 - d2_backbone = MaskedBackbone(cfg) - backbone = Joiner(d2_backbone, PositionEmbeddingSine(N_steps, normalize=True)) - - transformer = DeformableTransformer( - d_model=hidden_dim, - nhead=nheads, - num_encoder_layers=enc_layers, - num_decoder_layers=dec_layers, - dim_feedforward=dim_feedforward, - dropout=dropout, - activation="relu", - return_intermediate_dec=True, - num_feature_levels=num_feature_levels, - dec_n_points=4, - enc_n_points=4, - two_stage=two_stage, - two_stage_num_proposals=num_queries) - - self.detr = DeformableDETR( - backbone, transformer, num_classes=self.num_classes, - num_queries=num_queries, - num_feature_levels=num_feature_levels, - aux_loss=deep_supervision, - with_box_refine=with_box_refine, - two_stage=two_stage, - ) - - if self.mask_on: - assert 0, 'Mask is not supported yet :(' - - matcher = HungarianMatcher( - cost_class=cls_weight, cost_bbox=l1_weight, cost_giou=giou_weight) - weight_dict = {"loss_ce": cls_weight, "loss_bbox": l1_weight} - weight_dict["loss_giou"] = giou_weight - if deep_supervision: - aux_weight_dict = {} - for i in range(dec_layers - 1): - aux_weight_dict.update({k + f"_{i}": v for k, v in weight_dict.items()}) - weight_dict.update(aux_weight_dict) - print('weight_dict', weight_dict) - losses = ["labels", "boxes", "cardinality"] - if self.mask_on: - losses += ["masks"] - self.criterion = CustomSetCriterion( - self.num_classes, matcher=matcher, weight_dict=weight_dict, - focal_alpha=focal_alpha, - losses=losses, - use_fed_loss=cfg.MODEL.DETR.USE_FED_LOSS - ) - pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to(self.device).view(3, 1, 1) - pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).to(self.device).view(3, 1, 1) - self.normalizer = lambda x: (x - pixel_mean) / pixel_std - - - def forward(self, batched_inputs): - """ - Args: - Returns: - dict[str: Tensor]: - mapping from a named loss to a tensor storing the loss. Used during training only. 
- """ - images = self.preprocess_image(batched_inputs) - output = self.detr(images) - if self.training: - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - targets = self.prepare_targets(gt_instances) - loss_dict = self.criterion(output, targets) - weight_dict = self.criterion.weight_dict - for k in loss_dict.keys(): - if k in weight_dict: - loss_dict[k] *= weight_dict[k] - if self.with_image_labels: - if batched_inputs[0]['ann_type'] in ['image', 'captiontag']: - loss_dict['loss_image'] = self.weak_weight * self._weak_loss( - output, batched_inputs) - else: - loss_dict['loss_image'] = images[0].new_zeros( - [1], dtype=torch.float32)[0] - # import pdb; pdb.set_trace() - return loss_dict - else: - image_sizes = output["pred_boxes"].new_tensor( - [(t["height"], t["width"]) for t in batched_inputs]) - results = self.post_process(output, image_sizes) - return results - - - def prepare_targets(self, targets): - new_targets = [] - for targets_per_image in targets: - h, w = targets_per_image.image_size - image_size_xyxy = torch.as_tensor([w, h, w, h], dtype=torch.float, device=self.device) - gt_classes = targets_per_image.gt_classes - gt_boxes = targets_per_image.gt_boxes.tensor / image_size_xyxy - gt_boxes = box_xyxy_to_cxcywh(gt_boxes) - new_targets.append({"labels": gt_classes, "boxes": gt_boxes}) - if self.mask_on and hasattr(targets_per_image, 'gt_masks'): - assert 0, 'Mask is not supported yet :(' - gt_masks = targets_per_image.gt_masks - gt_masks = convert_coco_poly_to_mask(gt_masks.polygons, h, w) - new_targets[-1].update({'masks': gt_masks}) - return new_targets - - - def post_process(self, outputs, target_sizes): - """ - """ - out_logits, out_bbox = outputs['pred_logits'], outputs['pred_boxes'] - assert len(out_logits) == len(target_sizes) - assert target_sizes.shape[1] == 2 - - prob = out_logits.sigmoid() - topk_values, topk_indexes = torch.topk( - prob.view(out_logits.shape[0], -1), self.test_topk, dim=1) - scores = topk_values - topk_boxes = topk_indexes // out_logits.shape[2] - labels = topk_indexes % out_logits.shape[2] - boxes = box_cxcywh_to_xyxy(out_bbox) - boxes = torch.gather(boxes, 1, topk_boxes.unsqueeze(-1).repeat(1,1,4)) - - # and from relative [0, 1] to absolute [0, height] coordinates - img_h, img_w = target_sizes.unbind(1) - scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1) - boxes = boxes * scale_fct[:, None, :] - - results = [] - for s, l, b, size in zip(scores, labels, boxes, target_sizes): - r = Instances((size[0], size[1])) - r.pred_boxes = Boxes(b) - r.scores = s - r.pred_classes = l - results.append({'instances': r}) - return results - - - def preprocess_image(self, batched_inputs): - """ - Normalize, pad and batch the input images. 
- """ - images = [self.normalizer(x["image"].to(self.device)) for x in batched_inputs] - return images - - - def _weak_loss(self, outputs, batched_inputs): - loss = 0 - for b, x in enumerate(batched_inputs): - labels = x['pos_category_ids'] - pred_logits = [outputs['pred_logits'][b]] - pred_boxes = [outputs['pred_boxes'][b]] - for xx in outputs['aux_outputs']: - pred_logits.append(xx['pred_logits'][b]) - pred_boxes.append(xx['pred_boxes'][b]) - pred_logits = torch.stack(pred_logits, dim=0) # L x N x C - pred_boxes = torch.stack(pred_boxes, dim=0) # L x N x 4 - for label in labels: - loss += self._max_size_loss( - pred_logits, pred_boxes, label) / len(labels) - loss = loss / len(batched_inputs) - return loss - - - def _max_size_loss(self, logits, boxes, label): - ''' - Inputs: - logits: L x N x C - boxes: L x N x 4 - ''' - target = logits.new_zeros((logits.shape[0], logits.shape[2])) - target[:, label] = 1. - sizes = boxes[..., 2] * boxes[..., 3] # L x N - ind = sizes.argmax(dim=1) # L - loss = F.binary_cross_entropy_with_logits( - logits[range(len(ind)), ind], target, reduction='sum') - return loss \ No newline at end of file diff --git a/spaces/bhavyagiri/retrieving-memes/app.py b/spaces/bhavyagiri/retrieving-memes/app.py deleted file mode 100644 index 82429755c3c5201e316d228c7b44a9012a11bd16..0000000000000000000000000000000000000000 --- a/spaces/bhavyagiri/retrieving-memes/app.py +++ /dev/null @@ -1,40 +0,0 @@ -from sentence_transformers import SentenceTransformer, util -from huggingface_hub import hf_hub_download -import os -import pickle -import pandas as pd -from PIL import Image -import requests -from io import BytesIO -import gradio as gr - -pd.options.mode.chained_assignment = None # Turn off SettingWithCopyWarning - -embeddings = pickle.load(open(hf_hub_download("bhavyagiri/semantic-memes", repo_type="dataset", filename="meme-embeddings.pkl"), "rb")) -df = pd.read_csv(hf_hub_download("bhavyagiri/semantic-memes", repo_type="dataset", filename="input.csv")) - -model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2') - -def generate_memes(prompt): - prompt_embedding = model.encode(prompt, convert_to_tensor=True) - hits = util.semantic_search(prompt_embedding, embeddings, top_k=6) - hits = pd.DataFrame(hits[0], columns=['corpus_id', 'score']) - desired_ids = hits["corpus_id"] - filtered_df = df.loc[df['id'].isin(desired_ids)] - filtered_list = list(filtered_df["url"]) - images = [Image.open(BytesIO(requests.get(img).content)) for img in filtered_list] - return ( - images - ) -input_textbox = gr.inputs.Textbox(lines=1, label="Search something cool") -output_gallery = gr.Gallery( - label="Retrieved Memes", show_label=False, elem_id="gallery" - ).style(columns=[3], rows=[2], object_fit="contain", height="auto") -title = "Semantic Search for Memes" -description = "Search Memes from small dataset of 6k memes. 
Check out [GitHub Repo](https://github.com/bhavya-giri/retrieving-memes)" -examples = ["Get Shreked","Going Crazy","Spiderman is my teacher"] -interpretation='default' -enable_queue=True - -iface = gr.Interface(fn=generate_memes, inputs=input_textbox, outputs=output_gallery,examples=examples,cache_examples=True,title=title,description=description,interpretation=interpretation,enable_queue=enable_queue) -iface.launch(inline=False) diff --git a/spaces/bigjoker/stable-diffusion-webui/javascript/progressbar.js b/spaces/bigjoker/stable-diffusion-webui/javascript/progressbar.js deleted file mode 100644 index ff6d757bae88f5f622767376e5315b9acf8271cd..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/javascript/progressbar.js +++ /dev/null @@ -1,243 +0,0 @@ -// code related to showing and updating progressbar shown as the image is being made - - -galleries = {} -storedGallerySelections = {} -galleryObservers = {} - -function rememberGallerySelection(id_gallery){ - storedGallerySelections[id_gallery] = getGallerySelectedIndex(id_gallery) -} - -function getGallerySelectedIndex(id_gallery){ - let galleryButtons = gradioApp().querySelectorAll('#'+id_gallery+' .gallery-item') - let galleryBtnSelected = gradioApp().querySelector('#'+id_gallery+' .gallery-item.\\!ring-2') - - let currentlySelectedIndex = -1 - galleryButtons.forEach(function(v, i){ if(v==galleryBtnSelected) { currentlySelectedIndex = i } }) - - return currentlySelectedIndex -} - -// this is a workaround for https://github.com/gradio-app/gradio/issues/2984 -function check_gallery(id_gallery){ - let gallery = gradioApp().getElementById(id_gallery) - // if gallery has no change, no need to setting up observer again. - if (gallery && galleries[id_gallery] !== gallery){ - galleries[id_gallery] = gallery; - if(galleryObservers[id_gallery]){ - galleryObservers[id_gallery].disconnect(); - } - - storedGallerySelections[id_gallery] = -1 - - galleryObservers[id_gallery] = new MutationObserver(function (){ - let galleryButtons = gradioApp().querySelectorAll('#'+id_gallery+' .gallery-item') - let galleryBtnSelected = gradioApp().querySelector('#'+id_gallery+' .gallery-item.\\!ring-2') - let currentlySelectedIndex = getGallerySelectedIndex(id_gallery) - prevSelectedIndex = storedGallerySelections[id_gallery] - storedGallerySelections[id_gallery] = -1 - - if (prevSelectedIndex !== -1 && galleryButtons.length>prevSelectedIndex && !galleryBtnSelected) { - // automatically re-open previously selected index (if exists) - activeElement = gradioApp().activeElement; - let scrollX = window.scrollX; - let scrollY = window.scrollY; - - galleryButtons[prevSelectedIndex].click(); - showGalleryImage(); - - // When the gallery button is clicked, it gains focus and scrolls itself into view - // We need to scroll back to the previous position - setTimeout(function (){ - window.scrollTo(scrollX, scrollY); - }, 50); - - if(activeElement){ - // i fought this for about an hour; i don't know why the focus is lost or why this helps recover it - // if someone has a better solution please by all means - setTimeout(function (){ - activeElement.focus({ - preventScroll: true // Refocus the element that was focused before the gallery was opened without scrolling to it - }) - }, 1); - } - } - }) - galleryObservers[id_gallery].observe( gallery, { childList:true, subtree:false }) - } -} - -onUiUpdate(function(){ - check_gallery('txt2img_gallery') - check_gallery('img2img_gallery') -}) - -function request(url, data, handler, errorHandler){ - var xhr 
= new XMLHttpRequest(); - var url = url; - xhr.open("POST", url, true); - xhr.setRequestHeader("Content-Type", "application/json"); - xhr.onreadystatechange = function () { - if (xhr.readyState === 4) { - if (xhr.status === 200) { - try { - var js = JSON.parse(xhr.responseText); - handler(js) - } catch (error) { - console.error(error); - errorHandler() - } - } else{ - errorHandler() - } - } - }; - var js = JSON.stringify(data); - xhr.send(js); -} - -function pad2(x){ - return x<10 ? '0'+x : x -} - -function formatTime(secs){ - if(secs > 3600){ - return pad2(Math.floor(secs/60/60)) + ":" + pad2(Math.floor(secs/60)%60) + ":" + pad2(Math.floor(secs)%60) - } else if(secs > 60){ - return pad2(Math.floor(secs/60)) + ":" + pad2(Math.floor(secs)%60) - } else{ - return Math.floor(secs) + "s" - } -} - -function setTitle(progress){ - var title = 'Stable Diffusion' - - if(opts.show_progress_in_title && progress){ - title = '[' + progress.trim() + '] ' + title; - } - - if(document.title != title){ - document.title = title; - } -} - - -function randomId(){ - return "task(" + Math.random().toString(36).slice(2, 7) + Math.random().toString(36).slice(2, 7) + Math.random().toString(36).slice(2, 7)+")" -} - -// starts sending progress requests to "/internal/progress" uri, creating progressbar above progressbarContainer element and -// preview inside gallery element. Cleans up all created stuff when the task is over and calls atEnd. -// calls onProgress every time there is a progress update -function requestProgress(id_task, progressbarContainer, gallery, atEnd, onProgress){ - var dateStart = new Date() - var wasEverActive = false - var parentProgressbar = progressbarContainer.parentNode - var parentGallery = gallery ? gallery.parentNode : null - - var divProgress = document.createElement('div') - divProgress.className='progressDiv' - divProgress.style.display = opts.show_progressbar ? "" : "none" - var divInner = document.createElement('div') - divInner.className='progress' - - divProgress.appendChild(divInner) - parentProgressbar.insertBefore(divProgress, progressbarContainer) - - if(parentGallery){ - var livePreview = document.createElement('div') - livePreview.className='livePreview' - parentGallery.insertBefore(livePreview, gallery) - } - - var removeProgressBar = function(){ - setTitle("") - parentProgressbar.removeChild(divProgress) - if(parentGallery) parentGallery.removeChild(livePreview) - atEnd() - } - - var fun = function(id_task, id_live_preview){ - request("./internal/progress", {"id_task": id_task, "id_live_preview": id_live_preview}, function(res){ - if(res.completed){ - removeProgressBar() - return - } - - var rect = progressbarContainer.getBoundingClientRect() - - if(rect.width){ - divProgress.style.width = rect.width + "px"; - } - - progressText = "" - - divInner.style.width = ((res.progress || 0) * 100.0) + '%' - divInner.style.background = res.progress ? "" : "transparent" - - if(res.progress > 0){ - progressText = ((res.progress || 0) * 100.0).toFixed(0) + '%' - } - - if(res.eta){ - progressText += " ETA: " + formatTime(res.eta) - } - - - setTitle(progressText) - - if(res.textinfo && res.textinfo.indexOf("\n") == -1){ - progressText = res.textinfo + " " + progressText - } - - divInner.textContent = progressText - - var elapsedFromStart = (new Date() - dateStart) / 1000 - - if(res.active) wasEverActive = true; - - if(! 
res.active && wasEverActive){ - removeProgressBar() - return - } - - if(elapsedFromStart > 5 && !res.queued && !res.active){ - removeProgressBar() - return - } - - - if(res.live_preview && gallery){ - var rect = gallery.getBoundingClientRect() - if(rect.width){ - livePreview.style.width = rect.width + "px" - livePreview.style.height = rect.height + "px" - } - - var img = new Image(); - img.onload = function() { - livePreview.appendChild(img) - if(livePreview.childElementCount > 2){ - livePreview.removeChild(livePreview.firstElementChild) - } - } - img.src = res.live_preview; - } - - - if(onProgress){ - onProgress(res) - } - - setTimeout(() => { - fun(id_task, res.id_live_preview); - }, opts.live_preview_refresh_period || 500) - }, function(){ - removeProgressBar() - }) - } - - fun(id_task, 0) -} diff --git a/spaces/bingbing520/ChatGPT2/run_macOS.command b/spaces/bingbing520/ChatGPT2/run_macOS.command deleted file mode 100644 index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000 --- a/spaces/bingbing520/ChatGPT2/run_macOS.command +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -# 获取脚本所在目录 -script_dir=$(dirname "$(readlink -f "$0")") - -# 将工作目录更改为脚本所在目录 -cd "$script_dir" || exit - -# 检查Git仓库是否有更新 -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # 如果有更新,关闭当前运行的服务器 - pkill -f ChuanhuChatbot.py - - # 拉取最新更改 - git pull - - # 安装依赖 - pip3 install -r requirements.txt - - # 重新启动服务器 - nohup python3 ChuanhuChatbot.py & -fi - -# 检查ChuanhuChatbot.py是否在运行 -if ! pgrep -f ChuanhuChatbot.py > /dev/null; then - # 如果没有运行,启动服务器 - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/bioriAsaeru/text-to-voice/3gp Car Mms Com ((FREE)).md b/spaces/bioriAsaeru/text-to-voice/3gp Car Mms Com ((FREE)).md deleted file mode 100644 index 1e4bda9bb4ed8c01c122dbd18f37340505434253..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/3gp Car Mms Com ((FREE)).md +++ /dev/null @@ -1,24 +0,0 @@ - -

        Ragini mms Hot Scene Showing Boobs Karishma Sharma Part 2 - Fancy watching Indian girls naked? Here at Doodhwali Indian sex videos you can find all the FREE Indian sex videos in HD and Ultra HD and the hottest pictures of real Indians

        -

        Ragini mms Hot Scene Showing Boobs Karishma Sharma Part 3 - Fancy watching Indian girls naked? Here at Doodhwali Indian sex videos you can find all the FREE Indian sex videos in HD and Ultra HD and the hottest pictures of real Indians

        -

        3gp car mms com


        Download File: https://urloso.com/2uyOTC



        -

        Ragini mms Hot Scene Showing Boobs Karishma Sharma - Fancy watching Indian girls naked? Here at Doodhwali Indian sex videos you can find all the FREE Indian sex videos in HD and Ultra HD and the hottest pictures of real Indians

        -

        The Multi Media Interface (MMI) system is an in-car user interface media system developed by Audi, launched at the 2001 Frankfurt Motor Show on the Audi Avantissimo concept car.[1] Production MMI was introduced in the second-generation Audi A8 D3 in late 2002 and has been implemented in the majority of its latest series of automobiles.

        -

        The central element of the MMI terminal is the control dial. The dial can be rotated to navigate up and down through menus, and pressed to activate the highlighted function. Starting with the MMI 3G system, a joystick integrated into the main control dial can be used to, for example, navigate the map. Depending on the MMI generation and configuration, four to eight function buttons surround the control dial and are used to launch the various features. The MMI screen is available as a five-inch monochrome black-and-red or a seven-inch 16:9 full-colour display, depending on the variant of MMI fitted in the car. MMI uses Media Oriented Systems Transport (MOST) technology to interconnect the various systems. Harman Becker Automotive Systems manufactures the MMI system, using QNX Neutrino's real-time operating system (RTOS) software.

        -

        MMI operates a large number of in-car entertainment components, car electronics, and other functions. The list below indicates the scope of systems controllable by MMI. However, depending on the actual car model and on which version was specified (MMI Basic, MMI High, etc.), only some, not all, of these functions will be applicable or available.

        -

        Certain cars have a "pseudo" type of MMI. These are the Audi A3 (8P), A4 (B6 and B7), A6 (C5), TT (8J), the R8, SEAT Exeo and Lamborghini Gallardo when fitted with the RNS-E DVD-based "Audi Navigation Plus" system.[5] While similar in layout and operated in a similar manner, the two systems are very different: they are unable to share mapping discs or software, and they cannot control non-ICE functions (such as climate, convenience or suspension settings).

        -

        -

        On members of the B8 family of vehicles (the A4 (Typ 8K), A5 (Typ 8T), and Q5 (Typ 8R)) without full navigation capability, Audi does not describe this infotainment system as MMI, although an MMI-esque control dial and function keys are provided on the radio/CD head unit.

        -

        While seemingly intuitive and user-friendly, MMI can be difficult to operate while driving. Attempts have been made to improve access: the MMI 3G features a new joystick on the central knob to make it easier to, for example, input a destination using the navigation map.[18] However, the issues remain.[19] The Audi Q5's MMI infotainment control system is especially difficult to navigate, partly due to the location of its controls low down on the center console.[20]

        -

        The new MIB navigation system is the first system that allows the customer to update the vehicle's navigation system on their own. The map data is available on the myAudi website for download or available in the MMI via an OTA update. For the 2015-2016 Audi A3 with MIB1, there is no OTA option in the MMI, thus the map data can only be updated using the SD card method.

        -

        For MIB1 & MIB2 vehicles, map updates are free for the first three years after the production date of the vehicle. The vehicle is automatically activated from the factory to allow the customer to update the MMI navigation with the next five releases within the next three years. This means the customer can attempt to update the MMI with the same release as many times as they want. Release schedules for map updates are approximately Calendar Week 22 (May/June) and Calendar Week 45(October/November) of each year.[46]

        -

        When you enter indiansexmms.me, you swear that you are of legal age in your area to view the adult material and that you want to display it. All porn videos and photos are owned and copyright of their respective owners. All models were 18 years of age or older at the time of depiction. indiansexmms.me has a zero-tolerance policy against illegal pornography. indiansexmms.me uses the "Restricted To Adults" (RTA) website label to better enable parental filtering, so parents please protect your children from adult content and block access to this site by using parental control programs.
        2020 © indiansexmms.me.


        -

        Mr. Yang Liu is a seasoned executive with 15 years of experience in an array of industries including technology, finance and investment banking. Mr. Liu served as the Chairman of the board and the Chief Executive Officer of Color Star Technology Co. Ltd, a Nasdaq company, from March 2019 to July 2020. Through a series of acquisitions and dispositions, he successfully transformed the company from a traditional Chinese manufacturer into a global technology company in education and entertainment. He also served as the CEO at Wave Sync Corporation, an OTC technology company from July 2017 to August 2018.

        -

        From 2015 to 2017, Mr. Liu worked as the Murex Regional Manager at UBS, overseeing the North America regional Murex team for production support and implementation coordination. Before that, Mr. Liu served as a Senior Consultant for Murex North America, leading a global team to manage both production support and upgrades for select large institutions in North America. Murex is the third-largest software publisher in France, providing technology solutions for trading, treasury, risk, and post-trade operations for financial markets.

        -

        Mr. Liu received two M.S. degrees, in financial mathematics and electrical engineering, from New Mexico State University in the United States. He also received a B.S. degree in Electrical Engineering from Tsinghua University in China.

        -

        AIT News Desk is a trained group of web journalists and reporters who collect news from all over the technology landscape. The technical space includes advanced technologies related to AI, ML, ITops, Cloud Security, Privacy and Security, Cyberthreat intelligence, Space, Big data and Analytics, Blockchain and Crypto.
        To connect, please write to AiT Analyst at news@martechseries.com.

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Builders Of Egypt Full Crack [BEST Xforce].md b/spaces/bioriAsaeru/text-to-voice/Builders Of Egypt Full Crack [BEST Xforce].md deleted file mode 100644 index ca772f3362894b42def7f44a81ac51f45a8b7476..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Builders Of Egypt Full Crack [BEST Xforce].md +++ /dev/null @@ -1,40 +0,0 @@ -

        Builders Of Egypt full crack [Xforce]


        Download Zip ··· https://urloso.com/2uyRmc



        -
        -Nile Valley, Egypt. The building of the pyramids in Giza is one of the world's greatest engineering feats. Was the design for the pyramids a collaborative effort? Find out in our short documentary! ?.. In this Egypt expert video, you'll learn more about.. - -The pyramids of Giza in Egypt. A similar column-and-pillar construction was used on the Lighthouse of Alexandria, built c. BC 150. The technology and science behind the Giza pyramids led to the discovery of electricity.Q: - -How to get the UTC time of the current moment in.net - -I am using TimeZoneInfo.ConvertTime(DateTime.Now, TimeZoneInfo.Utc, DateTimeKind.Utc) but it gives me the time in GMT + 5 hours. - -TimeZoneInfo.ConvertTime(DateTime.Now, TimeZoneInfo.Utc, DateTimeKind.Utc) in C# - -A: - -You can use DateTime.SpecifyKind, as in this answer: - -DateTime date = DateTime.SpecifyKind(DateTime.Now, DateTimeKind.Utc); - -I know this is quite old but I had to share. - -if (DateTime.UtcNow.Kind == DateTimeKind.Utc) - - DateTime.UtcNow.ToUniversalTime(); - -And if you want to get rid of the timezone change: - -DateTime.Now is actually returning you the local time. - -TimeZoneInfo.ConvertTime(DateTime.Now, TimeZoneInfo.Utc, DateTimeKind.Utc) - -gives you the current UTC time. You can use - -instead. - -See MSDN for all available time zone types - -The 13th annual Acoustic Celebration & Fundraiser for the Juvenile Diabetes Research Foundation® (JDRF) will be held on Saturday, May 4, 2016 from 11:30 a.m. – 6 p.m. at Galleria Dallas, located 4fefd39f24
        -
        -
        -

        diff --git a/spaces/bioriAsaeru/text-to-voice/Download Garageband For Mac Os X 10.7 5 and Discover the Power of Digital Music Creation.md b/spaces/bioriAsaeru/text-to-voice/Download Garageband For Mac Os X 10.7 5 and Discover the Power of Digital Music Creation.md deleted file mode 100644 index 60f4e86ad46614b7b06e508ee581c262d4fba1bd..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Download Garageband For Mac Os X 10.7 5 and Discover the Power of Digital Music Creation.md +++ /dev/null @@ -1,23 +0,0 @@ -
        -

        You may find an installer for GarageBand '11 on the system install disks that came with your Mac, if it came with Snow Leopard installed. Macs with Mac OS X 10.7.5 Lion came without system installer disks; in that case you should find a download link for GarageBand on the Purchases tab of the App Store.

        -

        Hi, I have a vintage MacBook that can only run up to OS X 10.7.5 and I need to install GarageBand, but the App Store only provides the newest version of GarageBand, which is not compatible with my older OS. So how can I simply download an older version of GarageBand? This should not be a difficult problem to solve, yet hours of internet research have yielded no solution. This is extremely frustrating because I need to get to work on recording, but I'm not about to dish out thousands of dollars on a new Mac when an earlier version of GarageBand would work fine on the Mac I currently have.

        -

        Download Garageband For Mac Os X 10.7 5


        Download Zip: https://urloso.com/2uyPLn



        -

        I moved my files from iOS to my Mac successfully, but when I try to open them, it pops up a compatibility update and starts to download it, and after it is done, it says it cannot be installed and I have to try again, so I CAN NOT OPEN MY FILES. I have done this like ten times and it's always the same. WHAT CAN I DO?????? HELP!!!!!!!

        -

        With the release of 5.3.2, we have officially discontinued support for 32-bit plug-ins. With macOS Catalina requiring all applications to be 64-bit, and with the vast majority of our PC customers using 64-bit plug-ins, we decided it was the right time to end 32-bit support. Our most recent 32-bit installers are available to download below.

        -

        The Logic Pro 10.7.5 update also comes with two additional sound packs, including Beat Tape (Hip Hop) and Modular Rhythms (Synth Drums). This gives both new and experienced users some fresh new sounds to inspire the creative process and expand your library.

        -

        A preview of OS X 10.7 Lion was publicly shown at the "Back to the Mac" Apple Special Event on October 20, 2010. It brought many developments made in Apple's iOS, such as an easily navigable display of installed applications, to the Mac, and included support for the Mac App Store, as introduced in Mac OS X 10.6 Snow Leopard version 10.6.6.[7][8] On February 24, 2011, the first developer's preview of Lion (11A390) was released to subscribers to the Apple Developer program.[9] Other developer previews were subsequently released, with Lion Preview 4 (11A480b) being released at WWDC 2011.[10]

        -

        Although originally paid, Apple later allowed free downloads of the OS, especially for customers of older and no longer officially supported Mac computers, starting on June 30, 2021.[15] The same practice was applied to its successor, OS X Mountain Lion.

        -

        Apple did not initially announce any physical media distribution for Lion, such as a set of CD-ROMs or a DVD-ROM as used for past releases. Instead, the operating system was said to be available exclusively as a download from the Mac App Store for US$29.99.[17][18] The only prior version of OS X that supports the Mac App Store is Snow Leopard, which implied that any machines that support Lion currently running Tiger or Leopard would first have to be upgraded to Snow Leopard, as opposed to allowing a direct upgrade to Lion.

        -

        Apple later announced two alternative distribution mechanisms for the benefit of users without broadband Internet access: in-store downloads at retail Apple Stores, and a USB flash drive containing the OS, priced at US$69, available through the online Apple Store beginning in August.[2] On August 4, 2011, Apple started to take orders for OS X Lion's USB installation flash drives for $69.99.[19]

        -

        -

        This update includes Ivory 1.72.03 Audio Unit component (for all AU hosts including Ivory Standalone), VST plug-in, RTAS plug-in, updated PACE iLok drivers, and other supporting files for Mac OS X 10.4 (Tiger) through Mac OS X 10.7 (Lion).

        -

        Listed below are Ivory 1.7 DVD Installers for Macintosh OS X 10.7 (Lion). Run these to install Ivory from your Ivory 1.7 installation DVDs on systems running Lion. After all DVDs have been installed, please run the 1.72 updater found above.

        -

        Mac OS X 10.5 Leopard (Intel Mac), Mac OS X 10.5 Leopard (PPC Mac), Mac OS X 10.6 Snow Leopard, Mac OS X 10.7 Lion, Mac OS X 10.8 Mountain Lion, Mac OS X 10.9 Mavericks, Mac OS X 10.10 Yosemite, Mac OS X 10.11 El Capitan

        -

        If you believe that the downloading process was faulty, you may contact Yamaha, and Yamaha shall permit you to re-download the SOFTWARE, provided that you first destroy any copies or partial copies of the SOFTWARE that you obtained through your previous download attempt. This permission to re-download shall not limit in any manner the disclaimer of warranty set forth in Section 5 below.

        -

        I don't want to use ProTools; I just want to use the MBox as a simple USB audio interface for Audacity/Garageband. The last standalone driver Digidesign produced was for Mac OS X 10.5 Leopard. Is there any way I can get this device working with OS X 10.7 Lion?

        -

        You first need to purchase or install any version of GarageBand '11. Then you can update it to GarageBand 6.0.5 from the downloads page. The update download is here: GarageBand 6.0.5 Download.

        -

        If it's any encouragement, Reaper is much loved by its user base. It's a small download, it supports all common file formats at whatever quality you need, and it supports the use of free VST plugins, putting thousands of instruments and effects at your fingertips.

        -

        What Reaper lacks compared to similar commercial products is a sound library. The internet is packed with thousands of free downloadable samples you can use to build your own, though, so it likely will not be a dealbreaker for you.

        -

        System Requirements: OS X 10.6 or later is required. Audacity runs best with at least 1 GB of RAM and a 1 GHz processor (2 GB RAM/2 GHz on OS X 10.7 and later). For lengthy multi-track projects, a minimum of 2 GB of RAM and a 2 GHz processor (4 GB RAM on OS X 10.7 and later) is required.

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/breynolds1247/StarryNight_StyleTransfer/README.md b/spaces/breynolds1247/StarryNight_StyleTransfer/README.md deleted file mode 100644 index cfc583ef31887f962d045c529ace523ac6da81c5..0000000000000000000000000000000000000000 --- a/spaces/breynolds1247/StarryNight_StyleTransfer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: StarryNight_StyleTransfer -emoji: 🐠 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.0.9 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/brjathu/HMR2.0/hmr2/models/smpl_wrapper.py b/spaces/brjathu/HMR2.0/hmr2/models/smpl_wrapper.py deleted file mode 100644 index 4f9845405abd459632e19a8ad2ec2f12a3521c00..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/hmr2/models/smpl_wrapper.py +++ /dev/null @@ -1,41 +0,0 @@ -import torch -import numpy as np -import pickle -from typing import Optional -import smplx -from smplx.lbs import vertices2joints -from smplx.utils import SMPLOutput - - -class SMPL(smplx.SMPLLayer): - def __init__(self, *args, joint_regressor_extra: Optional[str] = None, update_hips: bool = False, **kwargs): - """ - Extension of the official SMPL implementation to support more joints. - Args: - Same as SMPLLayer. - joint_regressor_extra (str): Path to extra joint regressor. - """ - super(SMPL, self).__init__(*args, **kwargs) - smpl_to_openpose = [24, 12, 17, 19, 21, 16, 18, 20, 0, 2, 5, 8, 1, 4, - 7, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34] - - if joint_regressor_extra is not None: - self.register_buffer('joint_regressor_extra', torch.tensor(pickle.load(open(joint_regressor_extra, 'rb'), encoding='latin1'), dtype=torch.float32)) - self.register_buffer('joint_map', torch.tensor(smpl_to_openpose, dtype=torch.long)) - self.update_hips = update_hips - - def forward(self, *args, **kwargs) -> SMPLOutput: - """ - Run forward pass. Same as SMPL and also append an extra set of joints if joint_regressor_extra is specified. - """ - smpl_output = super(SMPL, self).forward(*args, **kwargs) - joints = smpl_output.joints[:, self.joint_map, :] - if self.update_hips: - joints[:,[9,12]] = joints[:,[9,12]] + \ - 0.25*(joints[:,[9,12]]-joints[:,[12,9]]) + \ - 0.5*(joints[:,[8]] - 0.5*(joints[:,[9,12]] + joints[:,[12,9]])) - if hasattr(self, 'joint_regressor_extra'): - extra_joints = vertices2joints(self.joint_regressor_extra, smpl_output.vertices) - joints = torch.cat([joints, extra_joints], dim=1) - smpl_output.joints = joints - return smpl_output diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/converters/base.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/converters/base.py deleted file mode 100644 index c9dbe56cecff6dbbc1a1fda5a89c5f917513dcd8..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/converters/base.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from typing import Any, Tuple, Type -import torch - - -class BaseConverter: - """ - Converter base class to be reused by various converters. - Converter allows one to convert data from various source types to a particular - destination type. Each source type needs to register its converter. The - registration for each source type is valid for all descendants of that type. 
- """ - - @classmethod - def register(cls, from_type: Type, converter: Any = None): - """ - Registers a converter for the specified type. - Can be used as a decorator (if converter is None), or called as a method. - - Args: - from_type (type): type to register the converter for; - all instances of this type will use the same converter - converter (callable): converter to be registered for the given - type; if None, this method is assumed to be a decorator for the converter - """ - - if converter is not None: - cls._do_register(from_type, converter) - - def wrapper(converter: Any) -> Any: - cls._do_register(from_type, converter) - return converter - - return wrapper - - @classmethod - def _do_register(cls, from_type: Type, converter: Any): - cls.registry[from_type] = converter # pyre-ignore[16] - - @classmethod - def _lookup_converter(cls, from_type: Type) -> Any: - """ - Perform recursive lookup for the given type - to find registered converter. If a converter was found for some base - class, it gets registered for this class to save on further lookups. - - Args: - from_type: type for which to find a converter - Return: - callable or None - registered converter or None - if no suitable entry was found in the registry - """ - if from_type in cls.registry: # pyre-ignore[16] - return cls.registry[from_type] - for base in from_type.__bases__: - converter = cls._lookup_converter(base) - if converter is not None: - cls._do_register(from_type, converter) - return converter - return None - - @classmethod - def convert(cls, instance: Any, *args, **kwargs): - """ - Convert an instance to the destination type using some registered - converter. Does recursive lookup for base classes, so there's no need - for explicit registration for derived classes. - - Args: - instance: source instance to convert to the destination type - Return: - An instance of the destination type obtained from the source instance - Raises KeyError, if no suitable converter found - """ - instance_type = type(instance) - converter = cls._lookup_converter(instance_type) - if converter is None: - if cls.dst_type is None: # pyre-ignore[16] - output_type_str = "itself" - else: - output_type_str = cls.dst_type - raise KeyError(f"Could not find converter from {instance_type} to {output_type_str}") - return converter(instance, *args, **kwargs) - - -IntTupleBox = Tuple[int, int, int, int] - - -def make_int_box(box: torch.Tensor) -> IntTupleBox: - int_box = [0, 0, 0, 0] - int_box[0], int_box[1], int_box[2], int_box[3] = tuple(box.long().tolist()) - return int_box[0], int_box[1], int_box[2], int_box[3] diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/losses/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/losses/__init__.py deleted file mode 100644 index e5c593700e7274ea9cbaf8f4a52e8a229ef4c5a1..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/losses/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -from .chart import DensePoseChartLoss -from .chart_with_confidences import DensePoseChartWithConfidenceLoss -from .cse import DensePoseCseLoss -from .registry import DENSEPOSE_LOSS_REGISTRY - - -__all__ = [ - "DensePoseChartLoss", - "DensePoseChartWithConfidenceLoss", - "DensePoseCseLoss", - "DENSEPOSE_LOSS_REGISTRY", -] diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/config/test_yacs_config.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/config/test_yacs_config.py deleted file mode 100644 index 01dd6955f78e2700ffc10ed723ab1c95df0e5a18..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/config/test_yacs_config.py +++ /dev/null @@ -1,270 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. - - -import os -import tempfile -import unittest -import torch -from omegaconf import OmegaConf - -from detectron2 import model_zoo -from detectron2.config import configurable, downgrade_config, get_cfg, upgrade_config -from detectron2.layers import ShapeSpec -from detectron2.modeling import build_model - -_V0_CFG = """ -MODEL: - RPN_HEAD: - NAME: "TEST" -VERSION: 0 -""" - -_V1_CFG = """ -MODEL: - WEIGHT: "/path/to/weight" -""" - - -class TestConfigVersioning(unittest.TestCase): - def test_upgrade_downgrade_consistency(self): - cfg = get_cfg() - # check that custom is preserved - cfg.USER_CUSTOM = 1 - - down = downgrade_config(cfg, to_version=0) - up = upgrade_config(down) - self.assertTrue(up == cfg) - - def _merge_cfg_str(self, cfg, merge_str): - f = tempfile.NamedTemporaryFile(mode="w", suffix=".yaml", delete=False) - try: - f.write(merge_str) - f.close() - cfg.merge_from_file(f.name) - finally: - os.remove(f.name) - return cfg - - def test_auto_upgrade(self): - cfg = get_cfg() - latest_ver = cfg.VERSION - cfg.USER_CUSTOM = 1 - - self._merge_cfg_str(cfg, _V0_CFG) - - self.assertEqual(cfg.MODEL.RPN.HEAD_NAME, "TEST") - self.assertEqual(cfg.VERSION, latest_ver) - - def test_guess_v1(self): - cfg = get_cfg() - latest_ver = cfg.VERSION - self._merge_cfg_str(cfg, _V1_CFG) - self.assertEqual(cfg.VERSION, latest_ver) - - -class _TestClassA(torch.nn.Module): - @configurable - def __init__(self, arg1, arg2, arg3=3): - super().__init__() - self.arg1 = arg1 - self.arg2 = arg2 - self.arg3 = arg3 - assert arg1 == 1 - assert arg2 == 2 - assert arg3 == 3 - - @classmethod - def from_config(cls, cfg): - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - return args - - -class _TestClassB(_TestClassA): - @configurable - def __init__(self, input_shape, arg1, arg2, arg3=3): - """ - Doc of _TestClassB - """ - assert input_shape == "shape" - super().__init__(arg1, arg2, arg3) - - @classmethod - def from_config(cls, cfg, input_shape): # test extra positional arg in from_config - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - args["input_shape"] = input_shape - return args - - -class _LegacySubClass(_TestClassB): - # an old subclass written in cfg style - def __init__(self, cfg, input_shape, arg4=4): - super().__init__(cfg, input_shape) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _NewSubClassNewInit(_TestClassB): - # test new subclass with a new __init__ - @configurable - def __init__(self, input_shape, arg4=4, **kwargs): - super().__init__(input_shape, **kwargs) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _LegacySubClassNotCfg(_TestClassB): - # an old subclass written in cfg style, but argument is not called "cfg" - def __init__(self, config, input_shape): - 
super().__init__(config, input_shape) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _TestClassC(_TestClassB): - @classmethod - def from_config(cls, cfg, input_shape, **kwargs): # test extra kwarg overwrite - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - args["input_shape"] = input_shape - args.update(kwargs) - return args - - -class _TestClassD(_TestClassA): - @configurable - def __init__(self, input_shape: ShapeSpec, arg1: int, arg2, arg3=3): - assert input_shape == "shape" - super().__init__(arg1, arg2, arg3) - - # _TestClassA.from_config does not have input_shape args. - # Test whether input_shape will be forwarded to __init__ - - -@configurable(from_config=lambda cfg, arg2: {"arg1": cfg.ARG1, "arg2": arg2, "arg3": cfg.ARG3}) -def _test_func(arg1, arg2=2, arg3=3, arg4=4): - return arg1, arg2, arg3, arg4 - - -class TestConfigurable(unittest.TestCase): - def testInitWithArgs(self): - _ = _TestClassA(arg1=1, arg2=2, arg3=3) - _ = _TestClassB("shape", arg1=1, arg2=2) - _ = _TestClassC("shape", arg1=1, arg2=2) - _ = _TestClassD("shape", arg1=1, arg2=2, arg3=3) - - def testPatchedAttr(self): - self.assertTrue("Doc" in _TestClassB.__init__.__doc__) - self.assertEqual(_TestClassD.__init__.__annotations__["arg1"], int) - - def testInitWithCfg(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 2 - cfg.ARG3 = 3 - _ = _TestClassA(cfg) - _ = _TestClassB(cfg, input_shape="shape") - _ = _TestClassC(cfg, input_shape="shape") - _ = _TestClassD(cfg, input_shape="shape") - _ = _LegacySubClass(cfg, input_shape="shape") - _ = _NewSubClassNewInit(cfg, input_shape="shape") - _ = _LegacySubClassNotCfg(cfg, input_shape="shape") - with self.assertRaises(TypeError): - # disallow forwarding positional args to __init__ since it's prone to errors - _ = _TestClassD(cfg, "shape") - - # call with kwargs instead - _ = _TestClassA(cfg=cfg) - _ = _TestClassB(cfg=cfg, input_shape="shape") - _ = _TestClassC(cfg=cfg, input_shape="shape") - _ = _TestClassD(cfg=cfg, input_shape="shape") - _ = _LegacySubClass(cfg=cfg, input_shape="shape") - _ = _NewSubClassNewInit(cfg=cfg, input_shape="shape") - _ = _LegacySubClassNotCfg(config=cfg, input_shape="shape") - - def testInitWithCfgOverwrite(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 999 # wrong config - with self.assertRaises(AssertionError): - _ = _TestClassA(cfg, arg3=3) - - # overwrite arg2 with correct config later: - _ = _TestClassA(cfg, arg2=2, arg3=3) - _ = _TestClassB(cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassC(cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassD(cfg, input_shape="shape", arg2=2, arg3=3) - - # call with kwargs cfg=cfg instead - _ = _TestClassA(cfg=cfg, arg2=2, arg3=3) - _ = _TestClassB(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassC(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassD(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - - def testInitWithCfgWrongArgs(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 2 - with self.assertRaises(TypeError): - _ = _TestClassB(cfg, "shape", not_exist=1) - with self.assertRaises(TypeError): - _ = _TestClassC(cfg, "shape", not_exist=1) - with self.assertRaises(TypeError): - _ = _TestClassD(cfg, "shape", not_exist=1) - - def testBadClass(self): - class _BadClass1: - @configurable - def __init__(self, a=1, b=2): - pass - - class _BadClass2: - @configurable - def __init__(self, a=1, b=2): - pass - - def from_config(self, cfg): # noqa - pass - - class _BadClass3: - @configurable - def __init__(self, a=1, b=2): - pass - - # 
bad name: must be cfg - @classmethod - def from_config(cls, config): # noqa - pass - - with self.assertRaises(AttributeError): - _ = _BadClass1(a=1) - - with self.assertRaises(TypeError): - _ = _BadClass2(a=1) - - with self.assertRaises(TypeError): - _ = _BadClass3(get_cfg()) - - def testFuncWithCfg(self): - cfg = get_cfg() - cfg.ARG1 = 10 - cfg.ARG3 = 30 - - self.assertEqual(_test_func(1), (1, 2, 3, 4)) - with self.assertRaises(TypeError): - _test_func(cfg) - self.assertEqual(_test_func(cfg, arg2=2), (10, 2, 30, 4)) - self.assertEqual(_test_func(cfg, arg1=100, arg2=20), (100, 20, 30, 4)) - self.assertEqual(_test_func(cfg, arg1=100, arg2=20, arg4=40), (100, 20, 30, 40)) - - self.assertTrue(callable(_test_func.from_config)) - - def testOmegaConf(self): - cfg = model_zoo.get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml") - cfg = OmegaConf.create(cfg.dump()) - if not torch.cuda.is_available(): - cfg.MODEL.DEVICE = "cpu" - # test that a model can be built with omegaconf config as well - build_model(cfg) diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/data/scripts/get_coco128.sh b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/data/scripts/get_coco128.sh deleted file mode 100644 index ee05a867e5644be8cc7549b89cad89d5e84573d0..0000000000000000000000000000000000000000 --- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/data/scripts/get_coco128.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -# Download COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) -# Example usage: bash data/scripts/get_coco128.sh -# parent -# ├── yolov5 -# └── datasets -# └── coco128 ← downloads here - -# Download/unzip images and labels -d='../datasets' # unzip directory -url=https://github.com/ultralytics/yolov5/releases/download/v1.0/ -f='coco128.zip' # or 'coco128-segments.zip', 68 MB -echo 'Downloading' $url$f ' ...' -curl -L $url$f -o $f && unzip -q $f -d $d && rm $f & - -wait # finish background tasks diff --git a/spaces/bwconrad/anime-character-classification/README.md b/spaces/bwconrad/anime-character-classification/README.md deleted file mode 100644 index 5899779f15095b8daf4747336b614a4ddeb29f3a..0000000000000000000000000000000000000000 --- a/spaces/bwconrad/anime-character-classification/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Anime Character Classification -emoji: 🌍 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.5 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/Misc/torchvision_imagenet_R_50.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/Misc/torchvision_imagenet_R_50.py deleted file mode 100644 index 0d75305bcf7445b98db84b3d489a1505d2fce5af..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/Misc/torchvision_imagenet_R_50.py +++ /dev/null @@ -1,150 +0,0 @@ -""" -An example config file to train a ImageNet classifier with detectron2. -Model and dataloader both come from torchvision. -This shows how to use detectron2 as a general engine for any new models and tasks. 
- -To run, use the following command: - -python tools/lazyconfig_train_net.py --config-file configs/Misc/torchvision_imagenet_R_50.py \ - --num-gpus 8 dataloader.train.dataset.root=/path/to/imagenet/ - -""" - - -import torch -from torch import nn -from torch.nn import functional as F -from omegaconf import OmegaConf -import torchvision -from torchvision.transforms import transforms as T -from torchvision.models.resnet import ResNet, Bottleneck -from fvcore.common.param_scheduler import MultiStepParamScheduler - -from detectron2.solver import WarmupParamScheduler -from detectron2.solver.build import get_default_optimizer_params -from detectron2.config import LazyCall as L -from detectron2.model_zoo import get_config -from detectron2.data.samplers import TrainingSampler, InferenceSampler -from detectron2.evaluation import DatasetEvaluator -from detectron2.utils import comm - - -""" -Note: Here we put reusable code (models, evaluation, data) together with configs just as a -proof-of-concept, to easily demonstrate what's needed to train a ImageNet classifier in detectron2. -Writing code in configs offers extreme flexibility but is often not a good engineering practice. -In practice, you might want to put code in your project and import them instead. -""" - - -def build_data_loader(dataset, batch_size, num_workers, training=True): - return torch.utils.data.DataLoader( - dataset, - sampler=(TrainingSampler if training else InferenceSampler)(len(dataset)), - batch_size=batch_size, - num_workers=num_workers, - pin_memory=True, - ) - - -class ClassificationNet(nn.Module): - def __init__(self, model: nn.Module): - super().__init__() - self.model = model - - @property - def device(self): - return list(self.model.parameters())[0].device - - def forward(self, inputs): - image, label = inputs - pred = self.model(image.to(self.device)) - if self.training: - label = label.to(self.device) - return F.cross_entropy(pred, label) - else: - return pred - - -class ClassificationAcc(DatasetEvaluator): - def reset(self): - self.corr = self.total = 0 - - def process(self, inputs, outputs): - image, label = inputs - self.corr += (outputs.argmax(dim=1).cpu() == label.cpu()).sum().item() - self.total += len(label) - - def evaluate(self): - all_corr_total = comm.all_gather([self.corr, self.total]) - corr = sum(x[0] for x in all_corr_total) - total = sum(x[1] for x in all_corr_total) - return {"accuracy": corr / total} - - -# --- End of code that could be in a project and be imported - - -dataloader = OmegaConf.create() -dataloader.train = L(build_data_loader)( - dataset=L(torchvision.datasets.ImageNet)( - root="/path/to/imagenet", - split="train", - transform=L(T.Compose)( - transforms=[ - L(T.RandomResizedCrop)(size=224), - L(T.RandomHorizontalFlip)(), - T.ToTensor(), - L(T.Normalize)(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)), - ] - ), - ), - batch_size=256 // 8, - num_workers=4, - training=True, -) - -dataloader.test = L(build_data_loader)( - dataset=L(torchvision.datasets.ImageNet)( - root="${...train.dataset.root}", - split="val", - transform=L(T.Compose)( - transforms=[ - L(T.Resize)(size=256), - L(T.CenterCrop)(size=224), - T.ToTensor(), - L(T.Normalize)(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)), - ] - ), - ), - batch_size=256 // 8, - num_workers=4, - training=False, -) - -dataloader.evaluator = L(ClassificationAcc)() - -model = L(ClassificationNet)( - model=(ResNet)(block=Bottleneck, layers=[3, 4, 6, 3], zero_init_residual=True) -) - - -optimizer = L(torch.optim.SGD)( - 
params=L(get_default_optimizer_params)(), - lr=0.1, - momentum=0.9, - weight_decay=1e-4, -) - -lr_multiplier = L(WarmupParamScheduler)( - scheduler=L(MultiStepParamScheduler)( - values=[1.0, 0.1, 0.01, 0.001], milestones=[30, 60, 90, 100] - ), - warmup_length=1 / 100, - warmup_factor=0.1, -) - - -train = get_config("common/train.py").train -train.init_checkpoint = None -train.max_iter = 100 * 1281167 // 256 diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/MViTv2/configs/cascade_mask_rcnn_mvitv2_b_in21k_3x.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/MViTv2/configs/cascade_mask_rcnn_mvitv2_b_in21k_3x.py deleted file mode 100644 index 7c3bdce0a2206b3afd1a33245a193292f0cd2a35..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/MViTv2/configs/cascade_mask_rcnn_mvitv2_b_in21k_3x.py +++ /dev/null @@ -1,3 +0,0 @@ -from .cascade_mask_rcnn_mvitv2_b_3x import model, dataloader, optimizer, lr_multiplier, train - -train.init_checkpoint = "detectron2://ImageNetPretrained/mvitv2/MViTv2_B_in21k.pyth" diff --git a/spaces/cc00/THUDM-chatglm-6b-int4-qe/README.md b/spaces/cc00/THUDM-chatglm-6b-int4-qe/README.md deleted file mode 100644 index 2df108d7e92cba478d58ff2288d7e295caa33016..0000000000000000000000000000000000000000 --- a/spaces/cc00/THUDM-chatglm-6b-int4-qe/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: THUDM Chatglm 6b Int4 Qe -emoji: 💻 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/charanhu/GPT-4/app.py b/spaces/charanhu/GPT-4/app.py deleted file mode 100644 index 71679e77bdd246e741ca26f0c29907b70415b542..0000000000000000000000000000000000000000 --- a/spaces/charanhu/GPT-4/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g").launch() \ No newline at end of file diff --git a/spaces/chenxc1029/Local-Code-Interpreter/functional.py b/spaces/chenxc1029/Local-Code-Interpreter/functional.py deleted file mode 100644 index c28e9c5298996da3319aa9630f8e01470e5a3b1c..0000000000000000000000000000000000000000 --- a/spaces/chenxc1029/Local-Code-Interpreter/functional.py +++ /dev/null @@ -1,116 +0,0 @@ -from bot_backend import * -import base64 -import time - - -def chat_completion(bot_backend: BotBackend): - model_choice = bot_backend.gpt_model_choice - config = bot_backend.config - kwargs_for_chat_completion = bot_backend.kwargs_for_chat_completion - - assert config['model'][model_choice]['available'], f"{model_choice} is not available for you API key" - - response = openai.ChatCompletion.create(**kwargs_for_chat_completion) - return response - - -def add_function_response_to_bot_history(content_to_display, history, unique_id): - images, text = [], [] - - # terminal output - error_occurred = False - for mark, out_str in content_to_display: - if mark in ('stdout', 'execute_result_text', 'display_text'): - text.append(out_str) - elif mark in ('execute_result_png', 'execute_result_jpeg', 'display_png', 'display_jpeg'): - if 'png' in mark: - images.append(('png', out_str)) - else: - images.append(('jpg', out_str)) - elif mark == 'error': - text.append(delete_color_control_char(out_str)) - error_occurred = True - text = '\n'.join(text).strip('\n') - if error_occurred: - history.append([None, f'❌Terminal output:\n```shell\n\n{text}\n```']) - 
else: - history.append([None, f'✔️Terminal output:\n```shell\n{text}\n```']) - - # image output - for filetype, img in images: - image_bytes = base64.b64decode(img) - temp_path = f'cache/temp_{unique_id}' - if not os.path.exists(temp_path): - os.mkdir(temp_path) - path = f'{temp_path}/{hash(time.time())}.{filetype}' - with open(path, 'wb') as f: - f.write(image_bytes) - history.append( - [ - None, - f'' - ] - ) - - -def parse_json(function_args: str, finished: bool): - """ - GPT may generate non-standard JSON format string, which contains '\n' in string value, leading to error when using - `json.loads()`. - Here we implement a parser to extract code directly from non-standard JSON string. - :return: code string if successfully parsed otherwise None - """ - parser_log = { - 'met_begin_{': False, - 'begin_"code"': False, - 'end_"code"': False, - 'met_:': False, - 'met_end_}': False, - 'met_end_code_"': False, - "code_begin_index": 0, - "code_end_index": 0 - } - try: - for index, char in enumerate(function_args): - if char == '{': - parser_log['met_begin_{'] = True - elif parser_log['met_begin_{'] and char == '"': - if parser_log['met_:']: - if finished: - parser_log['code_begin_index'] = index + 1 - break - else: - if index + 1 == len(function_args): - return '' - else: - temp_code_str = function_args[index + 1:] - if '\n' in temp_code_str: - return temp_code_str.strip('\n') - else: - return json.loads(function_args + '"}')['code'] - elif parser_log['begin_"code"']: - parser_log['end_"code"'] = True - else: - parser_log['begin_"code"'] = True - elif parser_log['end_"code"'] and char == ':': - parser_log['met_:'] = True - else: - continue - if finished: - for index, char in enumerate(function_args[::-1]): - back_index = -1 - index - if char == '}': - parser_log['met_end_}'] = True - elif parser_log['met_end_}'] and char == '"': - parser_log['code_end_index'] = back_index - 1 - break - else: - continue - code_str = function_args[parser_log['code_begin_index']: parser_log['code_end_index'] + 1] - if '\n' in code_str: - return code_str.strip('\n') - else: - return json.loads(function_args)['code'] - - except Exception as e: - return None diff --git a/spaces/chilge/taoli/vdecoder/__init__.py b/spaces/chilge/taoli/vdecoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/unknown_fields.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/unknown_fields.py deleted file mode 100644 index 3bd828619fbe66101352e977c53273f19d5fe5ed..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/unknown_fields.py +++ /dev/null @@ -1,120 +0,0 @@ -# Protocol Buffers - Google's data interchange format -# Copyright 2008 Google Inc. All rights reserved. -# https://developers.google.com/protocol-buffers/ -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above -# copyright notice, this list of conditions and the following disclaimer -# in the documentation and/or other materials provided with the -# distribution. 
-# * Neither the name of Google Inc. nor the names of its -# contributors may be used to endorse or promote products derived from -# this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -"""Contains Unknown Fields APIs. - -Simple usage example: - unknown_field_set = UnknownFieldSet(message) - for unknown_field in unknown_field_set: - wire_type = unknown_field.wire_type - field_number = unknown_field.field_number - data = unknown_field.data -""" - - -from google.protobuf.internal import api_implementation - -if api_implementation._c_module is not None: # pylint: disable=protected-access - UnknownFieldSet = api_implementation._c_module.UnknownFieldSet # pylint: disable=protected-access -else: - from google.protobuf.internal import decoder # pylint: disable=g-import-not-at-top - from google.protobuf.internal import wire_format # pylint: disable=g-import-not-at-top - - class UnknownField: - """A parsed unknown field.""" - - # Disallows assignment to other attributes. - __slots__ = ['_field_number', '_wire_type', '_data'] - - def __init__(self, field_number, wire_type, data): - self._field_number = field_number - self._wire_type = wire_type - self._data = data - return - - @property - def field_number(self): - return self._field_number - - @property - def wire_type(self): - return self._wire_type - - @property - def data(self): - return self._data - - class UnknownFieldSet: - """UnknownField container.""" - - # Disallows assignment to other attributes. 
- __slots__ = ['_values'] - - def __init__(self, msg): - - def InternalAdd(field_number, wire_type, data): - unknown_field = UnknownField(field_number, wire_type, data) - self._values.append(unknown_field) - - self._values = [] - msg_des = msg.DESCRIPTOR - # pylint: disable=protected-access - unknown_fields = msg._unknown_fields - if (msg_des.has_options and - msg_des.GetOptions().message_set_wire_format): - local_decoder = decoder.UnknownMessageSetItemDecoder() - for _, buffer in unknown_fields: - (field_number, data) = local_decoder(memoryview(buffer)) - InternalAdd(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED, data) - else: - for tag_bytes, buffer in unknown_fields: - # pylint: disable=protected-access - (tag, _) = decoder._DecodeVarint(tag_bytes, 0) - field_number, wire_type = wire_format.UnpackTag(tag) - if field_number == 0: - raise RuntimeError('Field number 0 is illegal.') - (data, _) = decoder._DecodeUnknownField( - memoryview(buffer), 0, wire_type) - InternalAdd(field_number, wire_type, data) - - def __getitem__(self, index): - size = len(self._values) - if index < 0: - index += size - if index < 0 or index >= size: - raise IndexError('index %d out of range'.index) - - return self._values[index] - - def __len__(self): - return len(self._values) - - def __iter__(self): - return iter(self._values) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/markdown.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/markdown.py deleted file mode 100644 index a441a2de706034bdbb2cc719af43a3572bd65b11..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/markdown.py +++ /dev/null @@ -1,89 +0,0 @@ -"""gr.Markdown() component.""" - -from __future__ import annotations - -import inspect -from typing import Any, Callable, Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import StringSerializable - -from gradio import utils -from gradio.components.base import Component, IOComponent, _Keywords -from gradio.events import ( - Changeable, -) - -set_documentation_group("component") - - -@document() -class Markdown(IOComponent, Changeable, StringSerializable): - """ - Used to render arbitrary Markdown output. Can also render latex enclosed by dollar signs. - Preprocessing: this component does *not* accept input. - Postprocessing: expects a valid {str} that can be rendered as Markdown. - - Demos: blocks_hello, blocks_kinematics - Guides: key-features - """ - - def __init__( - self, - value: str | Callable = "", - *, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - **kwargs, - ): - """ - Parameters: - value: Value to show in Markdown component. If callable, the function will be called whenever the app loads to set the initial value of the component. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. 
- """ - self.md = utils.get_markdown_parser() - IOComponent.__init__( - self, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - - def postprocess(self, y: str | None) -> str | None: - """ - Parameters: - y: markdown representation - Returns: - HTML rendering of markdown - """ - if y is None: - return None - unindented_y = inspect.cleandoc(y) - return self.md.render(unindented_y) - - def get_config(self): - return { - "value": self.value, - **Component.get_config(self), - } - - @staticmethod - def update( - value: Any | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - visible: bool | None = None, - ): - updated_config = { - "visible": visible, - "value": value, - "__type__": "update", - } - return updated_config - - def as_example(self, input_data: str | None) -> str: - postprocessed = self.postprocess(input_data) - return postprocessed if postprocessed else "" diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/amrnbdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/amrnbdec.c deleted file mode 100644 index bfdcbba7787ef294d080c9f7af45a4eeb1d5cf39..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/amrnbdec.c +++ /dev/null @@ -1,1109 +0,0 @@ -/* - * AMR narrowband decoder - * Copyright (c) 2006-2007 Robert Swain - * Copyright (c) 2009 Colin McQuillan - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - - -/** - * @file - * AMR narrowband decoder - * - * This decoder uses floats for simplicity and so is not bit-exact. One - * difference is that differences in phase can accumulate. The test sequences - * in 3GPP TS 26.074 can still be useful. - * - * - Comparing this file's output to the output of the ref decoder gives a - * PSNR of 30 to 80. Plotting the output samples shows a difference in - * phase in some areas. - * - * - Comparing both decoders against their input, this decoder gives a similar - * PSNR. If the test sequence homing frames are removed (this decoder does - * not detect them), the PSNR is at least as good as the reference on 140 - * out of 169 tests. 
- */
-
-
-#include <string.h>
-#include <math.h>
-
-#include "libavutil/channel_layout.h"
-#include "avcodec.h"
-#include "libavutil/common.h"
-#include "libavutil/avassert.h"
-#include "celp_math.h"
-#include "celp_filters.h"
-#include "acelp_filters.h"
-#include "acelp_vectors.h"
-#include "acelp_pitch_delay.h"
-#include "lsp.h"
-#include "amr.h"
-#include "codec_internal.h"
-#include "decode.h"
-
-#include "amrnbdata.h"
-
-#define AMR_BLOCK_SIZE   160     ///< samples per frame
-#define AMR_SAMPLE_BOUND 32768.0 ///< threshold for synthesis overflow
-
-/**
- * Scale from constructed speech to [-1,1]
- *
- * AMR is designed to produce 16-bit PCM samples (3GPP TS 26.090 4.2) but
- * upscales by two (section 6.2.2).
- *
- * Fundamentally, this scale is determined by energy_mean through
- * the fixed vector contribution to the excitation vector.
- */
-#define AMR_SAMPLE_SCALE (2.0 / 32768.0)
-
-/** Prediction factor for 12.2kbit/s mode */
-#define PRED_FAC_MODE_12k2 0.65
-
-#define LSF_R_FAC               (8000.0 / 32768.0) ///< LSF residual tables to Hertz
-#define MIN_LSF_SPACING         (50.0488 / 8000.0) ///< Ensures stability of LPC filter
-#define PITCH_LAG_MIN_MODE_12k2 18                 ///< Lower bound on decoded lag search in 12.2kbit/s mode
-
-/** Initial energy in dB. Also used for bad frames (unimplemented). */
-#define MIN_ENERGY -14.0
-
-/** Maximum sharpening factor
- *
- * The specification says 0.8, which should be 13107, but the reference C code
- * uses 13017 instead. (Amusingly the same applies to SHARP_MAX in g729dec.c.)
- */
-#define SHARP_MAX 0.79449462890625
-
-/** Number of impulse response coefficients used for tilt factor */
-#define AMR_TILT_RESPONSE 22
-/** Tilt factor = 1st reflection coefficient * gamma_t */
-#define AMR_TILT_GAMMA_T 0.8
-/** Adaptive gain control factor used in post-filter */
-#define AMR_AGC_ALPHA 0.9
-
-typedef struct AMRContext {
-    AMRNBFrame frame;            ///< decoded AMR parameters (lsf coefficients, codebook indexes, etc)
-    uint8_t bad_frame_indicator; ///< bad frame ?
1 : 0 - enum Mode cur_frame_mode; - - int16_t prev_lsf_r[LP_FILTER_ORDER]; ///< residual LSF vector from previous subframe - double lsp[4][LP_FILTER_ORDER]; ///< lsp vectors from current frame - double prev_lsp_sub4[LP_FILTER_ORDER]; ///< lsp vector for the 4th subframe of the previous frame - - float lsf_q[4][LP_FILTER_ORDER]; ///< Interpolated LSF vector for fixed gain smoothing - float lsf_avg[LP_FILTER_ORDER]; ///< vector of averaged lsf vector - - float lpc[4][LP_FILTER_ORDER]; ///< lpc coefficient vectors for 4 subframes - - uint8_t pitch_lag_int; ///< integer part of pitch lag from current subframe - - float excitation_buf[PITCH_DELAY_MAX + LP_FILTER_ORDER + 1 + AMR_SUBFRAME_SIZE]; ///< current excitation and all necessary excitation history - float *excitation; ///< pointer to the current excitation vector in excitation_buf - - float pitch_vector[AMR_SUBFRAME_SIZE]; ///< adaptive code book (pitch) vector - float fixed_vector[AMR_SUBFRAME_SIZE]; ///< algebraic codebook (fixed) vector (must be kept zero between frames) - - float prediction_error[4]; ///< quantified prediction errors {20log10(^gamma_gc)} for previous four subframes - float pitch_gain[5]; ///< quantified pitch gains for the current and previous four subframes - float fixed_gain[5]; ///< quantified fixed gains for the current and previous four subframes - - float beta; ///< previous pitch_gain, bounded by [0.0,SHARP_MAX] - uint8_t diff_count; ///< the number of subframes for which diff has been above 0.65 - uint8_t hang_count; ///< the number of subframes since a hangover period started - - float prev_sparse_fixed_gain; ///< previous fixed gain; used by anti-sparseness processing to determine "onset" - uint8_t prev_ir_filter_nr; ///< previous impulse response filter "impNr": 0 - strong, 1 - medium, 2 - none - uint8_t ir_filter_onset; ///< flag for impulse response filter strength - - float postfilter_mem[10]; ///< previous intermediate values in the formant filter - float tilt_mem; ///< previous input to tilt compensation filter - float postfilter_agc; ///< previous factor used for adaptive gain control - float high_pass_mem[2]; ///< previous intermediate values in the high-pass filter - - float samples_in[LP_FILTER_ORDER + AMR_SUBFRAME_SIZE]; ///< floating point samples - - ACELPFContext acelpf_ctx; ///< context for filters for ACELP-based codecs - ACELPVContext acelpv_ctx; ///< context for vector operations for ACELP-based codecs - CELPFContext celpf_ctx; ///< context for filters for CELP-based codecs - CELPMContext celpm_ctx; ///< context for fixed point math operations - -} AMRContext; - -typedef struct AMRChannelsContext { - AMRContext ch[2]; -} AMRChannelsContext; - -/** Double version of ff_weighted_vector_sumf() */ -static void weighted_vector_sumd(double *out, const double *in_a, - const double *in_b, double weight_coeff_a, - double weight_coeff_b, int length) -{ - int i; - - for (i = 0; i < length; i++) - out[i] = weight_coeff_a * in_a[i] - + weight_coeff_b * in_b[i]; -} - -static av_cold int amrnb_decode_init(AVCodecContext *avctx) -{ - AMRChannelsContext *s = avctx->priv_data; - int i; - - if (avctx->ch_layout.nb_channels > 2) { - avpriv_report_missing_feature(avctx, ">2 channel AMR"); - return AVERROR_PATCHWELCOME; - } - - if (!avctx->ch_layout.nb_channels) { - av_channel_layout_uninit(&avctx->ch_layout); - avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_MONO; - } - if (!avctx->sample_rate) - avctx->sample_rate = 8000; - avctx->sample_fmt = AV_SAMPLE_FMT_FLTP; - - for (int ch = 0; ch < 
avctx->ch_layout.nb_channels; ch++) { - AMRContext *p = &s->ch[ch]; - // p->excitation always points to the same position in p->excitation_buf - p->excitation = &p->excitation_buf[PITCH_DELAY_MAX + LP_FILTER_ORDER + 1]; - - for (i = 0; i < LP_FILTER_ORDER; i++) { - p->prev_lsp_sub4[i] = lsp_sub4_init[i] * 1000 / (float)(1 << 15); - p->lsf_avg[i] = p->lsf_q[3][i] = lsp_avg_init[i] / (float)(1 << 15); - } - - for (i = 0; i < 4; i++) - p->prediction_error[i] = MIN_ENERGY; - - ff_acelp_filter_init(&p->acelpf_ctx); - ff_acelp_vectors_init(&p->acelpv_ctx); - ff_celp_filter_init(&p->celpf_ctx); - ff_celp_math_init(&p->celpm_ctx); - } - - return 0; -} - - -/** - * Unpack an RFC4867 speech frame into the AMR frame mode and parameters. - * - * The order of speech bits is specified by 3GPP TS 26.101. - * - * @param p the context - * @param buf pointer to the input buffer - * @param buf_size size of the input buffer - * - * @return the frame mode - */ -static enum Mode unpack_bitstream(AMRContext *p, const uint8_t *buf, - int buf_size) -{ - enum Mode mode; - - // Decode the first octet. - mode = buf[0] >> 3 & 0x0F; // frame type - p->bad_frame_indicator = (buf[0] & 0x4) != 0x4; // quality bit - - if (mode >= N_MODES || buf_size < frame_sizes_nb[mode] + 1) { - return NO_DATA; - } - - if (mode < MODE_DTX) - ff_amr_bit_reorder((uint16_t *) &p->frame, sizeof(AMRNBFrame), buf + 1, - amr_unpacking_bitmaps_per_mode[mode]); - - return mode; -} - - -/// @name AMR pitch LPC coefficient decoding functions -/// @{ - -/** - * Interpolate the LSF vector (used for fixed gain smoothing). - * The interpolation is done over all four subframes even in MODE_12k2. - * - * @param[in] ctx The Context - * @param[in,out] lsf_q LSFs in [0,1] for each subframe - * @param[in] lsf_new New LSFs in [0,1] for subframe 4 - */ -static void interpolate_lsf(ACELPVContext *ctx, float lsf_q[4][LP_FILTER_ORDER], float *lsf_new) -{ - int i; - - for (i = 0; i < 4; i++) - ctx->weighted_vector_sumf(lsf_q[i], lsf_q[3], lsf_new, - 0.25 * (3 - i), 0.25 * (i + 1), - LP_FILTER_ORDER); -} - -/** - * Decode a set of 5 split-matrix quantized lsf indexes into an lsp vector. - * - * @param p the context - * @param lsp output LSP vector - * @param lsf_no_r LSF vector without the residual vector added - * @param lsf_quantizer pointers to LSF dictionary tables - * @param quantizer_offset offset in tables - * @param sign for the 3 dictionary table - * @param update store data for computing the next frame's LSFs - */ -static void lsf2lsp_for_mode12k2(AMRContext *p, double lsp[LP_FILTER_ORDER], - const float lsf_no_r[LP_FILTER_ORDER], - const int16_t *lsf_quantizer[5], - const int quantizer_offset, - const int sign, const int update) -{ - int16_t lsf_r[LP_FILTER_ORDER]; // residual LSF vector - float lsf_q[LP_FILTER_ORDER]; // quantified LSF vector - int i; - - for (i = 0; i < LP_FILTER_ORDER >> 1; i++) - memcpy(&lsf_r[i << 1], &lsf_quantizer[i][quantizer_offset], - 2 * sizeof(*lsf_r)); - - if (sign) { - lsf_r[4] *= -1; - lsf_r[5] *= -1; - } - - if (update) - memcpy(p->prev_lsf_r, lsf_r, LP_FILTER_ORDER * sizeof(*lsf_r)); - - for (i = 0; i < LP_FILTER_ORDER; i++) - lsf_q[i] = lsf_r[i] * (LSF_R_FAC / 8000.0) + lsf_no_r[i] * (1.0 / 8000.0); - - ff_set_min_dist_lsf(lsf_q, MIN_LSF_SPACING, LP_FILTER_ORDER); - - if (update) - interpolate_lsf(&p->acelpv_ctx, p->lsf_q, lsf_q); - - ff_acelp_lsf2lspd(lsp, lsf_q, LP_FILTER_ORDER); -} - -/** - * Decode a set of 5 split-matrix quantized lsf indexes into 2 lsp vectors. 
- * - * @param p pointer to the AMRContext - */ -static void lsf2lsp_5(AMRContext *p) -{ - const uint16_t *lsf_param = p->frame.lsf; - float lsf_no_r[LP_FILTER_ORDER]; // LSFs without the residual vector - const int16_t *lsf_quantizer[5]; - int i; - - lsf_quantizer[0] = lsf_5_1[lsf_param[0]]; - lsf_quantizer[1] = lsf_5_2[lsf_param[1]]; - lsf_quantizer[2] = lsf_5_3[lsf_param[2] >> 1]; - lsf_quantizer[3] = lsf_5_4[lsf_param[3]]; - lsf_quantizer[4] = lsf_5_5[lsf_param[4]]; - - for (i = 0; i < LP_FILTER_ORDER; i++) - lsf_no_r[i] = p->prev_lsf_r[i] * LSF_R_FAC * PRED_FAC_MODE_12k2 + lsf_5_mean[i]; - - lsf2lsp_for_mode12k2(p, p->lsp[1], lsf_no_r, lsf_quantizer, 0, lsf_param[2] & 1, 0); - lsf2lsp_for_mode12k2(p, p->lsp[3], lsf_no_r, lsf_quantizer, 2, lsf_param[2] & 1, 1); - - // interpolate LSP vectors at subframes 1 and 3 - weighted_vector_sumd(p->lsp[0], p->prev_lsp_sub4, p->lsp[1], 0.5, 0.5, LP_FILTER_ORDER); - weighted_vector_sumd(p->lsp[2], p->lsp[1] , p->lsp[3], 0.5, 0.5, LP_FILTER_ORDER); -} - -/** - * Decode a set of 3 split-matrix quantized lsf indexes into an lsp vector. - * - * @param p pointer to the AMRContext - */ -static void lsf2lsp_3(AMRContext *p) -{ - const uint16_t *lsf_param = p->frame.lsf; - int16_t lsf_r[LP_FILTER_ORDER]; // residual LSF vector - float lsf_q[LP_FILTER_ORDER]; // quantified LSF vector - const int16_t *lsf_quantizer; - int i, j; - - lsf_quantizer = (p->cur_frame_mode == MODE_7k95 ? lsf_3_1_MODE_7k95 : lsf_3_1)[lsf_param[0]]; - memcpy(lsf_r, lsf_quantizer, 3 * sizeof(*lsf_r)); - - lsf_quantizer = lsf_3_2[lsf_param[1] << (p->cur_frame_mode <= MODE_5k15)]; - memcpy(lsf_r + 3, lsf_quantizer, 3 * sizeof(*lsf_r)); - - lsf_quantizer = (p->cur_frame_mode <= MODE_5k15 ? lsf_3_3_MODE_5k15 : lsf_3_3)[lsf_param[2]]; - memcpy(lsf_r + 6, lsf_quantizer, 4 * sizeof(*lsf_r)); - - // calculate mean-removed LSF vector and add mean - for (i = 0; i < LP_FILTER_ORDER; i++) - lsf_q[i] = (lsf_r[i] + p->prev_lsf_r[i] * pred_fac[i]) * (LSF_R_FAC / 8000.0) + lsf_3_mean[i] * (1.0 / 8000.0); - - ff_set_min_dist_lsf(lsf_q, MIN_LSF_SPACING, LP_FILTER_ORDER); - - // store data for computing the next frame's LSFs - interpolate_lsf(&p->acelpv_ctx, p->lsf_q, lsf_q); - memcpy(p->prev_lsf_r, lsf_r, LP_FILTER_ORDER * sizeof(*lsf_r)); - - ff_acelp_lsf2lspd(p->lsp[3], lsf_q, LP_FILTER_ORDER); - - // interpolate LSP vectors at subframes 1, 2 and 3 - for (i = 1; i <= 3; i++) - for(j = 0; j < LP_FILTER_ORDER; j++) - p->lsp[i-1][j] = p->prev_lsp_sub4[j] + - (p->lsp[3][j] - p->prev_lsp_sub4[j]) * 0.25 * i; -} - -/// @} - - -/// @name AMR pitch vector decoding functions -/// @{ - -/** - * Like ff_decode_pitch_lag(), but with 1/6 resolution - */ -static void decode_pitch_lag_1_6(int *lag_int, int *lag_frac, int pitch_index, - const int prev_lag_int, const int subframe) -{ - if (subframe == 0 || subframe == 2) { - if (pitch_index < 463) { - *lag_int = (pitch_index + 107) * 10923 >> 16; - *lag_frac = pitch_index - *lag_int * 6 + 105; - } else { - *lag_int = pitch_index - 368; - *lag_frac = 0; - } - } else { - *lag_int = ((pitch_index + 5) * 10923 >> 16) - 1; - *lag_frac = pitch_index - *lag_int * 6 - 3; - *lag_int += av_clip(prev_lag_int - 5, PITCH_LAG_MIN_MODE_12k2, - PITCH_DELAY_MAX - 9); - } -} - -static void decode_pitch_vector(AMRContext *p, - const AMRNBSubframe *amr_subframe, - const int subframe) -{ - int pitch_lag_int, pitch_lag_frac; - enum Mode mode = p->cur_frame_mode; - - if (p->cur_frame_mode == MODE_12k2) { - decode_pitch_lag_1_6(&pitch_lag_int, &pitch_lag_frac, - amr_subframe->p_lag, 
p->pitch_lag_int, - subframe); - } else { - ff_decode_pitch_lag(&pitch_lag_int, &pitch_lag_frac, - amr_subframe->p_lag, - p->pitch_lag_int, subframe, - mode != MODE_4k75 && mode != MODE_5k15, - mode <= MODE_6k7 ? 4 : (mode == MODE_7k95 ? 5 : 6)); - pitch_lag_frac *= 2; - } - - p->pitch_lag_int = pitch_lag_int; // store previous lag in a uint8_t - - pitch_lag_int += pitch_lag_frac > 0; - - /* Calculate the pitch vector by interpolating the past excitation at the - pitch lag using a b60 hamming windowed sinc function. */ - p->acelpf_ctx.acelp_interpolatef(p->excitation, - p->excitation + 1 - pitch_lag_int, - ff_b60_sinc, 6, - pitch_lag_frac + 6 - 6*(pitch_lag_frac > 0), - 10, AMR_SUBFRAME_SIZE); - - memcpy(p->pitch_vector, p->excitation, AMR_SUBFRAME_SIZE * sizeof(float)); -} - -/// @} - - -/// @name AMR algebraic code book (fixed) vector decoding functions -/// @{ - -/** - * Decode a 10-bit algebraic codebook index from a 10.2 kbit/s frame. - */ -static void decode_10bit_pulse(int code, int pulse_position[8], - int i1, int i2, int i3) -{ - // coded using 7+3 bits with the 3 LSBs being, individually, the LSB of 1 of - // the 3 pulses and the upper 7 bits being coded in base 5 - const uint8_t *positions = base_five_table[code >> 3]; - pulse_position[i1] = (positions[2] << 1) + ( code & 1); - pulse_position[i2] = (positions[1] << 1) + ((code >> 1) & 1); - pulse_position[i3] = (positions[0] << 1) + ((code >> 2) & 1); -} - -/** - * Decode the algebraic codebook index to pulse positions and signs and - * construct the algebraic codebook vector for MODE_10k2. - * - * @param fixed_index positions of the eight pulses - * @param fixed_sparse pointer to the algebraic codebook vector - */ -static void decode_8_pulses_31bits(const int16_t *fixed_index, - AMRFixed *fixed_sparse) -{ - int pulse_position[8]; - int i, temp; - - decode_10bit_pulse(fixed_index[4], pulse_position, 0, 4, 1); - decode_10bit_pulse(fixed_index[5], pulse_position, 2, 6, 5); - - // coded using 5+2 bits with the 2 LSBs being, individually, the LSB of 1 of - // the 2 pulses and the upper 5 bits being coded in base 5 - temp = ((fixed_index[6] >> 2) * 25 + 12) >> 5; - pulse_position[3] = temp % 5; - pulse_position[7] = temp / 5; - if (pulse_position[7] & 1) - pulse_position[3] = 4 - pulse_position[3]; - pulse_position[3] = (pulse_position[3] << 1) + ( fixed_index[6] & 1); - pulse_position[7] = (pulse_position[7] << 1) + ((fixed_index[6] >> 1) & 1); - - fixed_sparse->n = 8; - for (i = 0; i < 4; i++) { - const int pos1 = (pulse_position[i] << 2) + i; - const int pos2 = (pulse_position[i + 4] << 2) + i; - const float sign = fixed_index[i] ? -1.0 : 1.0; - fixed_sparse->x[i ] = pos1; - fixed_sparse->x[i + 4] = pos2; - fixed_sparse->y[i ] = sign; - fixed_sparse->y[i + 4] = pos2 < pos1 ? -sign : sign; - } -} - -/** - * Decode the algebraic codebook index to pulse positions and signs, - * then construct the algebraic codebook vector. 
- * - * nb of pulses | bits encoding pulses - * For MODE_4k75 or MODE_5k15, 2 | 1-3, 4-6, 7 - * MODE_5k9, 2 | 1, 2-4, 5-6, 7-9 - * MODE_6k7, 3 | 1-3, 4, 5-7, 8, 9-11 - * MODE_7k4 or MODE_7k95, 4 | 1-3, 4-6, 7-9, 10, 11-13 - * - * @param fixed_sparse pointer to the algebraic codebook vector - * @param pulses algebraic codebook indexes - * @param mode mode of the current frame - * @param subframe current subframe number - */ -static void decode_fixed_sparse(AMRFixed *fixed_sparse, const uint16_t *pulses, - const enum Mode mode, const int subframe) -{ - av_assert1(MODE_4k75 <= (signed)mode && mode <= MODE_12k2); - - if (mode == MODE_12k2) { - ff_decode_10_pulses_35bits(pulses, fixed_sparse, gray_decode, 5, 3); - } else if (mode == MODE_10k2) { - decode_8_pulses_31bits(pulses, fixed_sparse); - } else { - int *pulse_position = fixed_sparse->x; - int i, pulse_subset; - const int fixed_index = pulses[0]; - - if (mode <= MODE_5k15) { - pulse_subset = ((fixed_index >> 3) & 8) + (subframe << 1); - pulse_position[0] = ( fixed_index & 7) * 5 + track_position[pulse_subset]; - pulse_position[1] = ((fixed_index >> 3) & 7) * 5 + track_position[pulse_subset + 1]; - fixed_sparse->n = 2; - } else if (mode == MODE_5k9) { - pulse_subset = ((fixed_index & 1) << 1) + 1; - pulse_position[0] = ((fixed_index >> 1) & 7) * 5 + pulse_subset; - pulse_subset = (fixed_index >> 4) & 3; - pulse_position[1] = ((fixed_index >> 6) & 7) * 5 + pulse_subset + (pulse_subset == 3 ? 1 : 0); - fixed_sparse->n = pulse_position[0] == pulse_position[1] ? 1 : 2; - } else if (mode == MODE_6k7) { - pulse_position[0] = (fixed_index & 7) * 5; - pulse_subset = (fixed_index >> 2) & 2; - pulse_position[1] = ((fixed_index >> 4) & 7) * 5 + pulse_subset + 1; - pulse_subset = (fixed_index >> 6) & 2; - pulse_position[2] = ((fixed_index >> 8) & 7) * 5 + pulse_subset + 2; - fixed_sparse->n = 3; - } else { // mode <= MODE_7k95 - pulse_position[0] = gray_decode[ fixed_index & 7]; - pulse_position[1] = gray_decode[(fixed_index >> 3) & 7] + 1; - pulse_position[2] = gray_decode[(fixed_index >> 6) & 7] + 2; - pulse_subset = (fixed_index >> 9) & 1; - pulse_position[3] = gray_decode[(fixed_index >> 10) & 7] + pulse_subset + 3; - fixed_sparse->n = 4; - } - for (i = 0; i < fixed_sparse->n; i++) - fixed_sparse->y[i] = (pulses[1] >> i) & 1 ? 1.0 : -1.0; - } -} - -/** - * Apply pitch lag to obtain the sharpened fixed vector (section 6.1.2) - * - * @param p the context - * @param subframe unpacked amr subframe - * @param mode mode of the current frame - * @param fixed_sparse sparse representation of the fixed vector - */ -static void pitch_sharpening(AMRContext *p, int subframe, enum Mode mode, - AMRFixed *fixed_sparse) -{ - // The spec suggests the current pitch gain is always used, but in other - // modes the pitch and codebook gains are jointly quantized (sec 5.8.2) - // so the codebook gain cannot depend on the quantized pitch gain. - if (mode == MODE_12k2) - p->beta = FFMIN(p->pitch_gain[4], 1.0); - - fixed_sparse->pitch_lag = p->pitch_lag_int; - fixed_sparse->pitch_fac = p->beta; - - // Save pitch sharpening factor for the next subframe - // MODE_4k75 only updates on the 2nd and 4th subframes - this follows from - // the fact that the gains for two subframes are jointly quantized. 
- if (mode != MODE_4k75 || subframe & 1) - p->beta = av_clipf(p->pitch_gain[4], 0.0, SHARP_MAX); -} -/// @} - - -/// @name AMR gain decoding functions -/// @{ - -/** - * fixed gain smoothing - * Note that where the spec specifies the "spectrum in the q domain" - * in section 6.1.4, in fact frequencies should be used. - * - * @param p the context - * @param lsf LSFs for the current subframe, in the range [0,1] - * @param lsf_avg averaged LSFs - * @param mode mode of the current frame - * - * @return fixed gain smoothed - */ -static float fixed_gain_smooth(AMRContext *p , const float *lsf, - const float *lsf_avg, const enum Mode mode) -{ - float diff = 0.0; - int i; - - for (i = 0; i < LP_FILTER_ORDER; i++) - diff += fabs(lsf_avg[i] - lsf[i]) / lsf_avg[i]; - - // If diff is large for ten subframes, disable smoothing for a 40-subframe - // hangover period. - p->diff_count++; - if (diff <= 0.65) - p->diff_count = 0; - - if (p->diff_count > 10) { - p->hang_count = 0; - p->diff_count--; // don't let diff_count overflow - } - - if (p->hang_count < 40) { - p->hang_count++; - } else if (mode < MODE_7k4 || mode == MODE_10k2) { - const float smoothing_factor = av_clipf(4.0 * diff - 1.6, 0.0, 1.0); - const float fixed_gain_mean = (p->fixed_gain[0] + p->fixed_gain[1] + - p->fixed_gain[2] + p->fixed_gain[3] + - p->fixed_gain[4]) * 0.2; - return smoothing_factor * p->fixed_gain[4] + - (1.0 - smoothing_factor) * fixed_gain_mean; - } - return p->fixed_gain[4]; -} - -/** - * Decode pitch gain and fixed gain factor (part of section 6.1.3). - * - * @param p the context - * @param amr_subframe unpacked amr subframe - * @param mode mode of the current frame - * @param subframe current subframe number - * @param fixed_gain_factor decoded gain correction factor - */ -static void decode_gains(AMRContext *p, const AMRNBSubframe *amr_subframe, - const enum Mode mode, const int subframe, - float *fixed_gain_factor) -{ - if (mode == MODE_12k2 || mode == MODE_7k95) { - p->pitch_gain[4] = qua_gain_pit [amr_subframe->p_gain ] - * (1.0 / 16384.0); - *fixed_gain_factor = qua_gain_code[amr_subframe->fixed_gain] - * (1.0 / 2048.0); - } else { - const uint16_t *gains; - - if (mode >= MODE_6k7) { - gains = gains_high[amr_subframe->p_gain]; - } else if (mode >= MODE_5k15) { - gains = gains_low [amr_subframe->p_gain]; - } else { - // gain index is only coded in subframes 0,2 for MODE_4k75 - gains = gains_MODE_4k75[(p->frame.subframe[subframe & 2].p_gain << 1) + (subframe & 1)]; - } - - p->pitch_gain[4] = gains[0] * (1.0 / 16384.0); - *fixed_gain_factor = gains[1] * (1.0 / 4096.0); - } -} - -/// @} - - -/// @name AMR preprocessing functions -/// @{ - -/** - * Circularly convolve a sparse fixed vector with a phase dispersion impulse - * response filter (D.6.2 of G.729 and 6.1.5 of AMR). 
- * - * @param out vector with filter applied - * @param in source vector - * @param filter phase filter coefficients - * - * out[n] = sum(i,0,len-1){ in[i] * filter[(len + n - i)%len] } - */ -static void apply_ir_filter(float *out, const AMRFixed *in, - const float *filter) -{ - float filter1[AMR_SUBFRAME_SIZE], ///< filters at pitch lag*1 and *2 - filter2[AMR_SUBFRAME_SIZE]; - int lag = in->pitch_lag; - float fac = in->pitch_fac; - int i; - - if (lag < AMR_SUBFRAME_SIZE) { - ff_celp_circ_addf(filter1, filter, filter, lag, fac, - AMR_SUBFRAME_SIZE); - - if (lag < AMR_SUBFRAME_SIZE >> 1) - ff_celp_circ_addf(filter2, filter, filter1, lag, fac, - AMR_SUBFRAME_SIZE); - } - - memset(out, 0, sizeof(float) * AMR_SUBFRAME_SIZE); - for (i = 0; i < in->n; i++) { - int x = in->x[i]; - float y = in->y[i]; - const float *filterp; - - if (x >= AMR_SUBFRAME_SIZE - lag) { - filterp = filter; - } else if (x >= AMR_SUBFRAME_SIZE - (lag << 1)) { - filterp = filter1; - } else - filterp = filter2; - - ff_celp_circ_addf(out, out, filterp, x, y, AMR_SUBFRAME_SIZE); - } -} - -/** - * Reduce fixed vector sparseness by smoothing with one of three IR filters. - * Also know as "adaptive phase dispersion". - * - * This implements 3GPP TS 26.090 section 6.1(5). - * - * @param p the context - * @param fixed_sparse algebraic codebook vector - * @param fixed_vector unfiltered fixed vector - * @param fixed_gain smoothed gain - * @param out space for modified vector if necessary - */ -static const float *anti_sparseness(AMRContext *p, AMRFixed *fixed_sparse, - const float *fixed_vector, - float fixed_gain, float *out) -{ - int ir_filter_nr; - - if (p->pitch_gain[4] < 0.6) { - ir_filter_nr = 0; // strong filtering - } else if (p->pitch_gain[4] < 0.9) { - ir_filter_nr = 1; // medium filtering - } else - ir_filter_nr = 2; // no filtering - - // detect 'onset' - if (fixed_gain > 2.0 * p->prev_sparse_fixed_gain) { - p->ir_filter_onset = 2; - } else if (p->ir_filter_onset) - p->ir_filter_onset--; - - if (!p->ir_filter_onset) { - int i, count = 0; - - for (i = 0; i < 5; i++) - if (p->pitch_gain[i] < 0.6) - count++; - if (count > 2) - ir_filter_nr = 0; - - if (ir_filter_nr > p->prev_ir_filter_nr + 1) - ir_filter_nr--; - } else if (ir_filter_nr < 2) - ir_filter_nr++; - - // Disable filtering for very low level of fixed_gain. - // Note this step is not specified in the technical description but is in - // the reference source in the function Ph_disp. - if (fixed_gain < 5.0) - ir_filter_nr = 2; - - if (p->cur_frame_mode != MODE_7k4 && p->cur_frame_mode < MODE_10k2 - && ir_filter_nr < 2) { - apply_ir_filter(out, fixed_sparse, - (p->cur_frame_mode == MODE_7k95 ? - ir_filters_lookup_MODE_7k95 : - ir_filters_lookup)[ir_filter_nr]); - fixed_vector = out; - } - - // update ir filter strength history - p->prev_ir_filter_nr = ir_filter_nr; - p->prev_sparse_fixed_gain = fixed_gain; - - return fixed_vector; -} - -/// @} - - -/// @name AMR synthesis functions -/// @{ - -/** - * Conduct 10th order linear predictive coding synthesis. 
- * - * @param p pointer to the AMRContext - * @param lpc pointer to the LPC coefficients - * @param fixed_gain fixed codebook gain for synthesis - * @param fixed_vector algebraic codebook vector - * @param samples pointer to the output speech samples - * @param overflow 16-bit overflow flag - */ -static int synthesis(AMRContext *p, float *lpc, - float fixed_gain, const float *fixed_vector, - float *samples, uint8_t overflow) -{ - int i; - float excitation[AMR_SUBFRAME_SIZE]; - - // if an overflow has been detected, the pitch vector is scaled down by a - // factor of 4 - if (overflow) - for (i = 0; i < AMR_SUBFRAME_SIZE; i++) - p->pitch_vector[i] *= 0.25; - - p->acelpv_ctx.weighted_vector_sumf(excitation, p->pitch_vector, fixed_vector, - p->pitch_gain[4], fixed_gain, AMR_SUBFRAME_SIZE); - - // emphasize pitch vector contribution - if (p->pitch_gain[4] > 0.5 && !overflow) { - float energy = p->celpm_ctx.dot_productf(excitation, excitation, - AMR_SUBFRAME_SIZE); - float pitch_factor = - p->pitch_gain[4] * - (p->cur_frame_mode == MODE_12k2 ? - 0.25 * FFMIN(p->pitch_gain[4], 1.0) : - 0.5 * FFMIN(p->pitch_gain[4], SHARP_MAX)); - - for (i = 0; i < AMR_SUBFRAME_SIZE; i++) - excitation[i] += pitch_factor * p->pitch_vector[i]; - - ff_scale_vector_to_given_sum_of_squares(excitation, excitation, energy, - AMR_SUBFRAME_SIZE); - } - - p->celpf_ctx.celp_lp_synthesis_filterf(samples, lpc, excitation, - AMR_SUBFRAME_SIZE, - LP_FILTER_ORDER); - - // detect overflow - for (i = 0; i < AMR_SUBFRAME_SIZE; i++) - if (fabsf(samples[i]) > AMR_SAMPLE_BOUND) { - return 1; - } - - return 0; -} - -/// @} - - -/// @name AMR update functions -/// @{ - -/** - * Update buffers and history at the end of decoding a subframe. - * - * @param p pointer to the AMRContext - */ -static void update_state(AMRContext *p) -{ - memcpy(p->prev_lsp_sub4, p->lsp[3], LP_FILTER_ORDER * sizeof(p->lsp[3][0])); - - memmove(&p->excitation_buf[0], &p->excitation_buf[AMR_SUBFRAME_SIZE], - (PITCH_DELAY_MAX + LP_FILTER_ORDER + 1) * sizeof(float)); - - memmove(&p->pitch_gain[0], &p->pitch_gain[1], 4 * sizeof(float)); - memmove(&p->fixed_gain[0], &p->fixed_gain[1], 4 * sizeof(float)); - - memmove(&p->samples_in[0], &p->samples_in[AMR_SUBFRAME_SIZE], - LP_FILTER_ORDER * sizeof(float)); -} - -/// @} - - -/// @name AMR Postprocessing functions -/// @{ - -/** - * Get the tilt factor of a formant filter from its transfer function - * - * @param p The Context - * @param lpc_n LP_FILTER_ORDER coefficients of the numerator - * @param lpc_d LP_FILTER_ORDER coefficients of the denominator - */ -static float tilt_factor(AMRContext *p, float *lpc_n, float *lpc_d) -{ - float rh0, rh1; // autocorrelation at lag 0 and 1 - - // LP_FILTER_ORDER prior zeros are needed for ff_celp_lp_synthesis_filterf - float impulse_buffer[LP_FILTER_ORDER + AMR_TILT_RESPONSE] = { 0 }; - float *hf = impulse_buffer + LP_FILTER_ORDER; // start of impulse response - - hf[0] = 1.0; - memcpy(hf + 1, lpc_n, sizeof(float) * LP_FILTER_ORDER); - p->celpf_ctx.celp_lp_synthesis_filterf(hf, lpc_d, hf, - AMR_TILT_RESPONSE, - LP_FILTER_ORDER); - - rh0 = p->celpm_ctx.dot_productf(hf, hf, AMR_TILT_RESPONSE); - rh1 = p->celpm_ctx.dot_productf(hf, hf + 1, AMR_TILT_RESPONSE - 1); - - // The spec only specifies this check for 12.2 and 10.2 kbit/s - // modes. But in the ref source the tilt is always non-negative. - return rh1 >= 0.0 ? rh1 / rh0 * AMR_TILT_GAMMA_T : 0.0; -} - -/** - * Perform adaptive post-filtering to enhance the quality of the speech. - * See section 6.2.1. 
- * - * @param p pointer to the AMRContext - * @param lpc interpolated LP coefficients for this subframe - * @param buf_out output of the filter - */ -static void postfilter(AMRContext *p, float *lpc, float *buf_out) -{ - int i; - float *samples = p->samples_in + LP_FILTER_ORDER; // Start of input - - float speech_gain = p->celpm_ctx.dot_productf(samples, samples, - AMR_SUBFRAME_SIZE); - - float pole_out[AMR_SUBFRAME_SIZE + LP_FILTER_ORDER]; // Output of pole filter - const float *gamma_n, *gamma_d; // Formant filter factor table - float lpc_n[LP_FILTER_ORDER], lpc_d[LP_FILTER_ORDER]; // Transfer function coefficients - - if (p->cur_frame_mode == MODE_12k2 || p->cur_frame_mode == MODE_10k2) { - gamma_n = ff_pow_0_7; - gamma_d = ff_pow_0_75; - } else { - gamma_n = ff_pow_0_55; - gamma_d = ff_pow_0_7; - } - - for (i = 0; i < LP_FILTER_ORDER; i++) { - lpc_n[i] = lpc[i] * gamma_n[i]; - lpc_d[i] = lpc[i] * gamma_d[i]; - } - - memcpy(pole_out, p->postfilter_mem, sizeof(float) * LP_FILTER_ORDER); - p->celpf_ctx.celp_lp_synthesis_filterf(pole_out + LP_FILTER_ORDER, lpc_d, samples, - AMR_SUBFRAME_SIZE, LP_FILTER_ORDER); - memcpy(p->postfilter_mem, pole_out + AMR_SUBFRAME_SIZE, - sizeof(float) * LP_FILTER_ORDER); - - p->celpf_ctx.celp_lp_zero_synthesis_filterf(buf_out, lpc_n, - pole_out + LP_FILTER_ORDER, - AMR_SUBFRAME_SIZE, LP_FILTER_ORDER); - - ff_tilt_compensation(&p->tilt_mem, tilt_factor(p, lpc_n, lpc_d), buf_out, - AMR_SUBFRAME_SIZE); - - ff_adaptive_gain_control(buf_out, buf_out, speech_gain, AMR_SUBFRAME_SIZE, - AMR_AGC_ALPHA, &p->postfilter_agc); -} - -/// @} - -static int amrnb_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame_ptr, AVPacket *avpkt) -{ - - AMRChannelsContext *s = avctx->priv_data; // pointer to private data - const uint8_t *buf = avpkt->data; - int buf_size = avpkt->size; - int ret; - - /* get output buffer */ - frame->nb_samples = AMR_BLOCK_SIZE; - if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) - return ret; - - for (int ch = 0; ch < avctx->ch_layout.nb_channels; ch++) { - AMRContext *p = &s->ch[ch]; - float fixed_gain_factor; - AMRFixed fixed_sparse = {0}; // fixed vector up to anti-sparseness processing - float spare_vector[AMR_SUBFRAME_SIZE]; // extra stack space to hold result from anti-sparseness processing - float synth_fixed_gain; // the fixed gain that synthesis should use - const float *synth_fixed_vector; // pointer to the fixed vector that synthesis should use - float *buf_out = (float *)frame->extended_data[ch]; - int channel_size; - int i, subframe; - - p->cur_frame_mode = unpack_bitstream(p, buf, buf_size); - if (p->cur_frame_mode == NO_DATA) { - av_log(avctx, AV_LOG_ERROR, "Corrupt bitstream\n"); - return AVERROR_INVALIDDATA; - } - if (p->cur_frame_mode == MODE_DTX) { - avpriv_report_missing_feature(avctx, "dtx mode"); - av_log(avctx, AV_LOG_INFO, "Note: libopencore_amrnb supports dtx\n"); - return AVERROR_PATCHWELCOME; - } - - channel_size = frame_sizes_nb[p->cur_frame_mode] + 1; // +7 for rounding and +8 for TOC - if (p->cur_frame_mode == MODE_12k2) { - lsf2lsp_5(p); - } else - lsf2lsp_3(p); - - for (i = 0; i < 4; i++) - ff_acelp_lspd2lpc(p->lsp[i], p->lpc[i], 5); - - for (subframe = 0; subframe < 4; subframe++) { - const AMRNBSubframe *amr_subframe = &p->frame.subframe[subframe]; - - decode_pitch_vector(p, amr_subframe, subframe); - - decode_fixed_sparse(&fixed_sparse, amr_subframe->pulses, - p->cur_frame_mode, subframe); - - // The fixed gain (section 6.1.3) depends on the fixed vector - // (section 6.1.2), but the fixed vector 
calculation uses - // pitch sharpening based on the on the pitch gain (section 6.1.3). - // So the correct order is: pitch gain, pitch sharpening, fixed gain. - decode_gains(p, amr_subframe, p->cur_frame_mode, subframe, - &fixed_gain_factor); - - pitch_sharpening(p, subframe, p->cur_frame_mode, &fixed_sparse); - - if (fixed_sparse.pitch_lag == 0) { - av_log(avctx, AV_LOG_ERROR, "The file is corrupted, pitch_lag = 0 is not allowed\n"); - return AVERROR_INVALIDDATA; - } - ff_set_fixed_vector(p->fixed_vector, &fixed_sparse, 1.0, - AMR_SUBFRAME_SIZE); - - p->fixed_gain[4] = - ff_amr_set_fixed_gain(fixed_gain_factor, - p->celpm_ctx.dot_productf(p->fixed_vector, - p->fixed_vector, - AMR_SUBFRAME_SIZE) / - AMR_SUBFRAME_SIZE, - p->prediction_error, - energy_mean[p->cur_frame_mode], energy_pred_fac); - - // The excitation feedback is calculated without any processing such - // as fixed gain smoothing. This isn't mentioned in the specification. - for (i = 0; i < AMR_SUBFRAME_SIZE; i++) - p->excitation[i] *= p->pitch_gain[4]; - ff_set_fixed_vector(p->excitation, &fixed_sparse, p->fixed_gain[4], - AMR_SUBFRAME_SIZE); - - // In the ref decoder, excitation is stored with no fractional bits. - // This step prevents buzz in silent periods. The ref encoder can - // emit long sequences with pitch factor greater than one. This - // creates unwanted feedback if the excitation vector is nonzero. - // (e.g. test sequence T19_795.COD in 3GPP TS 26.074) - for (i = 0; i < AMR_SUBFRAME_SIZE; i++) - p->excitation[i] = truncf(p->excitation[i]); - - // Smooth fixed gain. - // The specification is ambiguous, but in the reference source, the - // smoothed value is NOT fed back into later fixed gain smoothing. - synth_fixed_gain = fixed_gain_smooth(p, p->lsf_q[subframe], - p->lsf_avg, p->cur_frame_mode); - - synth_fixed_vector = anti_sparseness(p, &fixed_sparse, p->fixed_vector, - synth_fixed_gain, spare_vector); - - if (synthesis(p, p->lpc[subframe], synth_fixed_gain, - synth_fixed_vector, &p->samples_in[LP_FILTER_ORDER], 0)) - // overflow detected -> rerun synthesis scaling pitch vector down - // by a factor of 4, skipping pitch vector contribution emphasis - // and adaptive gain control - synthesis(p, p->lpc[subframe], synth_fixed_gain, - synth_fixed_vector, &p->samples_in[LP_FILTER_ORDER], 1); - - postfilter(p, p->lpc[subframe], buf_out + subframe * AMR_SUBFRAME_SIZE); - - // update buffers and history - ff_clear_fixed_vector(p->fixed_vector, &fixed_sparse, AMR_SUBFRAME_SIZE); - update_state(p); - } - - p->acelpf_ctx.acelp_apply_order_2_transfer_function(buf_out, - buf_out, highpass_zeros, - highpass_poles, - highpass_gain * AMR_SAMPLE_SCALE, - p->high_pass_mem, AMR_BLOCK_SIZE); - - /* Update averaged lsf vector (used for fixed gain smoothing). - * - * Note that lsf_avg should not incorporate the current frame's LSFs - * for fixed_gain_smooth. - * The specification has an incorrect formula: the reference decoder uses - * qbar(n-1) rather than qbar(n) in section 6.1(4) equation 71. 
*/ - p->acelpv_ctx.weighted_vector_sumf(p->lsf_avg, p->lsf_avg, p->lsf_q[3], - 0.84, 0.16, LP_FILTER_ORDER); - buf += channel_size; - buf_size -= channel_size; - } - - *got_frame_ptr = 1; - - return buf - avpkt->data; -} - - -const FFCodec ff_amrnb_decoder = { - .p.name = "amrnb", - CODEC_LONG_NAME("AMR-NB (Adaptive Multi-Rate NarrowBand)"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_AMR_NB, - .priv_data_size = sizeof(AMRChannelsContext), - .init = amrnb_decode_init, - FF_CODEC_DECODE_CB(amrnb_decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_CHANNEL_CONF, - .p.sample_fmts = (const enum AVSampleFormat[]){ AV_SAMPLE_FMT_FLTP, - AV_SAMPLE_FMT_NONE }, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/atrac3plusdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/atrac3plusdec.c deleted file mode 100644 index aa4d42f44a39140ca3f6dda62127db52538e9c34..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/atrac3plusdec.c +++ /dev/null @@ -1,443 +0,0 @@ -/* - * ATRAC3+ compatible decoder - * - * Copyright (c) 2010-2013 Maxim Poliakovski - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Sony ATRAC3+ compatible decoder. - * - * Container formats used to store its data: - * RIFF WAV (.at3) and Sony OpenMG (.oma, .aa3). - * - * Technical description of this codec can be found here: - * http://wiki.multimedia.cx/index.php?title=ATRAC3plus - * - * Kudos to Benjamin Larsson and Michael Karcher - * for their precious technical help! 
- */ - -#include -#include - -#include "libavutil/channel_layout.h" -#include "libavutil/float_dsp.h" -#include "libavutil/mem_internal.h" -#include "libavutil/thread.h" -#include "avcodec.h" -#include "codec_internal.h" -#include "decode.h" -#include "get_bits.h" -#include "atrac.h" -#include "atrac3plus.h" - -static const uint8_t channel_map[8][8] = { - { 0, }, - { 0, 1, }, - { 0, 1, 2, }, - { 0, 1, 2, 3, }, - { 0, }, - { 0, 1, 2, 4, 5, 3, }, - { 0, 1, 2, 4, 5, 6, 3, }, - { 0, 1, 2, 4, 5, 6, 7, 3, }, -}; - -typedef struct ATRAC3PContext { - GetBitContext gb; - AVFloatDSPContext *fdsp; - - DECLARE_ALIGNED(32, float, samples)[2][ATRAC3P_FRAME_SAMPLES]; ///< quantized MDCT spectrum - DECLARE_ALIGNED(32, float, mdct_buf)[2][ATRAC3P_FRAME_SAMPLES]; ///< output of the IMDCT - DECLARE_ALIGNED(32, float, time_buf)[2][ATRAC3P_FRAME_SAMPLES]; ///< output of the gain compensation - DECLARE_ALIGNED(32, float, outp_buf)[2][ATRAC3P_FRAME_SAMPLES]; - - AtracGCContext gainc_ctx; ///< gain compensation context - AVTXContext *mdct_ctx; - av_tx_fn mdct_fn; - AVTXContext *ipqf_dct_ctx; ///< IDCT context used by IPQF - av_tx_fn ipqf_dct_fn; - - Atrac3pChanUnitCtx *ch_units; ///< global channel units - - int num_channel_blocks; ///< number of channel blocks - uint8_t channel_blocks[5]; ///< channel configuration descriptor - const uint8_t *channel_map; ///< channel layout map -} ATRAC3PContext; - -static av_cold int atrac3p_decode_close(AVCodecContext *avctx) -{ - ATRAC3PContext *ctx = avctx->priv_data; - - av_freep(&ctx->ch_units); - av_freep(&ctx->fdsp); - - av_tx_uninit(&ctx->mdct_ctx); - av_tx_uninit(&ctx->ipqf_dct_ctx); - - return 0; -} - -static av_cold int set_channel_params(ATRAC3PContext *ctx, - AVCodecContext *avctx) -{ - int channels = avctx->ch_layout.nb_channels; - memset(ctx->channel_blocks, 0, sizeof(ctx->channel_blocks)); - - av_channel_layout_uninit(&avctx->ch_layout); - switch (channels) { - case 1: - avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_MONO; - ctx->num_channel_blocks = 1; - ctx->channel_blocks[0] = CH_UNIT_MONO; - break; - case 2: - avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO; - ctx->num_channel_blocks = 1; - ctx->channel_blocks[0] = CH_UNIT_STEREO; - break; - case 3: - avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_SURROUND; - ctx->num_channel_blocks = 2; - ctx->channel_blocks[0] = CH_UNIT_STEREO; - ctx->channel_blocks[1] = CH_UNIT_MONO; - break; - case 4: - avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_4POINT0; - ctx->num_channel_blocks = 3; - ctx->channel_blocks[0] = CH_UNIT_STEREO; - ctx->channel_blocks[1] = CH_UNIT_MONO; - ctx->channel_blocks[2] = CH_UNIT_MONO; - break; - case 6: - avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_5POINT1_BACK; - ctx->num_channel_blocks = 4; - ctx->channel_blocks[0] = CH_UNIT_STEREO; - ctx->channel_blocks[1] = CH_UNIT_MONO; - ctx->channel_blocks[2] = CH_UNIT_STEREO; - ctx->channel_blocks[3] = CH_UNIT_MONO; - break; - case 7: - avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_6POINT1_BACK; - ctx->num_channel_blocks = 5; - ctx->channel_blocks[0] = CH_UNIT_STEREO; - ctx->channel_blocks[1] = CH_UNIT_MONO; - ctx->channel_blocks[2] = CH_UNIT_STEREO; - ctx->channel_blocks[3] = CH_UNIT_MONO; - ctx->channel_blocks[4] = CH_UNIT_MONO; - break; - case 8: - avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_7POINT1; - ctx->num_channel_blocks = 5; - ctx->channel_blocks[0] = CH_UNIT_STEREO; - ctx->channel_blocks[1] = CH_UNIT_MONO; - ctx->channel_blocks[2] = CH_UNIT_STEREO; - ctx->channel_blocks[3] = 
CH_UNIT_STEREO; - ctx->channel_blocks[4] = CH_UNIT_MONO; - break; - default: - av_log(avctx, AV_LOG_ERROR, - "Unsupported channel count: %d!\n", channels); - return AVERROR_INVALIDDATA; - } - - ctx->channel_map = channel_map[channels - 1]; - - return 0; -} - -static av_cold void atrac3p_init_static(void) -{ - ff_atrac3p_init_vlcs(); - ff_atrac3p_init_dsp_static(); -} - -static av_cold int atrac3p_decode_init(AVCodecContext *avctx) -{ - static AVOnce init_static_once = AV_ONCE_INIT; - ATRAC3PContext *ctx = avctx->priv_data; - float scale; - int i, ch, ret; - - if (!avctx->block_align) { - av_log(avctx, AV_LOG_ERROR, "block_align is not set\n"); - return AVERROR(EINVAL); - } - - /* initialize IPQF */ - scale = 32.0 / 32768.0; - ret = av_tx_init(&ctx->ipqf_dct_ctx, &ctx->ipqf_dct_fn, AV_TX_FLOAT_MDCT, - 1, 16, &scale, 0); - if (ret < 0) - return ret; - - scale = -1.0f; - ret = av_tx_init(&ctx->mdct_ctx, &ctx->mdct_fn, AV_TX_FLOAT_MDCT, - 1, 128, &scale, AV_TX_FULL_IMDCT); - if (ret < 0) - return ret; - - ff_atrac_init_gain_compensation(&ctx->gainc_ctx, 6, 2); - - if ((ret = set_channel_params(ctx, avctx)) < 0) - return ret; - - ctx->ch_units = av_calloc(ctx->num_channel_blocks, sizeof(*ctx->ch_units)); - ctx->fdsp = avpriv_float_dsp_alloc(avctx->flags & AV_CODEC_FLAG_BITEXACT); - - if (!ctx->ch_units || !ctx->fdsp) { - return AVERROR(ENOMEM); - } - - for (i = 0; i < ctx->num_channel_blocks; i++) { - for (ch = 0; ch < 2; ch++) { - ctx->ch_units[i].channels[ch].ch_num = ch; - ctx->ch_units[i].channels[ch].wnd_shape = &ctx->ch_units[i].channels[ch].wnd_shape_hist[0][0]; - ctx->ch_units[i].channels[ch].wnd_shape_prev = &ctx->ch_units[i].channels[ch].wnd_shape_hist[1][0]; - ctx->ch_units[i].channels[ch].gain_data = &ctx->ch_units[i].channels[ch].gain_data_hist[0][0]; - ctx->ch_units[i].channels[ch].gain_data_prev = &ctx->ch_units[i].channels[ch].gain_data_hist[1][0]; - ctx->ch_units[i].channels[ch].tones_info = &ctx->ch_units[i].channels[ch].tones_info_hist[0][0]; - ctx->ch_units[i].channels[ch].tones_info_prev = &ctx->ch_units[i].channels[ch].tones_info_hist[1][0]; - } - - ctx->ch_units[i].waves_info = &ctx->ch_units[i].wave_synth_hist[0]; - ctx->ch_units[i].waves_info_prev = &ctx->ch_units[i].wave_synth_hist[1]; - } - - avctx->sample_fmt = AV_SAMPLE_FMT_FLTP; - - ff_thread_once(&init_static_once, atrac3p_init_static); - - return 0; -} - -static void decode_residual_spectrum(ATRAC3PContext *ctx, Atrac3pChanUnitCtx *ch_unit, - float out[2][ATRAC3P_FRAME_SAMPLES], - int num_channels, - AVCodecContext *avctx) -{ - int i, sb, ch, qu, nspeclines, RNG_index; - float *dst, q; - int16_t *src; - /* calculate RNG table index for each subband */ - int sb_RNG_index[ATRAC3P_SUBBANDS] = { 0 }; - - if (ch_unit->mute_flag) { - for (ch = 0; ch < num_channels; ch++) - memset(out[ch], 0, ATRAC3P_FRAME_SAMPLES * sizeof(*out[ch])); - return; - } - - for (qu = 0, RNG_index = 0; qu < ch_unit->used_quant_units; qu++) - RNG_index += ch_unit->channels[0].qu_sf_idx[qu] + - ch_unit->channels[1].qu_sf_idx[qu]; - - for (sb = 0; sb < ch_unit->num_coded_subbands; sb++, RNG_index += 128) - sb_RNG_index[sb] = RNG_index & 0x3FC; - - /* inverse quant and power compensation */ - for (ch = 0; ch < num_channels; ch++) { - /* clear channel's residual spectrum */ - memset(out[ch], 0, ATRAC3P_FRAME_SAMPLES * sizeof(*out[ch])); - - for (qu = 0; qu < ch_unit->used_quant_units; qu++) { - src = &ch_unit->channels[ch].spectrum[ff_atrac3p_qu_to_spec_pos[qu]]; - dst = &out[ch][ff_atrac3p_qu_to_spec_pos[qu]]; - nspeclines = 
ff_atrac3p_qu_to_spec_pos[qu + 1] - - ff_atrac3p_qu_to_spec_pos[qu]; - - if (ch_unit->channels[ch].qu_wordlen[qu] > 0) { - q = ff_atrac3p_sf_tab[ch_unit->channels[ch].qu_sf_idx[qu]] * - ff_atrac3p_mant_tab[ch_unit->channels[ch].qu_wordlen[qu]]; - for (i = 0; i < nspeclines; i++) - dst[i] = src[i] * q; - } - } - - for (sb = 0; sb < ch_unit->num_coded_subbands; sb++) - ff_atrac3p_power_compensation(ch_unit, ctx->fdsp, ch, &out[ch][0], - sb_RNG_index[sb], sb); - } - - if (ch_unit->unit_type == CH_UNIT_STEREO) { - for (sb = 0; sb < ch_unit->num_coded_subbands; sb++) { - if (ch_unit->swap_channels[sb]) { - for (i = 0; i < ATRAC3P_SUBBAND_SAMPLES; i++) - FFSWAP(float, out[0][sb * ATRAC3P_SUBBAND_SAMPLES + i], - out[1][sb * ATRAC3P_SUBBAND_SAMPLES + i]); - } - - /* flip coefficients' sign if requested */ - if (ch_unit->negate_coeffs[sb]) - for (i = 0; i < ATRAC3P_SUBBAND_SAMPLES; i++) - out[1][sb * ATRAC3P_SUBBAND_SAMPLES + i] = -(out[1][sb * ATRAC3P_SUBBAND_SAMPLES + i]); - } - } -} - -static void reconstruct_frame(ATRAC3PContext *ctx, Atrac3pChanUnitCtx *ch_unit, - int num_channels, AVCodecContext *avctx) -{ - int ch, sb; - - for (ch = 0; ch < num_channels; ch++) { - for (sb = 0; sb < ch_unit->num_subbands; sb++) { - /* inverse transform and windowing */ - ff_atrac3p_imdct(ctx->fdsp, ctx->mdct_ctx, ctx->mdct_fn, - &ctx->samples[ch][sb * ATRAC3P_SUBBAND_SAMPLES], - &ctx->mdct_buf[ch][sb * ATRAC3P_SUBBAND_SAMPLES], - (ch_unit->channels[ch].wnd_shape_prev[sb] << 1) + - ch_unit->channels[ch].wnd_shape[sb], sb); - - /* gain compensation and overlapping */ - ff_atrac_gain_compensation(&ctx->gainc_ctx, - &ctx->mdct_buf[ch][sb * ATRAC3P_SUBBAND_SAMPLES], - &ch_unit->prev_buf[ch][sb * ATRAC3P_SUBBAND_SAMPLES], - &ch_unit->channels[ch].gain_data_prev[sb], - &ch_unit->channels[ch].gain_data[sb], - ATRAC3P_SUBBAND_SAMPLES, - &ctx->time_buf[ch][sb * ATRAC3P_SUBBAND_SAMPLES]); - } - - /* zero unused subbands in both output and overlapping buffers */ - memset(&ch_unit->prev_buf[ch][ch_unit->num_subbands * ATRAC3P_SUBBAND_SAMPLES], - 0, - (ATRAC3P_SUBBANDS - ch_unit->num_subbands) * - ATRAC3P_SUBBAND_SAMPLES * - sizeof(ch_unit->prev_buf[ch][ch_unit->num_subbands * ATRAC3P_SUBBAND_SAMPLES])); - memset(&ctx->time_buf[ch][ch_unit->num_subbands * ATRAC3P_SUBBAND_SAMPLES], - 0, - (ATRAC3P_SUBBANDS - ch_unit->num_subbands) * - ATRAC3P_SUBBAND_SAMPLES * - sizeof(ctx->time_buf[ch][ch_unit->num_subbands * ATRAC3P_SUBBAND_SAMPLES])); - - /* resynthesize and add tonal signal */ - if (ch_unit->waves_info->tones_present || - ch_unit->waves_info_prev->tones_present) { - for (sb = 0; sb < ch_unit->num_subbands; sb++) - if (ch_unit->channels[ch].tones_info[sb].num_wavs || - ch_unit->channels[ch].tones_info_prev[sb].num_wavs) { - ff_atrac3p_generate_tones(ch_unit, ctx->fdsp, ch, sb, - &ctx->time_buf[ch][sb * 128]); - } - } - - /* subband synthesis and acoustic signal output */ - ff_atrac3p_ipqf(ctx->ipqf_dct_ctx, ctx->ipqf_dct_fn, - &ch_unit->ipqf_ctx[ch], &ctx->time_buf[ch][0], - &ctx->outp_buf[ch][0]); - } - - /* swap window shape and gain control buffers. 
*/ - for (ch = 0; ch < num_channels; ch++) { - FFSWAP(uint8_t *, ch_unit->channels[ch].wnd_shape, - ch_unit->channels[ch].wnd_shape_prev); - FFSWAP(AtracGainInfo *, ch_unit->channels[ch].gain_data, - ch_unit->channels[ch].gain_data_prev); - FFSWAP(Atrac3pWavesData *, ch_unit->channels[ch].tones_info, - ch_unit->channels[ch].tones_info_prev); - } - - FFSWAP(Atrac3pWaveSynthParams *, ch_unit->waves_info, ch_unit->waves_info_prev); -} - -static int atrac3p_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame_ptr, AVPacket *avpkt) -{ - ATRAC3PContext *ctx = avctx->priv_data; - int i, ret, ch_unit_id, ch_block = 0, out_ch_index = 0, channels_to_process; - float **samples_p = (float **)frame->extended_data; - - frame->nb_samples = ATRAC3P_FRAME_SAMPLES; - if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) - return ret; - - if ((ret = init_get_bits8(&ctx->gb, avpkt->data, avpkt->size)) < 0) - return ret; - - if (get_bits1(&ctx->gb)) { - av_log(avctx, AV_LOG_ERROR, "Invalid start bit!\n"); - return AVERROR_INVALIDDATA; - } - - while (get_bits_left(&ctx->gb) >= 2 && - (ch_unit_id = get_bits(&ctx->gb, 2)) != CH_UNIT_TERMINATOR) { - if (ch_unit_id == CH_UNIT_EXTENSION) { - avpriv_report_missing_feature(avctx, "Channel unit extension"); - return AVERROR_PATCHWELCOME; - } - if (ch_block >= ctx->num_channel_blocks || - ctx->channel_blocks[ch_block] != ch_unit_id) { - av_log(avctx, AV_LOG_ERROR, - "Frame data doesn't match channel configuration!\n"); - return AVERROR_INVALIDDATA; - } - - ctx->ch_units[ch_block].unit_type = ch_unit_id; - channels_to_process = ch_unit_id + 1; - - if ((ret = ff_atrac3p_decode_channel_unit(&ctx->gb, - &ctx->ch_units[ch_block], - channels_to_process, - avctx)) < 0) - return ret; - - decode_residual_spectrum(ctx, &ctx->ch_units[ch_block], ctx->samples, - channels_to_process, avctx); - reconstruct_frame(ctx, &ctx->ch_units[ch_block], - channels_to_process, avctx); - - for (i = 0; i < channels_to_process; i++) - memcpy(samples_p[ctx->channel_map[out_ch_index + i]], ctx->outp_buf[i], - ATRAC3P_FRAME_SAMPLES * sizeof(**samples_p)); - - ch_block++; - out_ch_index += channels_to_process; - } - - *got_frame_ptr = 1; - - return avctx->codec_id == AV_CODEC_ID_ATRAC3P ? 
FFMIN(avctx->block_align, avpkt->size) : avpkt->size; -} - -const FFCodec ff_atrac3p_decoder = { - .p.name = "atrac3plus", - CODEC_LONG_NAME("ATRAC3+ (Adaptive TRansform Acoustic Coding 3+)"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_ATRAC3P, - .p.capabilities = AV_CODEC_CAP_DR1, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, - .priv_data_size = sizeof(ATRAC3PContext), - .init = atrac3p_decode_init, - .close = atrac3p_decode_close, - FF_CODEC_DECODE_CB(atrac3p_decode_frame), -}; - -const FFCodec ff_atrac3pal_decoder = { - .p.name = "atrac3plusal", - CODEC_LONG_NAME("ATRAC3+ AL (Adaptive TRansform Acoustic Coding 3+ Advanced Lossless)"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_ATRAC3PAL, - .p.capabilities = AV_CODEC_CAP_DR1, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, - .priv_data_size = sizeof(ATRAC3PContext), - .init = atrac3p_decode_init, - .close = atrac3p_decode_close, - FF_CODEC_DECODE_CB(atrac3p_decode_frame), -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/hevcdsp_lsx.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/hevcdsp_lsx.h deleted file mode 100644 index 0d54196cafe434f0bd5e90416cc593b33575427d..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/hevcdsp_lsx.h +++ /dev/null @@ -1,230 +0,0 @@ -/* - * Copyright (c) 2022 Loongson Technology Corporation Limited - * Contributed by Lu Wang - * Hao Chen - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_LOONGARCH_HEVCDSP_LSX_H -#define AVCODEC_LOONGARCH_HEVCDSP_LSX_H - -#include "libavcodec/hevcdsp.h" - -#define MC(PEL, DIR, WIDTH) \ -void ff_hevc_put_hevc_##PEL##_##DIR##WIDTH##_8_lsx(int16_t *dst, \ - const uint8_t *src, \ - ptrdiff_t src_stride, \ - int height, \ - intptr_t mx, \ - intptr_t my, \ - int width) - -MC(pel, pixels, 4); -MC(pel, pixels, 6); -MC(pel, pixels, 8); -MC(pel, pixels, 12); -MC(pel, pixels, 16); -MC(pel, pixels, 24); -MC(pel, pixels, 32); -MC(pel, pixels, 48); -MC(pel, pixels, 64); - -MC(qpel, h, 4); -MC(qpel, h, 8); -MC(qpel, h, 12); -MC(qpel, h, 16); -MC(qpel, h, 24); -MC(qpel, h, 32); -MC(qpel, h, 48); -MC(qpel, h, 64); - -MC(qpel, v, 4); -MC(qpel, v, 8); -MC(qpel, v, 12); -MC(qpel, v, 16); -MC(qpel, v, 24); -MC(qpel, v, 32); -MC(qpel, v, 48); -MC(qpel, v, 64); - -MC(qpel, hv, 4); -MC(qpel, hv, 8); -MC(qpel, hv, 12); -MC(qpel, hv, 16); -MC(qpel, hv, 24); -MC(qpel, hv, 32); -MC(qpel, hv, 48); -MC(qpel, hv, 64); - -MC(epel, h, 32); - -MC(epel, v, 16); -MC(epel, v, 24); -MC(epel, v, 32); - -MC(epel, hv, 8); -MC(epel, hv, 12); -MC(epel, hv, 16); -MC(epel, hv, 24); -MC(epel, hv, 32); - -#undef MC - -#define BI_MC(PEL, DIR, WIDTH) \ -void ff_hevc_put_hevc_bi_##PEL##_##DIR##WIDTH##_8_lsx(uint8_t *dst, \ - ptrdiff_t dst_stride, \ - const uint8_t *src, \ - ptrdiff_t src_stride, \ - const int16_t *src_16bit, \ - int height, \ - intptr_t mx, \ - intptr_t my, \ - int width) - -BI_MC(pel, pixels, 4); -BI_MC(pel, pixels, 6); -BI_MC(pel, pixels, 8); -BI_MC(pel, pixels, 12); -BI_MC(pel, pixels, 16); -BI_MC(pel, pixels, 24); -BI_MC(pel, pixels, 32); -BI_MC(pel, pixels, 48); -BI_MC(pel, pixels, 64); - -BI_MC(qpel, h, 16); -BI_MC(qpel, h, 24); -BI_MC(qpel, h, 32); -BI_MC(qpel, h, 48); -BI_MC(qpel, h, 64); - -BI_MC(qpel, v, 8); -BI_MC(qpel, v, 16); -BI_MC(qpel, v, 24); -BI_MC(qpel, v, 32); -BI_MC(qpel, v, 48); -BI_MC(qpel, v, 64); - -BI_MC(qpel, hv, 8); -BI_MC(qpel, hv, 16); -BI_MC(qpel, hv, 24); -BI_MC(qpel, hv, 32); -BI_MC(qpel, hv, 48); -BI_MC(qpel, hv, 64); - -BI_MC(epel, h, 24); -BI_MC(epel, h, 32); - -BI_MC(epel, v, 12); -BI_MC(epel, v, 16); -BI_MC(epel, v, 24); -BI_MC(epel, v, 32); - -BI_MC(epel, hv, 6); -BI_MC(epel, hv, 8); -BI_MC(epel, hv, 16); -BI_MC(epel, hv, 24); -BI_MC(epel, hv, 32); - -#undef BI_MC - -#define UNI_MC(PEL, DIR, WIDTH) \ -void ff_hevc_put_hevc_uni_##PEL##_##DIR##WIDTH##_8_lsx(uint8_t *dst, \ - ptrdiff_t dst_stride, \ - const uint8_t *src, \ - ptrdiff_t src_stride, \ - int height, \ - intptr_t mx, \ - intptr_t my, \ - int width) - -UNI_MC(qpel, h, 64); - -UNI_MC(qpel, v, 24); -UNI_MC(qpel, v, 32); -UNI_MC(qpel, v, 48); -UNI_MC(qpel, v, 64); - -UNI_MC(qpel, hv, 8); -UNI_MC(qpel, hv, 16); -UNI_MC(qpel, hv, 24); -UNI_MC(qpel, hv, 32); -UNI_MC(qpel, hv, 48); -UNI_MC(qpel, hv, 64); - -UNI_MC(epel, v, 24); -UNI_MC(epel, v, 32); - -UNI_MC(epel, hv, 8); -UNI_MC(epel, hv, 12); -UNI_MC(epel, hv, 16); -UNI_MC(epel, hv, 24); -UNI_MC(epel, hv, 32); - -#undef UNI_MC - -#define UNI_W_MC(PEL, DIR, WIDTH) \ -void ff_hevc_put_hevc_uni_w_##PEL##_##DIR##WIDTH##_8_lsx(uint8_t *dst, \ - ptrdiff_t \ - dst_stride, \ - const uint8_t *src, \ - ptrdiff_t \ - src_stride, \ - int height, \ - int denom, \ - int weight, \ - int offset, \ - intptr_t mx, \ - intptr_t my, \ - int width) - -UNI_W_MC(qpel, hv, 8); -UNI_W_MC(qpel, hv, 16); 
-UNI_W_MC(qpel, hv, 24); -UNI_W_MC(qpel, hv, 32); -UNI_W_MC(qpel, hv, 48); -UNI_W_MC(qpel, hv, 64); - -#undef UNI_W_MC - -void ff_hevc_loop_filter_luma_h_8_lsx(uint8_t *src, ptrdiff_t stride, - int32_t beta, const int32_t *tc, - const uint8_t *p_is_pcm, const uint8_t *q_is_pcm); - -void ff_hevc_loop_filter_luma_v_8_lsx(uint8_t *src, ptrdiff_t stride, - int32_t beta, const int32_t *tc, - const uint8_t *p_is_pcm, const uint8_t *q_is_pcm); - -void ff_hevc_loop_filter_chroma_h_8_lsx(uint8_t *src, ptrdiff_t stride, - const int32_t *tc, const uint8_t *p_is_pcm, - const uint8_t *q_is_pcm); - -void ff_hevc_loop_filter_chroma_v_8_lsx(uint8_t *src, ptrdiff_t stride, - const int32_t *tc, const uint8_t *p_is_pcm, - const uint8_t *q_is_pcm); - -void ff_hevc_sao_edge_filter_8_lsx(uint8_t *dst, const uint8_t *src, - ptrdiff_t stride_dst, - const int16_t *sao_offset_val, - int eo, int width, int height); - -void ff_hevc_idct_4x4_lsx(int16_t *coeffs, int col_limit); -void ff_hevc_idct_8x8_lsx(int16_t *coeffs, int col_limit); -void ff_hevc_idct_16x16_lsx(int16_t *coeffs, int col_limit); -void ff_hevc_idct_32x32_lsx(int16_t *coeffs, int col_limit); - -#endif // #ifndef AVCODEC_LOONGARCH_HEVCDSP_LSX_H diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/fmtconvert_mips.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/fmtconvert_mips.c deleted file mode 100644 index c39e85357522a475ceffa48a68ebc79820eee121..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/fmtconvert_mips.c +++ /dev/null @@ -1,141 +0,0 @@ -/* - * Format Conversion Utils for MIPS - * - * Copyright (c) 2012 - * MIPS Technologies, Inc., California. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * 3. Neither the name of the MIPS Technologies, Inc., nor the names of is - * contributors may be used to endorse or promote products derived from - * this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE MIPS TECHNOLOGIES, INC. ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE MIPS TECHNOLOGIES, INC. BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. - * - * Author: Zoran Lukic (zoranl@mips.com) - * Author: Nedeljko Babic (nbabic@mips.com) - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ -#include "config.h" -#include "libavutil/attributes.h" -#include "libavcodec/fmtconvert.h" -#include "libavutil/mips/asmdefs.h" - -#if HAVE_INLINE_ASM -static void int32_to_float_fmul_scalar_mips(float *dst, const int *src, - float mul, int len) -{ - /* - * variables used in inline assembler - */ - float temp1, temp3, temp5, temp7, temp9, temp11, temp13, temp15; - - int rpom1, rpom2, rpom11, rpom21, rpom12, rpom22, rpom13, rpom23; - const int *src_end = src + len; - /* - * loop is 8 times unrolled in assembler in order to achieve better performance - */ - __asm__ volatile ( - "i32tf_lp%=: \n\t" - "lw %[rpom11], 0(%[src]) \n\t" - "lw %[rpom21], 4(%[src]) \n\t" - "lw %[rpom1], 8(%[src]) \n\t" - "lw %[rpom2], 12(%[src]) \n\t" - "mtc1 %[rpom11], %[temp1] \n\t" - "mtc1 %[rpom21], %[temp3] \n\t" - "mtc1 %[rpom1], %[temp5] \n\t" - "mtc1 %[rpom2], %[temp7] \n\t" - - "lw %[rpom13], 16(%[src]) \n\t" - "lw %[rpom23], 20(%[src]) \n\t" - "lw %[rpom12], 24(%[src]) \n\t" - "lw %[rpom22], 28(%[src]) \n\t" - "mtc1 %[rpom13], %[temp9] \n\t" - "mtc1 %[rpom23], %[temp11] \n\t" - "mtc1 %[rpom12], %[temp13] \n\t" - "mtc1 %[rpom22], %[temp15] \n\t" - - PTR_ADDIU "%[src], 32 \n\t" - "cvt.s.w %[temp1], %[temp1] \n\t" - "cvt.s.w %[temp3], %[temp3] \n\t" - "cvt.s.w %[temp5], %[temp5] \n\t" - "cvt.s.w %[temp7], %[temp7] \n\t" - - "cvt.s.w %[temp9], %[temp9] \n\t" - "cvt.s.w %[temp11], %[temp11] \n\t" - "cvt.s.w %[temp13], %[temp13] \n\t" - "cvt.s.w %[temp15], %[temp15] \n\t" - - "mul.s %[temp1], %[temp1], %[mul] \n\t" - "mul.s %[temp3], %[temp3], %[mul] \n\t" - "mul.s %[temp5], %[temp5], %[mul] \n\t" - "mul.s %[temp7], %[temp7], %[mul] \n\t" - - "mul.s %[temp9], %[temp9], %[mul] \n\t" - "mul.s %[temp11], %[temp11], %[mul] \n\t" - "mul.s %[temp13], %[temp13], %[mul] \n\t" - "mul.s %[temp15], %[temp15], %[mul] \n\t" - - "swc1 %[temp1], 0(%[dst]) \n\t" /*dst[i] = src[i] * mul; */ - "swc1 %[temp3], 4(%[dst]) \n\t" /*dst[i+1] = src[i+1] * mul;*/ - "swc1 %[temp5], 8(%[dst]) \n\t" /*dst[i+2] = src[i+2] * mul;*/ - "swc1 %[temp7], 12(%[dst]) \n\t" /*dst[i+3] = src[i+3] * mul;*/ - - "swc1 %[temp9], 16(%[dst]) \n\t" /*dst[i+4] = src[i+4] * mul;*/ - "swc1 %[temp11], 20(%[dst]) \n\t" /*dst[i+5] = src[i+5] * mul;*/ - "swc1 %[temp13], 24(%[dst]) \n\t" /*dst[i+6] = src[i+6] * mul;*/ - "swc1 %[temp15], 28(%[dst]) \n\t" /*dst[i+7] = src[i+7] * mul;*/ - PTR_ADDIU "%[dst], 32 \n\t" - "bne %[src], %[src_end], i32tf_lp%= \n\t" - : [temp1]"=&f"(temp1), [temp11]"=&f"(temp11), - [temp13]"=&f"(temp13), [temp15]"=&f"(temp15), - [temp3]"=&f"(temp3), [temp5]"=&f"(temp5), - [temp7]"=&f"(temp7), [temp9]"=&f"(temp9), - [rpom1]"=&r"(rpom1), [rpom2]"=&r"(rpom2), - [rpom11]"=&r"(rpom11), [rpom21]"=&r"(rpom21), - [rpom12]"=&r"(rpom12), [rpom22]"=&r"(rpom22), - [rpom13]"=&r"(rpom13), [rpom23]"=&r"(rpom23), - [dst]"+r"(dst), 
[src]"+r"(src) - : [mul]"f"(mul), [src_end]"r"(src_end) - : "memory" - ); -} -#endif /* HAVE_INLINE_ASM */ - -av_cold void ff_fmt_convert_init_mips(FmtConvertContext *c) -{ -#if HAVE_INLINE_ASM - c->int32_to_float_fmul_scalar = int32_to_float_fmul_scalar_mips; -#endif -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/mpegvideo_mips.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/mpegvideo_mips.h deleted file mode 100644 index 760d7b32953ece534275cc1e47d0c029db627498..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/mpegvideo_mips.h +++ /dev/null @@ -1,38 +0,0 @@ -/* - * Copyright (c) 2015 Zhou Xiaoyong - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_MIPS_MPEGVIDEO_MIPS_H -#define AVCODEC_MIPS_MPEGVIDEO_MIPS_H - -#include "libavcodec/mpegvideo.h" - -void ff_dct_unquantize_h263_intra_mmi(MpegEncContext *s, int16_t *block, - int n, int qscale); -void ff_dct_unquantize_h263_inter_mmi(MpegEncContext *s, int16_t *block, - int n, int qscale); -void ff_dct_unquantize_mpeg1_intra_mmi(MpegEncContext *s, int16_t *block, - int n, int qscale); -void ff_dct_unquantize_mpeg1_inter_mmi(MpegEncContext *s, int16_t *block, - int n, int qscale); -void ff_dct_unquantize_mpeg2_intra_mmi(MpegEncContext *s, int16_t *block, - int n, int qscale); -void ff_denoise_dct_mmi(MpegEncContext *s, int16_t *block); - -#endif /* AVCODEC_MIPS_MPEGVIDEO_MIPS_H */ diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download 7zip for Windows A Versatile and User-Friendly File Archiver with a High Compression Ratio.md b/spaces/congsaPfin/Manga-OCR/logs/Download 7zip for Windows A Versatile and User-Friendly File Archiver with a High Compression Ratio.md deleted file mode 100644 index d8f5d9d5bc55db5eb9595cefe73018aed50d4aef..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download 7zip for Windows A Versatile and User-Friendly File Archiver with a High Compression Ratio.md +++ /dev/null @@ -1,68 +0,0 @@ -
        -

        How to Download 7zip for Windows

        -

        If you work with large files and folders on your Windows PC, you might want to use a file archiver and compressor tool to save space and time. One of the best free and open-source options available is 7zip, a powerful program that can create and extract archives in various formats, such as ZIP, RAR, 7Z, TAR, GZIP, and more.

        -

        In this article, we will show you how to download, install, and use 7zip for Windows in a few easy steps. We will also introduce you to some alternatives to 7zip in case you want to try other file archiving tools.

        -




        -

        Downloading 7zip from the official website

        -

        The first step is to download the latest version of 7zip from its official website. Here's how:

        -
          -
        1. Open your web browser and go to https://www.7-zip.org/, the home page of 7zip.
        2. Under the Download section, you will see different links for different versions of Windows. Choose the one that matches your system type (32-bit or 64-bit) and click on it. You can check your system type by right-clicking on This PC or My Computer and selecting Properties.
        3. A file named 7z2201.exe (or similar) will start downloading. Save it to a location of your choice on your PC.
        -

        Installing 7zip on your Windows PC

        -

        Once you have downloaded the installer file, you can proceed to install 7zip on your PC. Here's how:

        -
          -
        1. Double-click on the installer file that you downloaded in the previous step. A window will pop up asking you to confirm the installation.
        2. Click on Install. The installation process will begin and should take only a few seconds.
        3. When the installation is complete, click on Close. You have successfully installed 7zip on your PC.
        4. To integrate 7zip with Windows Explorer, right-click on any file or folder and select 7-Zip, then Options. A window will open where you can check the boxes for the file types that you want to associate with 7zip. Click on OK.
        -

        Using 7zip to create and extract archives

        -

        Now that you have installed and integrated 7zip with Windows Explorer, you can start using it to create and extract archives. Here's how:

        -
          -
        1. To create an archive, select one or more files or folders that you want to compress. Right-click on them and select 7-Zip, then Add to archive.... A window will open where you can choose the archive format, compression level, encryption options, and other settings. Click on OK. A new archive file will be created in the same location as the original files or folders.
        2. To extract an archive, right-click on it and select 7-Zip, then Extract here or Extract to.... The files or folders inside the archive will be extracted to the same location or to a folder of your choice. (If you prefer to script these two steps, see the sketch after this list.)
        -
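        For readers who would rather script these create and extract steps than use the right-click menu, here is a minimal Python sketch that shells out to the 7z command-line tool. It assumes 7-Zip is installed at its default path, and the file names (backup.7z, Documents, notes.txt, restored) are placeholders you would replace with your own:

```python
import subprocess

# Assumed default install path on 64-bit Windows; adjust if 7-Zip lives elsewhere.
SEVEN_ZIP = r"C:\Program Files\7-Zip\7z.exe"

def create_archive(archive_path, *inputs):
    # "a" (add) packs the given files or folders into a new or existing archive.
    subprocess.run([SEVEN_ZIP, "a", archive_path, *inputs], check=True)

def extract_archive(archive_path, output_dir):
    # "x" extracts with full paths; -o<dir> sets the output folder and -y answers prompts with yes.
    subprocess.run([SEVEN_ZIP, "x", archive_path, f"-o{output_dir}", "-y"], check=True)

if __name__ == "__main__":
    create_archive("backup.7z", "Documents", "notes.txt")  # placeholder inputs
    extract_archive("backup.7z", "restored")               # placeholder output folder
```

        The same a and x commands also work directly in a Command Prompt window, so the script is only a thin wrapper around them.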

        Alternatives to 7zip for Windows

        -

        While 7zip is a great file archiver and compressor for Windows, it is not the only one. Many other programs offer comparable or additional features. Here are some of the most popular ones:

        - - - - - - - - - - - - - - - - - - - - - - - - - - -
        WinRAR
        - Features: supports RAR, ZIP, and many other formats; offers encryption, password protection, and recovery options; allows splitting and merging archives; has a user-friendly interface and a built-in file manager.
        - Price: free trial for 40 days; $29 for a lifetime license.

        Bandizip
        - Features: supports ZIP, RAR, 7Z, and many other formats; offers high-speed compression and extraction; supports Unicode and multi-volume archives; has a simple and intuitive interface.
        - Price: free for personal use; $30 for a professional license.

        WinZip
        - Features: supports ZIP, RAR, 7Z, and many other formats; offers cloud integration, file sharing, and backup options; allows encryption, password protection, and watermarking; has a modern and customizable interface.
        - Price: free trial for 21 days; $29.95 per year for a standard license.

        PeaZip
        - Features: supports ZIP, RAR, 7Z, and many other formats; offers encryption, password protection, and secure deletion options; allows splitting and joining archives; has a simple and elegant interface.
        - Price: free and open-source.
        -

        Conclusion

        -

        In this article, we have shown you how to download, install, and use 7zip for Windows, a free and open-source file archiver and compressor tool that can handle various formats. We have also introduced you to some alternatives to 7zip that you can try if you want to explore other file archiving tools.

        -

        We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.

        -

        FAQs

        -
          -
        • Q: Is 7zip safe to use?
          A: Yes, 7zip is safe to use as long as you download it from its official website or a trusted source. It does not contain any malware or spyware.
        • Q: How can I update 7zip to the latest version?
          A: You can update 7zip by downloading the latest version from its official website and installing it over the existing one. You can also check for updates from within the program by clicking on Help, then Check for updates....
        • Q: How can I uninstall 7zip from my PC?
          A: You can uninstall 7zip by going to Control Panel, then Programs and Features, then selecting 7-Zip, then clicking on Uninstall/Change. You can also use an uninstaller tool like Revo Uninstaller or IObit Uninstaller.
        • Q: How can I open a file with 7zip?
          A: You can open a file with 7zip by right-clicking on it and selecting 7-Zip, then Open archive.... You can also double-click on the file if you have associated it with 7zip.
        • Q: How can I change the language of 7zip?
          A: You can change the language of 7zip by clicking on Tools, then Options..., then selecting the Language tab. You can choose from over 80 languages available.

        -
        -
        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Kahani Suno 2.0 Slowed MP3 - The Most Relaxing and Inspiring Lofi Song by Kaifi Khalil.md b/spaces/congsaPfin/Manga-OCR/logs/Kahani Suno 2.0 Slowed MP3 - The Most Relaxing and Inspiring Lofi Song by Kaifi Khalil.md deleted file mode 100644 index 390bb1e2993a756dc9b83e49850764f2fcd4b354..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Kahani Suno 2.0 Slowed MP3 - The Most Relaxing and Inspiring Lofi Song by Kaifi Khalil.md +++ /dev/null @@ -1,148 +0,0 @@ -
        -

        Kahani Suno 2.0: The Viral Romantic Anthem by Kaifi Khalil

        -

        If you are a fan of romantic songs, you might have heard of Kahani Suno 2.0, the latest hit by Kaifi Khalil, a Pakistani singer and songwriter. This song has taken the internet by storm, with millions of views, likes, and comments on YouTube and other platforms. But what is Kahani Suno 2.0 and why is it so popular? And how can you download it in a slowed mp3 format for a more relaxing and soothing experience? In this article, we will answer these questions and more.

        -

        What is Kahani Suno 2.0?

        -

        Kahani Suno 2.0 is a Pakistani Urdu-language song sung by Kaifi Khalil. It was released on June 1, 2022, as a reworked version of his previous song Kahani Suno, which was released in January 2021. The song is a romantic ballad that tells the story of a lover who is longing for his beloved and expressing his feelings through poetry and music.

        -




        -

        The origin and meaning of the song

        -

        Kaifi Khalil has stated that he had composed this song a long time ago and some time after releasing the first version, he realized that the song needed revisiting. He wanted to make it more appealing to his Urdu listeners and to convey his emotions more effectively. He said, "Kahani Suno was an old song and I decided to rewrite it and do justice to that track. I wanted people to relate to the song as if they are listening to someone’s story.”

        -

        The title of the song means "Listen to the story" in Urdu, and the lyrics are full of metaphors, similes, and imagery that describe the love and pain of the narrator. Some of the lines are:

        -
          -
        • Deewana hua, mastana hua (I became crazy, I became ecstatic)
        • Teri chahat mein kitna fasana hua (How much of a saga happened in your love)
        • Tere aane ki khushbu, tere jaane ka manzar (The fragrance of your arrival, the sight of your departure)
        • Tujhe milna padega, ab zamana hua (You will have to meet me, now it's time)
        • Sadaiyein suno, haan jafain suno (Listen to the voices, yes listen to the injustices)
        • Mujhe pyaar hua tha (I had fallen in love)
        -

        The song also features some lines from famous Urdu poets like Mirza Ghalib and Faiz Ahmed Faiz, such as:

        -
          -
        • Dil hi to hai na sang-o-khisht (It's only a heart, not stone or brick)
        • Dard se bhar na aaye kyun (Why doesn't it fill up with pain)
        • Mujhse pehli si mohabbat mere mehboob na maang (Don't ask me for the love I had before, my beloved)
        -

        The popularity and impact of the song

        -

        Kahani Suno 2.0 has become a viral sensation across Pakistan and worldwide. It has received widespread praise and acclaim from critics and fans alike, and it has been featured on various music charts and trending lists.

        -
        The features and benefits of the song

          -

        Kahani Suno 2.0 is not just a song; it is a musical masterpiece that offers many features and benefits to its listeners. Some of them are:

          -
            -
        • It is a versatile song that can suit different moods and occasions. Whether you want to relax, romance, or reminisce, this song can provide the perfect soundtrack for your feelings.
        • It is a bilingual song that combines Urdu and English lyrics, making it accessible and appealing to a wider audience. You can enjoy the melody and the message of the song regardless of your language preference.
        • It is a high-quality song that showcases the talent and skill of Kaifi Khalil as a singer, songwriter, and composer. His voice is smooth and expressive, his lyrics are poetic and meaningful, and his music is rich and harmonious.
        • It is a timeless song that transcends trends and fads. It has a classic and universal appeal that can resonate with people of different ages, cultures, and backgrounds. It is a song that you can listen to over and over again without getting bored.
          -

          How to download Kahani Suno 2.0 slowed mp3?

          -

          If you love Kahani Suno 2.0 and want to enjoy it in a different way, you might want to try downloading it in a slowed mp3 format. Slowed mp3 is a type of audio file that has been modified to reduce the speed and pitch of the original song, creating a more relaxed and soothing effect. In this section, we will explain what slowed mp3 is, why you should try it, and how you can download it for free from various sources.

          -

          What is slowed mp3 and why you should try it?

          -

          Slowed mp3 is a term that refers to an audio file that has been altered using software or online tools to decrease its tempo and pitch. This means that the song will sound slower and lower than the original version. For example, if the original song has a tempo of 120 beats per minute (bpm) and a pitch of A4 (440 Hz), the slowed mp3 version might have a tempo of 60 bpm and a pitch of A3 (220 Hz).
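        As a rough sketch of that arithmetic (not something the article itself provides): playing a file at a constant speed factor scales the tempo and every frequency by that same factor, and the resulting pitch shift in semitones is 12 * log2(factor).

```python
import math

def slowed_stats(tempo_bpm, pitch_hz, speed_factor):
    # Playing the whole file at speed_factor (< 1.0) scales tempo and pitch together.
    new_tempo = tempo_bpm * speed_factor
    new_pitch = pitch_hz * speed_factor
    semitone_shift = 12 * math.log2(speed_factor)  # negative means a lower pitch
    return new_tempo, new_pitch, semitone_shift

print(slowed_stats(120, 440.0, 0.5))   # (60.0, 220.0, -12.0): the 120 bpm / A4 to 60 bpm / A3 example
print(slowed_stats(120, 440.0, 0.85))  # 0.85 is only an illustrative, gentler setting
```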

          -

          Slowed mp3 is not a new phenomenon, but it has gained popularity in recent years thanks to the internet and social media platforms such as YouTube, TikTok, and SoundCloud. Many users upload slowed mp3 versions of their favorite songs or create playlists of slowed mp3 tracks for others to enjoy.

          -


          -

          But why would you want to listen to slowed mp3 songs? Here are some possible reasons:

          -
            -
        • Slowed mp3 songs can help you relax and unwind. By slowing down the tempo and pitch of the song, you can create a more calming and soothing atmosphere. Slowed mp3 songs can also help you fall asleep faster and sleep better by reducing stress and anxiety.
        • Slowed mp3 songs can enhance your appreciation and enjoyment of the music. By slowing down the tempo and pitch of the song, you can hear more details and nuances in the melody, harmony, rhythm, and lyrics. You can also discover new meanings and emotions in the song that you might have missed before.
        • Slowed mp3 songs can inspire your creativity and imagination. By slowing down the tempo and pitch of the song, you can create a new musical experience that is different from the original. You can also experiment with different effects and filters to customize your slowed mp3 songs according to your preferences.
          -

          The best sources to download slowed mp3 for free

          -

          If you want to download slowed mp3 songs for free, you have several options to choose from. Here are some of the best sources that we recommend:

          -

          Pixabay

          -

          Pixabay is a website that offers free royalty-free images, videos, audio tracks, sound effects, GIFs, and more. You can find thousands of free sound effects for download on Pixabay, including slowed mp3 versions of popular songs. You can search by keywords or browse by categories such as music genres, moods, instruments, etc. You can also filter by duration, file type, license type, etc. To download a slowed mp3 file from Pixabay, simply click on the download button next to the file name and choose your desired quality (low, medium, or high). You can also preview the file before downloading it by clicking on the play button.

          -

          Mixkit

          -

          Mixkit is another website that offers free royalty-free music tracks, sound effects, video clips, stock photos, etc. You can find hundreds of free music tracks for download on Mixkit, including slowed mp3 versions of various genres and moods. You can search by keywords or browse by categories such as ambient, chill, cinematic, etc. You can also filter by duration, tempo, mood, etc. To download a slowed mp3 file from Mixkit, simply click on the download button next to the file name and choose your desired quality (low, medium, or high). You can also preview the file before downloading it by clicking on the play button.

          -

          YouTube

          -

          YouTube is the most popular and widely used website for watching and sharing videos online. You can find millions of videos on YouTube, including slowed mp3 versions of your favorite songs. You can search by keywords or browse by channels, playlists, categories, etc. You can also filter by upload date, duration, features, etc. To download a slowed mp3 file from YouTube, you will need to use a third-party tool or website that can convert YouTube videos to mp3 files. There are many such tools and websites available online, but some of the best ones are:

          -
            -
        • YTMP3: This is a simple and fast website that can convert any YouTube video to mp3 or mp4 format. You just need to paste the URL of the YouTube video you want to convert and click on the convert button. You can then download the converted file to your device or save it to your Dropbox account.
        • 4K Video Downloader: This is a powerful and versatile software that can download any YouTube video or playlist in various formats and qualities. You can also extract audio from YouTube videos and save them as mp3 files. You just need to copy the URL of the YouTube video you want to download and paste it in the software. You can then choose your desired format, quality, and location for the downloaded file.
        • MP3FY: This is an online tool that can convert any YouTube video to mp3 format with high quality and speed. You just need to enter the URL of the YouTube video you want to convert and click on the convert button. You can then download the converted file to your device or share it with others.
          -

          The steps to download slowed mp3 from each source

          -

          To help you download slowed mp3 files from each source, we have provided a step-by-step guide below:

          -

          Pixabay

          -
            -
        1. Go to Pixabay and click on the Music tab.
        2. Type "slowed" in the search box and press enter.
        3. Browse through the results and find the slowed mp3 file you want to download.
        4. Click on the download button next to the file name and choose your desired quality (low, medium, or high).
        5. Wait for the download to complete and enjoy your slowed mp3 file.
          -

          Mixkit

          -
            -
        1. Go to Mixkit and click on the Music tab.
        2. Type "slowed" in the search box and press enter.
        3. Browse through the results and find the slowed mp3 file you want to download.
        4. Click on the download button next to the file name and choose your desired quality (low, medium, or high).
        5. Wait for the download to complete and enjoy your slowed mp3 file.
          -

          YouTube

          -
            -
        1. Go to YouTube and type "slowed" in the search box and press enter.
        2. Browse through the results and find the slowed mp3 video you want to download.
        3. Copy the URL of the video from the address bar.
        4. Go to one of the third-party tools or websites mentioned above (YTMP3, 4K Video Downloader, or MP3FY) and paste the URL in their respective boxes.
        5. Click on the convert or download button and choose your desired format (mp3) and quality (low, medium, or high).
        6. Wait for the conversion or download to complete and enjoy your slowed mp3 file. (A scripted alternative is sketched just after this list.)
          -
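        The tools above do the conversion for you in the browser or in a desktop app. If you would rather script the YouTube-to-mp3 step yourself, one open-source option (not mentioned in the article) is the yt-dlp Python package. A minimal sketch, assuming yt-dlp and FFmpeg are installed and VIDEO_ID is a placeholder for a real video ID:

```python
import yt_dlp  # pip install yt-dlp; FFmpeg must be on the PATH for the mp3 conversion step

def youtube_to_mp3(url, out_dir="."):
    options = {
        "format": "bestaudio/best",                 # fetch the best available audio stream
        "outtmpl": f"{out_dir}/%(title)s.%(ext)s",  # name the file after the video title
        "postprocessors": [{
            "key": "FFmpegExtractAudio",            # convert the downloaded stream to audio only
            "preferredcodec": "mp3",
            "preferredquality": "192",
        }],
    }
    with yt_dlp.YoutubeDL(options) as ydl:
        ydl.download([url])

youtube_to_mp3("https://www.youtube.com/watch?v=VIDEO_ID")  # VIDEO_ID is a placeholder
```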

          Conclusion

          -

          Kahani Suno 2.0 is a beautiful and captivating song that has captured the hearts of millions of people around the world. It is a song that you can listen to anytime and anywhere, but especially when you want to feel romantic, nostalgic, or relaxed. If you want to enhance your listening experience, you can try downloading it in a slowed mp3 format that will make it sound more soothing and relaxing. You can easily download it for free from various sources such as Pixabay, Mixkit, or YouTube, using the steps we have provided in this article. We hope you have enjoyed this article and learned something new. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy listening!

          -

          FAQs

          -

          Here are some of the frequently asked questions about Kahani Suno 2.0 and slowed mp3:

          -
            -
          1. Who is Kaifi Khalil and what are his other songs?

            Kaifi Khalil is a Pakistani singer, songwriter, and composer who started his musical career in 2018. He is known for his romantic and soulful songs that blend Urdu and English lyrics. Some of his other songs are: Dil Ki Baat, Tum Ho, Tere Bina, and Zindagi.

            -
          2. What is the difference between Kahani Suno and Kahani Suno 2.0?

            Kahani Suno is the original version of the song that was released in January 2021. It has a faster tempo and a higher pitch than Kahani Suno 2.0. Kahani Suno 2.0 is the reworked version of the song that was released in June 2022. It has a slower tempo and a lower pitch than Kahani Suno. It also has some changes in the lyrics and the music.

            -
          3. What are the benefits of listening to slowed mp3 songs?

            Slowed mp3 songs can help you relax, enjoy the music more, and feel inspired. They can create a more calming and soothing effect by reducing the speed and pitch of the original song. They can also enhance your appreciation and enjoyment of the music by revealing more details and nuances in the melody, harmony, rhythm, and lyrics. They can also inspire your creativity and imagination by creating a new musical experience that is different from the original.

            -
          4. What are the drawbacks of listening to slowed mp3 songs?

            Slowed mp3 songs can also have some drawbacks depending on your personal preference and taste. They can distort or degrade the quality of the original song by altering its tempo and pitch. They can also lose some of the energy and excitement of the original song by making it sound more dull and boring. They can also violate the artistic integrity and intention of the original song by changing its meaning and emotion.

            -
          5. How can I make my own slowed mp3 songs?

            If you want to make your own slowed mp3 songs, you will need software or an online tool that can modify the tempo and pitch of an audio file. Some tools you can use are Audacity, VLC Media Player, MP3 Speed Changer, and Online Audio Converter. Import your audio file into the tool, adjust the tempo and pitch settings to your preference, and save or export the result as a new mp3 file. (A small scripted example follows after this list.)

            -
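            To make that last answer concrete, here is a minimal Python sketch of the slowing step using the pydub library (one possible tool; the article does not name it). It assumes pydub and FFmpeg are installed, and the file names are placeholders:

```python
from pydub import AudioSegment  # pip install pydub; FFmpeg is needed for mp3 input/output

def slow_down(in_path, out_path, speed=0.85):
    song = AudioSegment.from_file(in_path)
    # Re-tag the raw samples with a lower frame rate so playback is slower and deeper...
    slowed = song._spawn(song.raw_data, overrides={"frame_rate": int(song.frame_rate * speed)})
    # ...then label it back to a standard frame rate so any player reads the file normally.
    slowed = slowed.set_frame_rate(song.frame_rate)
    slowed.export(out_path, format="mp3")

slow_down("original_song.mp3", "original_song_slowed.mp3", speed=0.85)  # placeholder file names
```

            Dedicated editors such as Audacity give you finer control (for example, changing tempo without changing pitch), but this is enough for the classic slowed-and-deepened effect.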

          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Play Together VNG A Virtual Universe Where You Can Make Friends and Have Fun.md b/spaces/congsaPfin/Manga-OCR/logs/Play Together VNG A Virtual Universe Where You Can Make Friends and Have Fun.md deleted file mode 100644 index aa0cb1840563c81e8500c9e07a6fb64c77652510..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Play Together VNG A Virtual Universe Where You Can Make Friends and Have Fun.md +++ /dev/null @@ -1,92 +0,0 @@ - -

          Play Together VNG: A Virtual Universe Where Friends Can Gather

          -

          Do you want to meet new friends, have fun, and explore a virtual world? If yes, then you should try Play Together VNG, a game that lets you do all that and more. Play Together VNG is a virtual universe where friends from all over the world can gather and enjoy various activities and games. You can create your own avatar, decorate your home, travel to different places, and party with your friends. In this article, we will tell you more about what Play Together VNG is, how to download it for free, and why you should play it.

          -




          -

          What is Play Together VNG?

          -

          Play Together VNG is a game developed by VNG Corporation, a leading game company in Vietnam. It is a casual game that allows you to interact with other players in a virtual world. You can chat, play, and socialize with people from different countries and cultures. You can also customize your character, your home, and your vehicles. Here are some of the features that make Play Together VNG an exciting game:

          -

          A virtual playground with various activities and games

          -

          Play Together VNG has a lot of fun things to do in its virtual playground. You can meet new friends at the Plaza, go shopping, or play a variety of short games at the Game Center. You can also play hide-and-seek with zombies at the Haunted House at night and try to conquer the top of the Infinity Tower at the Campground. The people at the Plaza will have special quests for you. Complete these missions and get rewards. At Play Together VNG, every day is a fun experience.

          -

          A special adventure with travel and exploration

          -

          Play Together VNG also lets you travel to different places and explore new things. You can visit a Travel Agent and book a trip to various countries. You can meet new friends from different regions and bookmark the places you've visited. You can also sail to the Lost Island and search for hidden treasures. You never know what you will find in Play Together VNG.

          -


          -

          A party at your place with your own style and theme

          -

          Play Together VNG gives you the opportunity to show off your creativity and style. You can decorate your home using furniture with various themes. There are many colorful themes to choose from like Egyptian, Toy Block, Botany, and more. You can also invite your friends over for a House Party. The theme of the party can be anything you like. Dance parties, pool parties, cooking classes, brunch venues, and more. The only limit is your imagination.

          -

          A unique identity with outfits and accessories

          -

          Play Together VNG also allows you to express yourself with outfits and accessories. You can choose from a wide range of clothes, hats, glasses, shoes, and more. You can also accessorize with pets, skateboards, sports cars, or off-road vehicles. You can go on a beach cruise with your adorable pet and friends. You can also change your hairstyle, skin tone, eye color, and facial features. Only you can define who you are in Play Together VNG.

          -

          How to download Play Together VNG for free?

          -

          Play Together VNG is available for both Android and iOS devices. You can download it for free from various sources. Here are some of the ways you can get Play Together VNG on your device:

          -

          Download from Google Play Store or App Store

          -

          The easiest way to download Play Together VNG is from the Google Play Store or the App Store. Just search for Play Together VNG and tap on the install button. The game will automatically download and install on your device. You can also use the links below to go directly to the download page:
          - [Play Together VNG on Google Play Store]
          - [Play Together VNG on App Store]

          Download from VNGGames website or APK file

          -

          Another way to download Play Together VNG is from the official website of VNGGames, the developer of the game. You can visit their website and click on the download button. You can also scan the QR code on the website to get the link. Alternatively, you can download the APK file of Play Together VNG from a third-party source and install it manually on your device. However, you need to enable the installation of apps from unknown sources in your device settings. You can use the links below to access the website or the APK file:
          - [Play Together VNG on VNGGames website]
          - [Play Together VNG APK file]

          Download from BlueStacks emulator on PC

          -

          If you want to play Play Together VNG on your PC, you can use an emulator like BlueStacks. BlueStacks is software that allows you to run Android apps on your PC. Download BlueStacks from its website and install it on your PC. Then search for Play Together VNG in the BlueStacks app store and install it. You can also use your Google account to sync your progress and data across devices. Use the link below to download BlueStacks:
          - [BlueStacks emulator]

          Why should you play Play Together VNG?

          -

          Play Together VNG is more than just a game. It is a social platform where you can meet friends, have fun, and express yourself. Here are some of the reasons why you should play Play Together VNG:

          -

          Meet friends from all over the world and have fun together

          -

          Play Together VNG is a global game that connects players from different countries and cultures. You can chat with other players using text or voice messages. You can also join clubs and make new friends with similar interests. You can play games together, go on trips together, or party together. You can also send gifts, stickers, and emojis to show your appreciation and friendship.

          -

          Create memorable moments in a virtual universe

          -

          Play Together VNG is a virtual universe where you can create your own stories and memories. You can capture screenshots or videos of your moments and share them with your friends or on social media. You can also participate in events and festivals that celebrate different occasions and cultures. You can also earn badges and achievements that showcase your accomplishments and skills.

          -

          Express yourself and your creativity

          -

          Play Together VNG is a game that lets you express yourself and your creativity. You can customize your character, your home, and your vehicles with various themes and styles. You can also create your own content using the DIY feature. You can design your own clothes, furniture, stickers, and more. You can also sell your creations in the market or gift them to your friends.

          -

          Conclusion

          -

          Play Together VNG is a game that offers you a virtual universe where friends can gather and have fun. You can enjoy various activities and games, travel to different places, party at your place, and express yourself and your creativity. You can also meet new friends from all over the world and create memorable moments in a virtual world. Play Together VNG is a game that you should not miss.

          -

          FAQs

          -

          Here are some of the frequently asked questions about Play Together VNG:

          -

          Q: Is Play Together VNG free to play?

          -

          A: Yes, Play Together VNG is free to play. However, there are some optional in-app purchases that you can make to enhance your experience.

          -

          Q: How do I update Play Together VNG?

          -

          A: To update Play Together VNG, you need to go to the Google Play Store or the App Store and check for updates. Alternatively, you can visit the official website of VNGGames or download the latest APK file.

          -

          Q: How do I contact customer service for Play Together VNG?

          -

          A: To contact customer service for Play Together VNG, you need to go to the settings menu in the game and tap on the customer service button. You can also email them at support@vnggames.com or visit their Facebook page.

          -

          Q: How do I delete my account for Play Together VNG?

          -

          A: To delete your account for Play Together VNG, you need to go to the settings menu in the game and tap on the account management button. You can then choose to delete your account permanently. However, please note that once you delete your account, you will lose all your data and progress in the game.

          -

          Q: How do I play Play Together VNG with my friends?

          -

          A: To play Play Together VNG with your friends, you need to add them as your friends in the game. You can do this by searching for their nickname or ID, or by scanning their QR code. You can also join the same club as them or invite them to your house party. You can then chat with them, play games with them, or travel with them in the game.

          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Solar Smash A Free and Amazing Space Destruction Game APK.md b/spaces/congsaPfin/Manga-OCR/logs/Solar Smash A Free and Amazing Space Destruction Game APK.md deleted file mode 100644 index 4d05f7d9f144374b60dc9307e8fb046c01f0debb..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Solar Smash A Free and Amazing Space Destruction Game APK.md +++ /dev/null @@ -1,112 +0,0 @@ -
          -

          Solar Smash Gratis APK: A Planet Destruction Simulator for Android

          -

          Have you ever wondered what it would be like to destroy a planet with a variety of weapons, such as nuclear missiles, lasers, asteroids, and even aliens? If so, then you might want to try Solar Smash, a planet destruction simulator developed by Paradyme Games. This game allows you to unleash your destructive fantasies on any planet you choose, from Earth to Jupiter, and even some fictional ones. You can also customize your weapons and planets, and watch the realistic physics and graphics as they crumble and explode.

          -

          Solar Smash is a fun and relaxing game that lets you experiment with different scenarios and outcomes. You can play it casually or challenge yourself by completing achievements and unlocking secret planets. You can also play it offline or online, and share your creations with other players. If you are looking for a game that is simple but satisfying, then Solar Smash might be the perfect choice for you.

          -




          -

          However, Solar Smash is not available for free on the Google Play Store. You have to pay a small fee to download and install it on your Android device. But don't worry, there is a way to get it for free without any hassle. In this article, we will show you how to download and install Solar Smash gratis APK, which is a modified version of the original game that lets you play it for free. We will also tell you about the features, tips and tricks, alternatives, and reviews of Solar Smash gratis APK. So, let's get started!

          -

          How to Download and Install Solar Smash Gratis APK

          -

          If you want to play Solar Smash for free on your Android device, you need to download and install Solar Smash gratis APK. This is a file that contains the modified version of the game that bypasses the payment requirement. Here are the steps on how to do it:

          -
            -
          1. Go to [this link] and click on the green "Download APK" button. This will start downloading the Solar Smash gratis APK file on your device.
          2. Once the download is complete, locate the file in your device's file manager and tap on it. This will prompt you to enable unknown sources in your device's settings. This is necessary because you are installing an app from outside the Google Play Store.
          3. After enabling unknown sources, go back to the file manager and tap on the Solar Smash gratis APK file again. This will start installing the app on your device.
          4. Wait for the installation to finish, then open the app. You can now enjoy playing Solar Smash for free! (If you would rather push the APK from a computer, see the sketch after this list.)
          -
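          As an aside (not one of the article's steps), if the APK ends up on a computer instead of the phone, it can also be sideloaded with adb, the Android Debug Bridge. A minimal sketch, assuming adb is installed, USB debugging is enabled on the device, and the APK file name is a placeholder:

```python
import subprocess

# "-r" reinstalls the app while keeping its data if an older version is already present.
subprocess.run(["adb", "install", "-r", "solar_smash_gratis.apk"], check=True)
```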

          Note: You might need to uninstall the original version of Solar Smash if you have it on your device before installing the gratis APK version.

          -


          -

          Features of Solar Smash Gratis APK

          -

          Solar Smash gratis APK has all the features of the original game, plus some extra ones that make it more enjoyable. Here are some of them:

          -
            -
          • You can use a variety of different weapons to destroy any planet you want. These include nuclear missiles, lasers, asteroids, black holes, aliens, UFOs, celestial beings, wormholes, planet killers, sun killers, and more.
          • You can customize your weapons by changing their size, speed, color, angle, trajectory, impact effect, explosion effect, sound effect, etc.
          • You can also customize your planets by changing their size, rotation speed, gravity strength, atmosphere color, surface color, cloud color, etc.

            Tips and Tricks to Have Your Best Destruction

            -

            Solar Smash gratis APK is a game that lets you have fun destroying planets, but it also has some challenges and secrets to discover. Here are some tips and tricks to help you have your best destruction:

            -
              -
            • Try to hit the weak spots of the planets, such as volcanoes, fault lines, poles, etc. This will cause more damage and create more realistic effects.
            • Use different combinations of weapons to create different scenarios and outcomes. For example, you can use a black hole to suck in a planet, then use a sun killer to destroy the black hole. Or you can use a wormhole to teleport a planet to another location, then use a planet killer to smash it.
            • Complete the achievements to unlock secret planets and weapons. Some of them are hidden and require you to do specific actions or use specific weapons. For example, to unlock the Death Star weapon, you need to destroy Earth with a laser.
            • Share your creations with other players online. You can upload your screenshots and videos to social media platforms, such as Facebook, Twitter, Instagram, etc. You can also watch other players' creations and get inspired by them.
            -

            Alternatives to Solar Smash Gratis APK

            -

            If you like Solar Smash gratis APK, you might also like some other games that are similar to it in terms of gameplay and theme. Here are some of them:

            -
              -
            • Solar Smash 2: This is the sequel to Solar Smash, which adds more features and improvements. You can now destroy multiple planets at once, use more weapons and effects, customize your solar system, and play in multiplayer mode with your friends.
            • Planet Bomber: This is a game that lets you bomb different planets with various types of bombs. You can upgrade your bombs and unlock new ones as you progress. You can also see the statistics and information of each planet you bomb.
            • WorldBox: This is a game that lets you create and destroy your own world with various tools and elements. You can spawn life forms, such as humans, animals, monsters, etc., and watch them interact with each other. You can also use natural disasters, such as earthquakes, volcanoes, meteorites, etc., to destroy your world.
            -

            Reviews of Solar Smash Gratis APK

            -

            Solar Smash gratis APK has received many positive and negative reviews from users who have played it. Here are some of them:

            | Positive Reviews | Negative Reviews |
            | --- | --- |
            | "This game is awesome! I love how realistic and satisfying it is to destroy planets with different weapons. The graphics and physics are amazing, and the customization options are endless. I highly recommend this game to anyone who likes destruction games." | "This game is boring! It has no purpose or goal, just mindless destruction. The graphics and physics are glitchy and laggy, and the customization options are limited. I don't recommend this game to anyone who likes challenging games." |
            | "This game is fun! I enjoy experimenting with different scenarios and outcomes. The weapons and planets are varied and interesting, and the achievements and secrets are challenging and rewarding. I like this game a lot." | "This game is annoying! It has too many ads and pop-ups that interrupt the gameplay. The weapons and planets are repetitive and boring, and the achievements and secrets are too hard and frustrating. I hate this game." |
            | "This game is cool! I like how creative and original it is. The weapons and planets are unique and cool, and the effects are stunning and impressive. I think this game is one of a kind." | "This game is stupid! It has no logic or sense. The weapons and planets are unrealistic and silly, and the effects are cheesy and lame. I think this game is a waste of time." |
            -

            Conclusion

            Solar Smash gratis APK is a modified version of Solar Smash, a planet destruction simulator for Android that you can play for free. It lets you destroy planets with a variety of weapons, customize your weapons and planets, complete achievements and unlock secret planets, and share your creations with other players online. You can also try some other games that are similar to Solar Smash in terms of gameplay and theme. Solar Smash gratis APK has received many positive and negative reviews from users who have played it, so you can decide for yourself if you like it or not.

            -

            If you are interested in playing Solar Smash gratis APK, you can follow the steps we have provided above to download and install it on your Android device. It is easy and safe to do, and you will be able to enjoy the game for free. However, if you want to support the developers and get the latest updates and features, you can also buy the original version of Solar Smash on the Google Play Store.

            -

            So, what are you waiting for? Download Solar Smash gratis APK now and have fun destroying planets with different weapons. You might be surprised by how addictive and satisfying it is. But remember, don't try this at home!

            -

            FAQs

            -

            Here are some frequently asked questions about Solar Smash gratis APK:

            -
              -
            1. What is Solar Smash gratis APK?

              Solar Smash gratis APK is a modified version of Solar Smash, a planet destruction simulator for Android. It lets you play the game for free without paying anything.

              -
            2. Is Solar Smash gratis APK safe to use?

              Yes, Solar Smash gratis APK is safe to use, as long as you download it from a trusted source. We have provided a link to a reliable website where you can download it without any risk.

              -
            3. What are the differences between Solar Smash gratis APK and the original game?

              Solar Smash gratis APK has all the features of the original game, plus some extra ones that make it more enjoyable. For example, it has more weapons and planets, more customization options, more achievements and secrets, etc.

              -
            4. How can I update Solar Smash gratis APK?

              You can update Solar Smash gratis APK by downloading and installing the latest version of the file from the same website where you got it. You might need to uninstall the previous version before installing the new one.

              -
            5. Can I play Solar Smash gratis APK offline?

              Yes, you can play Solar Smash gratis APK offline, as it does not require an internet connection to run. However, you might need an internet connection to download and install it, and to share your creations online.

              -

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Downloading Video Game Music for Free.md b/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Downloading Video Game Music for Free.md deleted file mode 100644 index f0d30efbf29ac1a51ab2bc70d0cdf67b18da004b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Downloading Video Game Music for Free.md +++ /dev/null @@ -1,85 +0,0 @@ -
            -

            How to Download Video Game Music

            -

            Video game music is the soundtrack or background music that accompanies video games. It can range from simple melodies to complex orchestral scores, from retro chiptunes to modern electronic beats, from original compositions to licensed songs. Video game music is an integral part of the gaming experience, as it sets the mood, enhances the atmosphere, and creates emotional connections with the characters and stories.

            -

            how can i download video game music


            Download Ziphttps://urlca.com/2uObDJ



            -

            Many people enjoy listening to video game music outside of playing games, as it can evoke nostalgia, inspire creativity, or simply provide entertainment. Downloading video game music can allow you to access your favorite tracks anytime and anywhere, without needing an internet connection or a gaming device. However, downloading video game music can also pose some legal issues and risks, as it may infringe on the rights of the composers, publishers, or platforms that own or distribute the music. Therefore, you need to be careful and respectful when downloading video game music, and follow some guidelines and tips to avoid any problems.

            -

            In this article, we will show you how to download video game music from various sources, how to play it on different devices, and what are some of the best sites to find and enjoy video game music. Let's get started!

            -

            How to Download Video Game Music from Various Sources

            -

            There are many ways to download video game music, depending on the type, quality, and price of the music you want. Here are some of the most common sources and methods:

            -

            From Official Websites or Platforms

            -

            The safest and most legal way to download video game music is to get it from the official websites or platforms of the games, composers, or publishers. Many games offer their soundtracks as digital downloads or physical CDs, either for free or for a reasonable price. Some examples are [The Witcher 3: Wild Hunt](^1^), [Hollow Knight](^2^), and [Celeste](^3^). You can also find official soundtracks on platforms like Steam, GOG.com, Bandcamp, iTunes, Spotify, or Amazon Music.

            -

            Downloading video game music from official sources ensures that you get high-quality and authorized files, and that you support the creators and developers of the games and music. However, not all games have their soundtracks available for download, and some may be region-locked or out of stock.

            -

            From YouTube or Other Video-Sharing Sites

            -

            Another popular way to download video game music is to get it from YouTube or other video-sharing sites. You can find almost any video game music on YouTube, uploaded by fans, channels, or platforms. Some examples are [Game Soundtracks](^4^), [Video Game Music](^5^), and [Nintendo](^6^). You can also use YouTube playlists to find collections of video game music by genre, theme, console, or series.

            -

            How to download free video game soundtracks
            -Best sites to download royalty-free music for games
            -Video game music download in MP3 or original format
            -Where to find fan-made remixes of video game music
            -How to use Felgo to play free game music in your app
            -Download video game music from Commodore 64, NES, and SNES
            -How to subscribe to RSS feed for video game music remixes
            -Video game music download from Bandcamp and NetLabels
            -How to create your own video game music with online tools
            -Download video game music from OverClocked ReMix community
            -How to convert video game music to different genres
            -Video game music download from Zophar's Music Domain and SNESmusic
            -How to stream video game music online without downloading
            -Video game music download from Khinsider and Zophar.net
            -How to edit and mix video game music with Audacity
            -Download video game music from Project2612 and SNESAmp
            -How to find video game music by composer, genre, or platform
            -Video game music download from MusOpen and OurMusicBox
            -How to add video game music to your YouTube videos
            -Download video game music from Purple Planet and Mark Sparling
            -How to download video game music from Spotify and Apple Music
            -Video game music download from Steven O'Brien and Beardmont
            -How to make video game music loops and sound effects
            -Video game music download from Amaradillo.cc and PartnersInRhyme
            -How to download video game music from SoundCloud and YouTube

            -

            To download video game music from YouTube, you need to use a video downloader tool such as [Free HD Video Converter Factory](^7^) or [VideoHunter](^8^). These tools allow you to copy and paste the URL of the video game music you want, choose the format and quality you prefer, and save it to your computer. You can also use these tools to convert the downloaded files to other formats if needed.

            -

            Downloading video game music from YouTube is convenient and free, but it also has some drawbacks. The quality and accuracy of the files may vary depending on the source and the tool you use. The files may also contain ads, watermarks, or other unwanted elements. Moreover, downloading video game music from YouTube may violate the terms of service of YouTube or the rights of the owners of the music. Therefore, you should only download video game music from YouTube for personal use and not distribute or monetize it.
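
            If you are comfortable with a little scripting, you can also automate this step. The short sketch below is only one possible approach and is not one of the tools mentioned above: it assumes the open-source yt-dlp Python package and FFmpeg are installed, and it saves the audio track of a video as an MP3 file. The same personal-use caveat applies.

```python
# Minimal sketch: download only the audio of a video and convert it to MP3.
# Assumes yt-dlp (pip install yt-dlp) and FFmpeg are installed.
import yt_dlp

def download_audio(url: str, out_dir: str = "soundtracks") -> None:
    options = {
        "format": "bestaudio/best",                 # pick the best audio-only stream
        "outtmpl": f"{out_dir}/%(title)s.%(ext)s",  # name the file after the video title
        "postprocessors": [{
            "key": "FFmpegExtractAudio",            # convert the result with FFmpeg
            "preferredcodec": "mp3",
            "preferredquality": "192",              # 192 kbps MP3
        }],
    }
    with yt_dlp.YoutubeDL(options) as ydl:
        ydl.download([url])

# Hypothetical URL; replace it with the video game music video you want.
download_audio("https://www.youtube.com/watch?v=XXXXXXXXXXX")
```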

            From Free or Paid Music Download Sites

            -

            A third way to download video game music is to get it from free or paid music download sites. These are websites that offer a large collection of video game music files for download, either for free or for a subscription fee. Some examples are [KHInsider], [Zophar's Domain], and [Game Music Online]. You can browse these sites by game title, genre, platform, composer, or other criteria.

            -

            To download video game music from these sites, you usually need to register an account and follow the instructions on the site. Some sites may require you to use a download manager or a torrent client to get the files. Some sites may also offer additional features such as reviews, ratings, forums, or podcasts.

            -

            Downloading video game music from these sites can give you access to a wide variety of video game music files, some of which may be rare or exclusive. However, the quality and legality of the files may also vary depending on the site and the source. Some sites may have low-quality or corrupted files, or files that are mislabeled or incomplete. Some sites may also host pirated or unauthorized files, which could expose you to legal risks or malware. Therefore, you should only download video game music from reputable and trustworthy sites, and scan the files for viruses before opening them.

            -

            How to Play Video Game Music on Different Devices

            -

            Once you have downloaded video game music files, you may want to play them on different devices, such as your PC, your mobile phone, your tablet, or your gaming console. However, not all devices can play all types of video game music files, as they may have different formats, codecs, or compatibility issues. Here are some tips on how to play video game music on different devices:

            -

            On PC or Mac

            -

            Playing video game music on your PC or Mac is usually easy and straightforward, as most computers can support most common audio formats such as MP3, WAV, OGG, FLAC, or M4A. You can use any media player software such as Windows Media Player, VLC Media Player, iTunes, or Winamp to play video game music files on your computer. You can also use some specialized software such as [Foobar2000] or [Audacious] to play more obscure or exotic formats such as NSF, SPC, PSF, GBS, VGM, or MOD.

            -

            To play video game music on your PC or Mac, you just need to locate the files on your hard drive and open them with your preferred media player software. You can also create playlists, edit tags, adjust settings, or apply effects to enhance your listening experience.
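
            If you prefer to script playback instead of opening a media player, the minimal sketch below is one way to do it. It assumes the pygame package is installed, which is an assumption on our part since the article only mentions media player software, and it plays a common-format file such as MP3, OGG, or WAV.

```python
# Minimal sketch: play a downloaded soundtrack file from a Python script.
# Assumes pygame is installed (pip install pygame).
import time
import pygame

pygame.mixer.init()                                  # start the audio subsystem
pygame.mixer.music.load("hollow_knight_theme.ogg")   # hypothetical file name
pygame.mixer.music.play()

while pygame.mixer.music.get_busy():                 # wait until playback finishes
    time.sleep(0.5)
```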

            -

            On Mobile Phones or Tablets

            -

            Playing video game music on your mobile phone or tablet is also possible and convenient, as most smartphones and tablets can support most common audio formats such as MP3, WAV, OGG, FLAC, or M4A. You can use any music player app such as Google Play Music, Apple Music, Spotify, or Poweramp to play video game music files on your mobile device. You can also use some specialized apps such as [Droidsound-E] or [Modizer] to play more obscure or exotic formats such as NSF, SPC, PSF, GBS, VGM or MOD on your mobile device.

            -

            To play video game music on your mobile phone or tablet, you need to transfer the files from your computer to your device via USB, Bluetooth, Wi-Fi, or cloud storage. You can also download the files directly from the internet using your device's browser or a download manager app. Then, you need to open the files with your chosen music player app and enjoy the video game music on the go.

            -

            On Gaming Consoles or Handhelds

            -

            Playing video game music on your gaming console or handheld is a bit more tricky and limited, as most consoles and handhelds can only support specific audio formats such as MP3, AAC, or WMA. You can use the built-in media player software of your console or handheld such as Xbox Music, PlayStation Music, Nintendo 3DS Sound, or PSP Media Go to play video game music files on your device. You can also use some homebrew software such as [MoonShell] or [Lameboy] to play more obscure or exotic formats such as NSF, SPC, PSF, GBS, VGM, or MOD on your device.

            -

            To play video game music on your gaming console or handheld, you need to transfer the files from your computer to your device via USB, SD card, or Wi-Fi. You can also download the files directly from the internet using your device's browser or a homebrew app. Then, you need to open the files with your selected media player software and enjoy the video game music on your big screen or small screen.

            -

            Conclusion

            -

            Video game music is a wonderful and diverse form of art that can enrich your gaming experience and your life. Downloading video game music can allow you to listen to your favorite tracks anytime and anywhere, without any limitations or interruptions. However, downloading video game music can also involve some legal issues and risks, so you need to be careful and respectful when doing so.

            -

            In this article, we have shown you how to download video game music from various sources, how to play it on different devices, and what are some of the best sites to find and enjoy video game music. We hope that this article has been helpful and informative for you, and that you have learned something new and useful.

            -

            If you are interested in downloading video game music, we recommend that you check out these sites:

            -
              -
            • [KHInsider]: A huge archive of video game music files in various formats, with over 50,000 soundtracks and 600,000 tracks.
            • [Zophar's Domain]: A comprehensive resource of video game music files in obscure or exotic formats, with over 30,000 soundtracks and 300,000 tracks.
            • [Game Music Online]: A professional and reliable source of video game music news, reviews, interviews, and downloads.
            -

            Downloading and listening to video game music can be a fun and rewarding hobby that can enhance your mood, stimulate your imagination, or relax your mind. Why not give it a try and see for yourself? You might discover some amazing gems that you never knew existed!

            -

            FAQs

            -

            What is the best format for video game music?

            -

            There is no definitive answer to this question, as different formats have different advantages and disadvantages depending on the quality, size, compatibility, and availability of the files. However, some of the most common and popular formats for video game music are MP3, WAV, OGG, FLAC, and M4A.
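
            If you end up with a file in one format and need another, you can convert it yourself. The sketch below is a minimal example, assuming the pydub package and FFmpeg are installed (neither is named elsewhere in this article), that turns a lossless FLAC file into a smaller MP3 copy.

```python
# Minimal sketch: convert a FLAC file to MP3.
# Assumes pydub (pip install pydub) and FFmpeg are installed.
from pydub import AudioSegment

def flac_to_mp3(src: str, dst: str, bitrate: str = "320k") -> None:
    track = AudioSegment.from_file(src, format="flac")  # decode the FLAC file
    track.export(dst, format="mp3", bitrate=bitrate)    # re-encode as MP3

flac_to_mp3("game_theme.flac", "game_theme.mp3")  # hypothetical file names
```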

            -

            How can I download video game music legally?

            -

            The best way to download video game music legally is to get it from the official websites or platforms of the games, composers or publishers, as they usually offer their soundtracks as digital downloads or physical CDs, either for free or for a reasonable price. You can also find official soundtracks on platforms like Steam, GOG.com, Bandcamp, iTunes, Spotify, or Amazon Music. Downloading video game music from official sources ensures that you get high-quality and authorized files, and that you support the creators and developers of the games and music.

            -

            How can I download video game music for free?

            -

            One way to download video game music for free is to get it from YouTube or other video-sharing sites, where you can find almost any video game music uploaded by fans, channels, or platforms. You can use a video downloader tool such as Free HD Video Converter Factory or VideoHunter to copy and paste the URL of the video game music you want, choose the format and quality you prefer, and save it to your computer. However, downloading video game music from YouTube may violate the terms of service of YouTube or the rights of the owners of the music, so you should only do it for personal use and not distribute or monetize it.

            -

            Another way to download video game music for free is to get it from free music download sites, such as KHInsider or Zophar's Domain. These are websites that offer a large collection of video game music files for download, without any charge or registration. However, the quality and legality of the files may vary depending on the site and the source, so you should only download video game music from reputable and trustworthy sites, and scan the files for viruses before opening them.

            -

            How can I download video game music from Spotify?

            -

            Spotify is a popular streaming service that offers a lot of video game music on its platform. You can find official soundtracks, playlists, podcasts, and radio stations related to video game music on Spotify. However, Spotify does not allow you to download video game music directly from its app or website, unless you have a premium subscription and want to listen offline on your device.

            -

            If you want to download video game music from Spotify to your computer or other devices, you need to use a third-party tool such as [AudKit Spotify Music Converter] or [TuneFab Spotify Music Converter]. These tools allow you to convert Spotify video game music to MP3, WAV, OGG, FLAC, or M4A formats, and save them to your hard drive. You can also adjust the bitrate, sample rate, channel, and other parameters to improve the quality of the files. However, using these tools may violate the terms of service of Spotify or the rights of the owners of the music, so you should only do it for personal use and not distribute or monetize it.

            -

            How can I download video game music from Nintendo Switch?

            -

            Nintendo Switch is a popular gaming console that has many exclusive games with amazing soundtracks. However, Nintendo Switch does not have a built-in media player or a way to transfer files from the console to other devices. Therefore, downloading video game music from Nintendo Switch is not possible without hacking or modding the console.

            -

            If you want to download video game music from Nintendo Switch games, you need to use a homebrew software such as [NXMusicPlayer] or [Switch Media Host]. These software allow you to play video game music files stored on your SD card or stream them from your PC via Wi-Fi. You can also use these software to rip video game music files from your cartridges or digital downloads. However, using these software may void your warranty or ban your account from Nintendo services so you should only do it at your own risk and discretion.

            -

            How can I download video game music from PS4 or PS5?

            -

            PS4 and PS5 are popular gaming consoles that have many exclusive games with amazing soundtracks. However, PS4 and PS5 do not have a built-in media player or a way to transfer files from the console to other devices. Therefore, downloading video game music from PS4 or PS5 is not possible without hacking or modding the console.

            -

            If you want to download video game music from PS4 or PS5 games, you need to use a homebrew software such as [PS4 Media Player] or [PS5 Media Player]. These software allow you to play video game music files stored on your USB drive or stream them from your PC via Wi-Fi. You can also use these software to rip video game music files from your discs or digital downloads. However, using these software may void your warranty or ban your account from PlayStation services, so you should only do it at your own risk and discretion.

            -

            -

            This is the end of the article. I hope you enjoyed reading it and learned something new and useful. If you have any questions, comments, or feedback, please feel free to contact me. Thank you for your time and attention.

            197e85843d
            -
            -
            \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/Sybase-PowerDesigner-16503982-BEAN-Download-Pc-EXCLUSIVE.md b/spaces/contluForse/HuggingGPT/Sybase-PowerDesigner-16503982-BEAN-Download-Pc-EXCLUSIVE.md deleted file mode 100644 index cde3dfce65a4659daf281a0a03604264740facd8..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/Sybase-PowerDesigner-16503982-BEAN-Download-Pc-EXCLUSIVE.md +++ /dev/null @@ -1,82 +0,0 @@ -## Sybase PowerDesigner 16.5.0.3982 - BEAN Download Pc - - - - - - - - - -**DOWNLOAD ⚙⚙⚙ [https://riszurachen.blogspot.com/?d=2txoIn](https://riszurachen.blogspot.com/?d=2txoIn)** - - - - - - - - - - - - - -# How to Download and Install Sybase PowerDesigner 16.5.0.3982 - BEAN on Your PC - - - -Sybase PowerDesigner is a powerful tool for enterprise data modeling and database design. It helps you align your business and IT processes, implement effective enterprise architecture, and create conceptual data models for your application development lifecycle. It supports various data modeling techniques, such as UML, Business Process Modeling, and market-leading data modeling. It also integrates with leading development environments, such as .NET, Workspace, PowerBuilder, Java, Eclipse, etc. It works with all modern RDBMS. - - - -If you want to download and install Sybase PowerDesigner 16.5.0.3982 - BEAN on your PC, you can follow these simple steps: - - - -1. Go to the Internet Archive website[^1^] and search for "SAP Sybase Power Designer v16.5.0.3982". You will find a link to download the software for free. - -2. Click on the download button and save the file to your preferred location on your PC. The file size is 602.6MB. - -3. Extract the zip file using a software like WinRAR or 7-Zip. You will find a folder named "BEAN" inside it. - -4. Open the folder and run the setup.exe file as administrator. Follow the instructions on the screen to install the software. - -5. After the installation is complete, copy the file named "pdflm15.dll" from the "BEAN" folder and paste it into the installation directory of Sybase PowerDesigner (usually C:\Program Files\Sybase\PowerDesigner 16). - -6. Run the software from the desktop shortcut or the start menu. You will see a license agreement window. Click on "I accept" and then click on "OK". - -7. You have successfully downloaded and installed Sybase PowerDesigner 16.5.0.3982 - BEAN on your PC. You can now use it to create and manage your data models and databases. - - - -I hope this article was helpful for you. If you have any questions or feedback, please leave a comment below. - - - -## What are the main features of Sybase PowerDesigner? - - - -Sybase PowerDesigner is a comprehensive data modeling and enterprise architecture tool that offers many features to help you design, document, and optimize your data and processes. Some of the main features are: - - - -- Data modeling: You can create various types of data models, such as conceptual, logical, physical, dimensional, XML, and object-relational. You can also reverse-engineer existing databases and generate SQL scripts and DDL statements. You can compare and synchronize your models with your databases and ensure consistency and accuracy. - -- Enterprise architecture: You can define and visualize your enterprise architecture using different frameworks, such as Zachman, TOGAF, ArchiMate, etc. You can also create business process models using BPMN notation and link them to your data models. 
You can analyze the impact of changes across your architecture and manage dependencies and traceability. - -- Metadata management: You can import and export metadata from various sources, such as Excel, XML, ERwin, etc. You can also create a metadata repository and share it with other users and tools. You can use the metadata browser to search and navigate your metadata and generate reports and documentation. - -- Collaboration: You can collaborate with other users and stakeholders using the PowerDesigner Web Portal. You can publish your models and architecture online and enable feedback and reviews. You can also integrate with other tools, such as SAP HANA Studio, SAP Solution Manager, etc. - - - -Sybase PowerDesigner is a versatile and powerful tool that can help you manage your data and enterprise architecture effectively. It can help you improve your data quality, governance, compliance, performance, and agility. - - 1b8d091108 - - - - - diff --git a/spaces/contluForse/HuggingGPT/assets/Activation Code For Achtung Panzer Operation Star.30 [TOP].md b/spaces/contluForse/HuggingGPT/assets/Activation Code For Achtung Panzer Operation Star.30 [TOP].md deleted file mode 100644 index 1ffa8030bb2d36e0ff249e41e2708e3f0bc49046..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Activation Code For Achtung Panzer Operation Star.30 [TOP].md +++ /dev/null @@ -1,11 +0,0 @@ -

            Activation Code For Achtung Panzer Operation Star.30


            DOWNLOAD ○○○ https://ssurll.com/2uzx7A



            -
-Steam activation key for Russia, Ukraine, Armenia, Azerbaijan, and Belarus. Graviteam Tactics: Operation Star is the sequel to Achtung Panzer: Kharkiv 1943. This expansion pack includes 4 new maps for one of the most important operations to liberate Ukraine in 1943: Citadel.
-Game features:
-- New missions, as well as a new episode "Attack on the Citadel".
-- New tasks that you can choose yourself.
-- New maps: "Citadel" and "Pass".
-- New squad development system.
-- A new system for using vehicles, as well as new tasks for them.
-- New vehicle models, new maps, and new tank and SPG shooting mechanics. 8a78ff9644
            -
            -
            -

            diff --git a/spaces/contluForse/HuggingGPT/assets/EaseUS Partition Master Professional 9.1.1- Retail .rar A Powerful and User-Friendly Partition Manager.md b/spaces/contluForse/HuggingGPT/assets/EaseUS Partition Master Professional 9.1.1- Retail .rar A Powerful and User-Friendly Partition Manager.md deleted file mode 100644 index e5904169f316fad3630df9ecfc6e0e6fec9a98f4..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/EaseUS Partition Master Professional 9.1.1- Retail .rar A Powerful and User-Friendly Partition Manager.md +++ /dev/null @@ -1,6 +0,0 @@ -

            EaseUS Partition Master Professional 9.1.1- Retail .rar


            Download Zip ✶✶✶ https://ssurll.com/2uzyix



            -
            - aaccfb2cb3
            -
            -
            -

            diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/backbones/unet.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/backbones/unet.py deleted file mode 100644 index 82caa16a94c195c192a2a920fb7bc7e60f0f3ce3..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/backbones/unet.py +++ /dev/null @@ -1,429 +0,0 @@ -import torch.nn as nn -import torch.utils.checkpoint as cp -from annotator.uniformer.mmcv.cnn import (UPSAMPLE_LAYERS, ConvModule, build_activation_layer, - build_norm_layer, constant_init, kaiming_init) -from annotator.uniformer.mmcv.runner import load_checkpoint -from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm - -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES -from ..utils import UpConvBlock - - -class BasicConvBlock(nn.Module): - """Basic convolutional block for UNet. - - This module consists of several plain convolutional layers. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - num_convs (int): Number of convolutional layers. Default: 2. - stride (int): Whether use stride convolution to downsample - the input feature map. If stride=2, it only uses stride convolution - in the first convolutional layer to downsample the input feature - map. Options are 1 or 2. Default: 1. - dilation (int): Whether use dilated convolution to expand the - receptive field. Set dilation rate of each convolutional layer and - the dilation rate of the first convolutional layer is always 1. - Default: 1. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - """ - - def __init__(self, - in_channels, - out_channels, - num_convs=2, - stride=1, - dilation=1, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - dcn=None, - plugins=None): - super(BasicConvBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.with_cp = with_cp - convs = [] - for i in range(num_convs): - convs.append( - ConvModule( - in_channels=in_channels if i == 0 else out_channels, - out_channels=out_channels, - kernel_size=3, - stride=stride if i == 0 else 1, - dilation=1 if i == 0 else dilation, - padding=1 if i == 0 else dilation, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - self.convs = nn.Sequential(*convs) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.convs, x) - else: - out = self.convs(x) - return out - - -@UPSAMPLE_LAYERS.register_module() -class DeconvModule(nn.Module): - """Deconvolution upsample module in decoder for UNet (2X upsample). - - This module uses deconvolution to upsample feature map in the decoder - of UNet. - - Args: - in_channels (int): Number of input channels. 
- out_channels (int): Number of output channels. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - kernel_size (int): Kernel size of the convolutional layer. Default: 4. - """ - - def __init__(self, - in_channels, - out_channels, - with_cp=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - *, - kernel_size=4, - scale_factor=2): - super(DeconvModule, self).__init__() - - assert (kernel_size - scale_factor >= 0) and\ - (kernel_size - scale_factor) % 2 == 0,\ - f'kernel_size should be greater than or equal to scale_factor '\ - f'and (kernel_size - scale_factor) should be even numbers, '\ - f'while the kernel size is {kernel_size} and scale_factor is '\ - f'{scale_factor}.' - - stride = scale_factor - padding = (kernel_size - scale_factor) // 2 - self.with_cp = with_cp - deconv = nn.ConvTranspose2d( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding) - - norm_name, norm = build_norm_layer(norm_cfg, out_channels) - activate = build_activation_layer(act_cfg) - self.deconv_upsamping = nn.Sequential(deconv, norm, activate) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.deconv_upsamping, x) - else: - out = self.deconv_upsamping(x) - return out - - -@UPSAMPLE_LAYERS.register_module() -class InterpConv(nn.Module): - """Interpolation upsample module in decoder for UNet. - - This module uses interpolation to upsample feature map in the decoder - of UNet. It consists of one interpolation upsample layer and one - convolutional layer. It can be one interpolation upsample layer followed - by one convolutional layer (conv_first=False) or one convolutional layer - followed by one interpolation upsample layer (conv_first=True). - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - conv_first (bool): Whether convolutional layer or interpolation - upsample layer first. Default: False. It means interpolation - upsample layer followed by one convolutional layer. - kernel_size (int): Kernel size of the convolutional layer. Default: 1. - stride (int): Stride of the convolutional layer. Default: 1. - padding (int): Padding of the convolutional layer. Default: 1. - upsample_cfg (dict): Interpolation config of the upsample layer. - Default: dict( - scale_factor=2, mode='bilinear', align_corners=False). 
- """ - - def __init__(self, - in_channels, - out_channels, - with_cp=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - *, - conv_cfg=None, - conv_first=False, - kernel_size=1, - stride=1, - padding=0, - upsample_cfg=dict( - scale_factor=2, mode='bilinear', align_corners=False)): - super(InterpConv, self).__init__() - - self.with_cp = with_cp - conv = ConvModule( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - upsample = nn.Upsample(**upsample_cfg) - if conv_first: - self.interp_upsample = nn.Sequential(conv, upsample) - else: - self.interp_upsample = nn.Sequential(upsample, conv) - - def forward(self, x): - """Forward function.""" - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(self.interp_upsample, x) - else: - out = self.interp_upsample(x) - return out - - -@BACKBONES.register_module() -class UNet(nn.Module): - """UNet backbone. - U-Net: Convolutional Networks for Biomedical Image Segmentation. - https://arxiv.org/pdf/1505.04597.pdf - - Args: - in_channels (int): Number of input image channels. Default" 3. - base_channels (int): Number of base channels of each stage. - The output channels of the first stage. Default: 64. - num_stages (int): Number of stages in encoder, normally 5. Default: 5. - strides (Sequence[int 1 | 2]): Strides of each stage in encoder. - len(strides) is equal to num_stages. Normally the stride of the - first stage in encoder is 1. If strides[i]=2, it uses stride - convolution to downsample in the correspondence encoder stage. - Default: (1, 1, 1, 1, 1). - enc_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondence encoder stage. - Default: (2, 2, 2, 2, 2). - dec_num_convs (Sequence[int]): Number of convolutional layers in the - convolution block of the correspondence decoder stage. - Default: (2, 2, 2, 2). - downsamples (Sequence[int]): Whether use MaxPool to downsample the - feature map after the first stage of encoder - (stages: [1, num_stages)). If the correspondence encoder stage use - stride convolution (strides[i]=2), it will never use MaxPool to - downsample, even downsamples[i-1]=True. - Default: (True, True, True, True). - enc_dilations (Sequence[int]): Dilation rate of each stage in encoder. - Default: (1, 1, 1, 1, 1). - dec_dilations (Sequence[int]): Dilation rate of each stage in decoder. - Default: (1, 1, 1, 1). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - conv_cfg (dict | None): Config dict for convolution layer. - Default: None. - norm_cfg (dict | None): Config dict for normalization layer. - Default: dict(type='BN'). - act_cfg (dict | None): Config dict for activation layer in ConvModule. - Default: dict(type='ReLU'). - upsample_cfg (dict): The upsample config of the upsample module in - decoder. Default: dict(type='InterpConv'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - dcn (bool): Use deformable convolution in convolutional layer or not. - Default: None. - plugins (dict): plugins for convolutional layers. Default: None. - - Notice: - The input image size should be divisible by the whole downsample rate - of the encoder. More detail of the whole downsample rate can be found - in UNet._check_input_divisible. 
- - """ - - def __init__(self, - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False, - dcn=None, - plugins=None): - super(UNet, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert len(strides) == num_stages, \ - 'The length of strides should be equal to num_stages, '\ - f'while the strides is {strides}, the length of '\ - f'strides is {len(strides)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_num_convs) == num_stages, \ - 'The length of enc_num_convs should be equal to num_stages, '\ - f'while the enc_num_convs is {enc_num_convs}, the length of '\ - f'enc_num_convs is {len(enc_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(dec_num_convs) == (num_stages-1), \ - 'The length of dec_num_convs should be equal to (num_stages-1), '\ - f'while the dec_num_convs is {dec_num_convs}, the length of '\ - f'dec_num_convs is {len(dec_num_convs)}, and the num_stages is '\ - f'{num_stages}.' - assert len(downsamples) == (num_stages-1), \ - 'The length of downsamples should be equal to (num_stages-1), '\ - f'while the downsamples is {downsamples}, the length of '\ - f'downsamples is {len(downsamples)}, and the num_stages is '\ - f'{num_stages}.' - assert len(enc_dilations) == num_stages, \ - 'The length of enc_dilations should be equal to num_stages, '\ - f'while the enc_dilations is {enc_dilations}, the length of '\ - f'enc_dilations is {len(enc_dilations)}, and the num_stages is '\ - f'{num_stages}.' - assert len(dec_dilations) == (num_stages-1), \ - 'The length of dec_dilations should be equal to (num_stages-1), '\ - f'while the dec_dilations is {dec_dilations}, the length of '\ - f'dec_dilations is {len(dec_dilations)}, and the num_stages is '\ - f'{num_stages}.' 
- self.num_stages = num_stages - self.strides = strides - self.downsamples = downsamples - self.norm_eval = norm_eval - self.base_channels = base_channels - - self.encoder = nn.ModuleList() - self.decoder = nn.ModuleList() - - for i in range(num_stages): - enc_conv_block = [] - if i != 0: - if strides[i] == 1 and downsamples[i - 1]: - enc_conv_block.append(nn.MaxPool2d(kernel_size=2)) - upsample = (strides[i] != 1 or downsamples[i - 1]) - self.decoder.append( - UpConvBlock( - conv_block=BasicConvBlock, - in_channels=base_channels * 2**i, - skip_channels=base_channels * 2**(i - 1), - out_channels=base_channels * 2**(i - 1), - num_convs=dec_num_convs[i - 1], - stride=1, - dilation=dec_dilations[i - 1], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - upsample_cfg=upsample_cfg if upsample else None, - dcn=None, - plugins=None)) - - enc_conv_block.append( - BasicConvBlock( - in_channels=in_channels, - out_channels=base_channels * 2**i, - num_convs=enc_num_convs[i], - stride=strides[i], - dilation=enc_dilations[i], - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - dcn=None, - plugins=None)) - self.encoder.append((nn.Sequential(*enc_conv_block))) - in_channels = base_channels * 2**i - - def forward(self, x): - self._check_input_divisible(x) - enc_outs = [] - for enc in self.encoder: - x = enc(x) - enc_outs.append(x) - dec_outs = [x] - for i in reversed(range(len(self.decoder))): - x = self.decoder[i](enc_outs[i], x) - dec_outs.append(x) - - return dec_outs - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(UNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - def _check_input_divisible(self, x): - h, w = x.shape[-2:] - whole_downsample_rate = 1 - for i in range(1, self.num_stages): - if self.strides[i] == 2 or self.downsamples[i - 1]: - whole_downsample_rate *= 2 - assert (h % whole_downsample_rate == 0) \ - and (w % whole_downsample_rate == 0),\ - f'The input image size {(h, w)} should be divisible by the whole '\ - f'downsample rate {whole_downsample_rate}, when num_stages is '\ - f'{self.num_stages}, strides is {self.strides}, and downsamples '\ - f'is {self.downsamples}.' - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') diff --git a/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/__init__.py b/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/utils/paste_pic.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/utils/paste_pic.py deleted file mode 100644 index a722bdda925256b4109d21c4a5384ab262d0b6ed..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/utils/paste_pic.py +++ /dev/null @@ -1,66 +0,0 @@ -import cv2, os -import numpy as np -from tqdm import tqdm -import uuid - -from Demo_TFR_Pirenderer.src.utils.videoio import save_video_with_watermark - -def paste_pic(video_path, pic_path, crop_info, new_audio_path, full_video_path): - - if not os.path.isfile(pic_path): - raise ValueError('pic_path must be a valid path to video/image file') - elif pic_path.split('.')[-1] in ['jpg', 'png', 'jpeg']: - # loader for first frame - full_img = cv2.imread(pic_path) - else: - # loader for videos - video_stream = cv2.VideoCapture(pic_path) - fps = video_stream.get(cv2.CAP_PROP_FPS) - full_frames = [] - while 1: - still_reading, frame = video_stream.read() - if not still_reading: - video_stream.release() - break - break - full_img = frame - frame_h = full_img.shape[0] - frame_w = full_img.shape[1] - - video_stream = cv2.VideoCapture(video_path) - fps = video_stream.get(cv2.CAP_PROP_FPS) - crop_frames = [] - while 1: - still_reading, frame = video_stream.read() - if not still_reading: - video_stream.release() - break - crop_frames.append(frame) - - if len(crop_info) != 3: - print("you didn't crop the image") - return - else: - r_w, r_h = crop_info[0] - clx, cly, crx, cry = crop_info[1] - lx, ly, rx, ry = crop_info[2] - lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry) - # oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx - # oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx - oy1, oy2, ox1, ox2 = cly, cry, clx, crx - - - tmp_path = str(uuid.uuid4())+'.mp4' - out_tmp = cv2.VideoWriter(tmp_path, cv2.VideoWriter_fourcc(*'MP4V'), fps, (frame_w, frame_h)) - for crop_frame in tqdm(crop_frames, 'seamlessClone:'): - p = cv2.resize(crop_frame.astype(np.uint8), (crx-clx, cry - cly)) - - mask = 255*np.ones(p.shape, p.dtype) - location = ((ox1+ox2) // 2, (oy1+oy2) // 2) - gen_img = cv2.seamlessClone(p, full_img, mask, location, cv2.NORMAL_CLONE) - out_tmp.write(gen_img) - - out_tmp.release() - - save_video_with_watermark(tmp_path, new_audio_path, full_video_path, watermark=False) - os.remove(tmp_path) diff --git a/spaces/dakaiye/dky_xuexi/request_llm/edge_gpt_free.py b/spaces/dakaiye/dky_xuexi/request_llm/edge_gpt_free.py deleted file mode 100644 index ef6187379c470b0f325d50d7642cfc95b933f1ef..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/request_llm/edge_gpt_free.py +++ /dev/null @@ -1,1112 +0,0 @@ -""" -======================================================================== -第一部分:来自EdgeGPT.py 
-https://github.com/acheong08/EdgeGPT -======================================================================== -""" -""" -Main.py -""" - -import argparse -import asyncio -import json -import os -import random -import re -import ssl -import sys -import time -import uuid -from enum import Enum -from pathlib import Path -from typing import Generator -from typing import Literal -from typing import Optional -from typing import Union - -import aiohttp -import certifi -import httpx -from prompt_toolkit import PromptSession -from prompt_toolkit.auto_suggest import AutoSuggestFromHistory -from prompt_toolkit.completion import WordCompleter -from prompt_toolkit.history import InMemoryHistory -from prompt_toolkit.key_binding import KeyBindings -from rich.live import Live -from rich.markdown import Markdown - -DELIMITER = "\x1e" - - -# Generate random IP between range 13.104.0.0/14 -FORWARDED_IP = ( - f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}" -) - -HEADERS = { - "accept": "application/json", - "accept-language": "en-US,en;q=0.9", - "content-type": "application/json", - "sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"', - "sec-ch-ua-arch": '"x86"', - "sec-ch-ua-bitness": '"64"', - "sec-ch-ua-full-version": '"109.0.1518.78"', - "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-model": "", - "sec-ch-ua-platform": '"Windows"', - "sec-ch-ua-platform-version": '"15.0.0"', - "sec-fetch-dest": "empty", - "sec-fetch-mode": "cors", - "sec-fetch-site": "same-origin", - "x-ms-client-request-id": str(uuid.uuid4()), - "x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32", - "Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx", - "Referrer-Policy": "origin-when-cross-origin", - "x-forwarded-for": FORWARDED_IP, -} - -HEADERS_INIT_CONVER = { - "authority": "edgeservices.bing.com", - "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7", - "accept-language": "en-US,en;q=0.9", - "cache-control": "max-age=0", - "sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"', - "sec-ch-ua-arch": '"x86"', - "sec-ch-ua-bitness": '"64"', - "sec-ch-ua-full-version": '"110.0.1587.69"', - "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-model": '""', - "sec-ch-ua-platform": '"Windows"', - "sec-ch-ua-platform-version": '"15.0.0"', - "sec-fetch-dest": "document", - "sec-fetch-mode": "navigate", - "sec-fetch-site": "none", - "sec-fetch-user": "?1", - "upgrade-insecure-requests": "1", - "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69", - "x-edge-shopping-flag": "1", - "x-forwarded-for": FORWARDED_IP, -} - -ssl_context = ssl.create_default_context() -ssl_context.load_verify_locations(certifi.where()) - - -class NotAllowedToAccess(Exception): - pass - - -class ConversationStyle(Enum): - creative = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "h3imaginative", - "travelansgnd", - "dv3sugg", - "clgalileo", - "gencontentv3", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - 
"nodlcpcwrite", - "travelansgnd", - "nojbfedge", - ] - balanced = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "galileo", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - "nodlcpcwrite", - "travelansgnd", - "nojbfedge", - ] - precise = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "galileo", - "dv3sugg", - "responseos", - "e2ecachewrite", - "cachewriteext", - "nodlcpcwrite", - "travelansgnd", - "h3precise", - "clgalileo", - "nojbfedge", - ] - - -CONVERSATION_STYLE_TYPE = Optional[ - Union[ConversationStyle, Literal["creative", "balanced", "precise"]] -] - - -def _append_identifier(msg: dict) -> str: - """ - Appends special character to end of message to identify end of message - """ - # Convert dict to json string - return json.dumps(msg, ensure_ascii=False) + DELIMITER - - -def _get_ran_hex(length: int = 32) -> str: - """ - Returns random hex string - """ - return "".join(random.choice("0123456789abcdef") for _ in range(length)) - - -class _ChatHubRequest: - """ - Request object for ChatHub - """ - - def __init__( - self, - conversation_signature: str, - client_id: str, - conversation_id: str, - invocation_id: int = 0, - ) -> None: - self.struct: dict = {} - - self.client_id: str = client_id - self.conversation_id: str = conversation_id - self.conversation_signature: str = conversation_signature - self.invocation_id: int = invocation_id - - def update( - self, - prompt: str, - conversation_style: CONVERSATION_STYLE_TYPE, - options = None, - webpage_context = None, - search_result = False, - ) -> None: - """ - Updates request object - """ - if options is None: - options = [ - "deepleo", - "enable_debug_commands", - "disable_emoji_spoken_text", - "enablemm", - ] - if conversation_style: - if not isinstance(conversation_style, ConversationStyle): - conversation_style = getattr(ConversationStyle, conversation_style) - options = conversation_style.value - self.struct = { - "arguments": [ - { - "source": "cib", - "optionsSets": options, - "allowedMessageTypes": [ - "Chat", - "Disengaged", - "AdsQuery", - "SemanticSerp", - "GenerateContentQuery", - "SearchQuery", - ], - "sliceIds": [ - "chk1cf", - "nopreloadsscf", - "winlongmsg2tf", - "perfimpcomb", - "sugdivdis", - "sydnoinputt", - "wpcssopt", - "wintone2tf", - "0404sydicnbs0", - "405suggbs0", - "scctl", - "330uaugs0", - "0329resp", - "udscahrfon", - "udstrblm5", - "404e2ewrt", - "408nodedups0", - "403tvlansgnd", - ], - "traceId": _get_ran_hex(32), - "isStartOfSession": self.invocation_id == 0, - "message": { - "author": "user", - "inputMethod": "Keyboard", - "text": prompt, - "messageType": "Chat", - }, - "conversationSignature": self.conversation_signature, - "participant": { - "id": self.client_id, - }, - "conversationId": self.conversation_id, - }, - ], - "invocationId": str(self.invocation_id), - "target": "chat", - "type": 4, - } - if search_result: - have_search_result = [ - "InternalSearchQuery", - "InternalSearchResult", - "InternalLoaderMessage", - "RenderCardRequest", - ] - self.struct["arguments"][0]["allowedMessageTypes"] += have_search_result - if webpage_context: - self.struct["arguments"][0]["previousMessages"] = [ - { - "author": "user", - "description": webpage_context, - "contextType": "WebPage", - "messageType": "Context", - "messageId": "discover-web--page-ping-mriduna-----", - }, - ] - self.invocation_id += 1 - - -class _Conversation: - """ - 
Conversation API - """ - - def __init__( - self, - proxy = None, - async_mode = False, - cookies = None, - ) -> None: - if async_mode: - return - self.struct: dict = { - "conversationId": None, - "clientId": None, - "conversationSignature": None, - "result": {"value": "Success", "message": None}, - } - self.proxy = proxy - proxy = ( - proxy - or os.environ.get("all_proxy") - or os.environ.get("ALL_PROXY") - or os.environ.get("https_proxy") - or os.environ.get("HTTPS_PROXY") - or None - ) - if proxy is not None and proxy.startswith("socks5h://"): - proxy = "socks5://" + proxy[len("socks5h://") :] - self.session = httpx.Client( - proxies=proxy, - timeout=30, - headers=HEADERS_INIT_CONVER, - ) - if cookies: - for cookie in cookies: - self.session.cookies.set(cookie["name"], cookie["value"]) - # Send GET request - response = self.session.get( - url=os.environ.get("BING_PROXY_URL") - or "https://edgeservices.bing.com/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - response = self.session.get( - "https://edge.churchless.tech/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Authentication failed") - try: - self.struct = response.json() - except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc: - raise Exception( - "Authentication failed. You have not been accepted into the beta.", - ) from exc - if self.struct["result"]["value"] == "UnauthorizedRequest": - raise NotAllowedToAccess(self.struct["result"]["message"]) - - @staticmethod - async def create( - proxy = None, - cookies = None, - ): - self = _Conversation(async_mode=True) - self.struct = { - "conversationId": None, - "clientId": None, - "conversationSignature": None, - "result": {"value": "Success", "message": None}, - } - self.proxy = proxy - proxy = ( - proxy - or os.environ.get("all_proxy") - or os.environ.get("ALL_PROXY") - or os.environ.get("https_proxy") - or os.environ.get("HTTPS_PROXY") - or None - ) - if proxy is not None and proxy.startswith("socks5h://"): - proxy = "socks5://" + proxy[len("socks5h://") :] - transport = httpx.AsyncHTTPTransport(retries=10) - # Convert cookie format to httpx format - formatted_cookies = None - if cookies: - formatted_cookies = httpx.Cookies() - for cookie in cookies: - formatted_cookies.set(cookie["name"], cookie["value"]) - async with httpx.AsyncClient( - proxies=proxy, - timeout=30, - headers=HEADERS_INIT_CONVER, - transport=transport, - cookies=formatted_cookies, - ) as client: - # Send GET request - response = await client.get( - url=os.environ.get("BING_PROXY_URL") - or "https://edgeservices.bing.com/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - response = await client.get( - "https://edge.churchless.tech/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Authentication failed") - try: - self.struct = response.json() - except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc: - raise Exception( - "Authentication failed. 
You have not been accepted into the beta.", - ) from exc - if self.struct["result"]["value"] == "UnauthorizedRequest": - raise NotAllowedToAccess(self.struct["result"]["message"]) - return self - - -class _ChatHub: - """ - Chat API - """ - - def __init__( - self, - conversation: _Conversation, - proxy = None, - cookies = None, - ) -> None: - self.session = None - self.wss = None - self.request: _ChatHubRequest - self.loop: bool - self.task: asyncio.Task - self.request = _ChatHubRequest( - conversation_signature=conversation.struct["conversationSignature"], - client_id=conversation.struct["clientId"], - conversation_id=conversation.struct["conversationId"], - ) - self.cookies = cookies - self.proxy: str = proxy - - async def ask_stream( - self, - prompt: str, - wss_link: str, - conversation_style: CONVERSATION_STYLE_TYPE = None, - raw: bool = False, - options: dict = None, - webpage_context = None, - search_result: bool = False, - ) -> Generator[str, None, None]: - """ - Ask a question to the bot - """ - timeout = aiohttp.ClientTimeout(total=30) - self.session = aiohttp.ClientSession(timeout=timeout) - - if self.wss and not self.wss.closed: - await self.wss.close() - # Check if websocket is closed - self.wss = await self.session.ws_connect( - wss_link, - headers=HEADERS, - ssl=ssl_context, - proxy=self.proxy, - autoping=False, - ) - await self._initial_handshake() - if self.request.invocation_id == 0: - # Construct a ChatHub request - self.request.update( - prompt=prompt, - conversation_style=conversation_style, - options=options, - webpage_context=webpage_context, - search_result=search_result, - ) - else: - async with httpx.AsyncClient() as client: - response = await client.post( - "https://sydney.bing.com/sydney/UpdateConversation/", - json={ - "messages": [ - { - "author": "user", - "description": webpage_context, - "contextType": "WebPage", - "messageType": "Context", - }, - ], - "conversationId": self.request.conversation_id, - "source": "cib", - "traceId": _get_ran_hex(32), - "participant": {"id": self.request.client_id}, - "conversationSignature": self.request.conversation_signature, - }, - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Update web page context failed") - # Construct a ChatHub request - self.request.update( - prompt=prompt, - conversation_style=conversation_style, - options=options, - ) - # Send request - await self.wss.send_str(_append_identifier(self.request.struct)) - final = False - draw = False - resp_txt = "" - result_text = "" - resp_txt_no_link = "" - while not final: - msg = await self.wss.receive() - objects = msg.data.split(DELIMITER) - for obj in objects: - if obj is None or not obj: - continue - response = json.loads(obj) - if response.get("type") != 2 and raw: - yield False, response - elif response.get("type") == 1 and response["arguments"][0].get( - "messages", - ): - if not draw: - if ( - response["arguments"][0]["messages"][0].get("messageType") - == "GenerateContentQuery" - ): - async with ImageGenAsync("", True) as image_generator: - images = await image_generator.get_images( - response["arguments"][0]["messages"][0]["text"], - ) - for i, image in enumerate(images): - resp_txt = resp_txt + f"\n![image{i}]({image})" - draw = True - if ( - response["arguments"][0]["messages"][0]["contentOrigin"] - != "Apology" - ) and not draw: - resp_txt = result_text + response["arguments"][0][ - "messages" - ][0]["adaptiveCards"][0]["body"][0].get("text", "") - 
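The receive loop above splits every websocket payload on `DELIMITER` and JSON-decodes each frame, the mirror image of `_append_identifier` earlier in this module. A minimal, self-contained sketch of that framing, assuming `DELIMITER` is the ASCII record separator `"\x1e"` defined near the top of the file:

```python
import json

DELIMITER = "\x1e"  # assumption: the record-separator constant used by this module

def append_identifier(msg: dict) -> str:
    # One JSON document per frame, terminated by the record separator.
    return json.dumps(msg, ensure_ascii=False) + DELIMITER

# Two frames packed into a single websocket payload...
raw = append_identifier({"protocol": "json", "version": 1}) + append_identifier({"type": 6})
# ...are recovered by splitting on the delimiter and dropping the empty tail.
frames = [json.loads(obj) for obj in raw.split(DELIMITER) if obj]
print(frames)  # [{'protocol': 'json', 'version': 1}, {'type': 6}]
```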
resp_txt_no_link = result_text + response["arguments"][0][ - "messages" - ][0].get("text", "") - if response["arguments"][0]["messages"][0].get( - "messageType", - ): - resp_txt = ( - resp_txt - + response["arguments"][0]["messages"][0][ - "adaptiveCards" - ][0]["body"][0]["inlines"][0].get("text") - + "\n" - ) - result_text = ( - result_text - + response["arguments"][0]["messages"][0][ - "adaptiveCards" - ][0]["body"][0]["inlines"][0].get("text") - + "\n" - ) - yield False, resp_txt - - elif response.get("type") == 2: - if response["item"]["result"].get("error"): - await self.close() - raise Exception( - f"{response['item']['result']['value']}: {response['item']['result']['message']}", - ) - if draw: - cache = response["item"]["messages"][1]["adaptiveCards"][0][ - "body" - ][0]["text"] - response["item"]["messages"][1]["adaptiveCards"][0]["body"][0][ - "text" - ] = (cache + resp_txt) - if ( - response["item"]["messages"][-1]["contentOrigin"] == "Apology" - and resp_txt - ): - response["item"]["messages"][-1]["text"] = resp_txt_no_link - response["item"]["messages"][-1]["adaptiveCards"][0]["body"][0][ - "text" - ] = resp_txt - print( - "Preserved the message from being deleted", - file=sys.stderr, - ) - final = True - await self.close() - yield True, response - - async def _initial_handshake(self) -> None: - await self.wss.send_str(_append_identifier({"protocol": "json", "version": 1})) - await self.wss.receive() - - async def close(self) -> None: - """ - Close the connection - """ - if self.wss and not self.wss.closed: - await self.wss.close() - if self.session and not self.session.closed: - await self.session.close() - - -class Chatbot: - """ - Combines everything to make it seamless - """ - - def __init__( - self, - proxy = None, - cookies = None, - ) -> None: - self.proxy = proxy - self.chat_hub: _ChatHub = _ChatHub( - _Conversation(self.proxy, cookies=cookies), - proxy=self.proxy, - cookies=cookies, - ) - - @staticmethod - async def create( - proxy = None, - cookies = None, - ): - self = Chatbot.__new__(Chatbot) - self.proxy = proxy - self.chat_hub = _ChatHub( - await _Conversation.create(self.proxy, cookies=cookies), - proxy=self.proxy, - cookies=cookies, - ) - return self - - async def ask( - self, - prompt: str, - wss_link: str = "wss://sydney.bing.com/sydney/ChatHub", - conversation_style: CONVERSATION_STYLE_TYPE = None, - options: dict = None, - webpage_context = None, - search_result: bool = False, - ) -> dict: - """ - Ask a question to the bot - """ - async for final, response in self.chat_hub.ask_stream( - prompt=prompt, - conversation_style=conversation_style, - wss_link=wss_link, - options=options, - webpage_context=webpage_context, - search_result=search_result, - ): - if final: - return response - await self.chat_hub.wss.close() - return {} - - async def ask_stream( - self, - prompt: str, - wss_link: str = "wss://sydney.bing.com/sydney/ChatHub", - conversation_style: CONVERSATION_STYLE_TYPE = None, - raw: bool = False, - options: dict = None, - webpage_context = None, - search_result: bool = False, - ) -> Generator[str, None, None]: - """ - Ask a question to the bot - """ - async for response in self.chat_hub.ask_stream( - prompt=prompt, - conversation_style=conversation_style, - wss_link=wss_link, - raw=raw, - options=options, - webpage_context=webpage_context, - search_result=search_result, - ): - yield response - - async def close(self) -> None: - """ - Close the connection - """ - await self.chat_hub.close() - - async def reset(self) -> None: - """ - Reset the 
conversation - """ - await self.close() - self.chat_hub = _ChatHub( - await _Conversation.create(self.proxy), - proxy=self.proxy, - cookies=self.chat_hub.cookies, - ) - - -async def _get_input_async( - session: PromptSession = None, - completer: WordCompleter = None, -) -> str: - """ - Multiline input function. - """ - return await session.prompt_async( - completer=completer, - multiline=True, - auto_suggest=AutoSuggestFromHistory(), - ) - - -def _create_session() -> PromptSession: - kb = KeyBindings() - - @kb.add("enter") - def _(event): - buffer_text = event.current_buffer.text - if buffer_text.startswith("!"): - event.current_buffer.validate_and_handle() - else: - event.current_buffer.insert_text("\n") - - @kb.add("escape") - def _(event): - if event.current_buffer.complete_state: - # event.current_buffer.cancel_completion() - event.current_buffer.text = "" - - return PromptSession(key_bindings=kb, history=InMemoryHistory()) - - -def _create_completer(commands: list, pattern_str: str = "$"): - return WordCompleter(words=commands, pattern=re.compile(pattern_str)) - - -async def async_main(args: argparse.Namespace) -> None: - """ - Main function - """ - print("Initializing...") - print("Enter `alt+enter` or `escape+enter` to send a message") - # Read and parse cookies - cookies = None - if args.cookie_file: - cookies = json.loads(open(args.cookie_file, encoding="utf-8").read()) - bot = await Chatbot.create(proxy=args.proxy, cookies=cookies) - session = _create_session() - completer = _create_completer(["!help", "!exit", "!reset"]) - initial_prompt = args.prompt - - while True: - print("\nYou:") - if initial_prompt: - question = initial_prompt - print(question) - initial_prompt = None - else: - question = ( - input() - if args.enter_once - else await _get_input_async(session=session, completer=completer) - ) - print() - if question == "!exit": - break - if question == "!help": - print( - """ - !help - Show this help message - !exit - Exit the program - !reset - Reset the conversation - """, - ) - continue - if question == "!reset": - await bot.reset() - continue - print("Bot:") - if args.no_stream: - print( - ( - await bot.ask( - prompt=question, - conversation_style=args.style, - wss_link=args.wss_link, - ) - )["item"]["messages"][1]["adaptiveCards"][0]["body"][0]["text"], - ) - else: - wrote = 0 - if args.rich: - md = Markdown("") - with Live(md, auto_refresh=False) as live: - async for final, response in bot.ask_stream( - prompt=question, - conversation_style=args.style, - wss_link=args.wss_link, - ): - if not final: - if wrote > len(response): - print(md) - print(Markdown("***Bing revoked the response.***")) - wrote = len(response) - md = Markdown(response) - live.update(md, refresh=True) - else: - async for final, response in bot.ask_stream( - prompt=question, - conversation_style=args.style, - wss_link=args.wss_link, - ): - if not final: - if not wrote: - print(response, end="", flush=True) - else: - print(response[wrote:], end="", flush=True) - wrote = len(response) - print() - await bot.close() - - -def main() -> None: - print( - """ - EdgeGPT - A demo of reverse engineering the Bing GPT chatbot - Repo: github.com/acheong08/EdgeGPT - By: Antonio Cheong - - !help for help - - Type !exit to exit - """, - ) - parser = argparse.ArgumentParser() - parser.add_argument("--enter-once", action="store_true") - parser.add_argument("--no-stream", action="store_true") - parser.add_argument("--rich", action="store_true") - parser.add_argument( - "--proxy", - help="Proxy URL (e.g. 
socks5://127.0.0.1:1080)", - type=str, - ) - parser.add_argument( - "--wss-link", - help="WSS URL(e.g. wss://sydney.bing.com/sydney/ChatHub)", - type=str, - default="wss://sydney.bing.com/sydney/ChatHub", - ) - parser.add_argument( - "--style", - choices=["creative", "balanced", "precise"], - default="balanced", - ) - parser.add_argument( - "--prompt", - type=str, - default="", - required=False, - help="prompt to start with", - ) - parser.add_argument( - "--cookie-file", - type=str, - default="", - required=False, - help="path to cookie file", - ) - args = parser.parse_args() - asyncio.run(async_main(args)) - - -class Cookie: - """ - Convenience class for Bing Cookie files, data, and configuration. This Class - is updated dynamically by the Query class to allow cycling through >1 - cookie/credentials file e.g. when daily request limits (current 200 per - account per day) are exceeded. - """ - - current_file_index = 0 - dirpath = Path("./").resolve() - search_pattern = "bing_cookies_*.json" - ignore_files = set() - - @classmethod - def fetch_default(cls, path=None): - from selenium import webdriver - from selenium.webdriver.common.by import By - - driver = webdriver.Edge() - driver.get("https://bing.com/chat") - time.sleep(5) - xpath = '//button[@id="bnp_btn_accept"]' - driver.find_element(By.XPATH, xpath).click() - time.sleep(2) - xpath = '//a[@id="codexPrimaryButton"]' - driver.find_element(By.XPATH, xpath).click() - if path is None: - path = Path("./bing_cookies__default.json") - # Double underscore ensures this file is first when sorted - cookies = driver.get_cookies() - Path(path).write_text(json.dumps(cookies, indent=4), encoding="utf-8") - # Path again in case supplied path is: str - print(f"Cookies saved to: {path}") - driver.quit() - - @classmethod - def files(cls): - """Return a sorted list of all cookie files matching .search_pattern""" - all_files = set(cls.dirpath.glob(cls.search_pattern)) - return sorted(list(all_files - cls.ignore_files)) - - @classmethod - def import_data(cls): - """ - Read the active cookie file and populate the following attributes: - - .current_filepath - .current_data - .image_token - """ - try: - cls.current_filepath = cls.files()[cls.current_file_index] - except IndexError: - print( - "> Please set Cookie.current_filepath to a valid cookie file, then run Cookie.import_data()", - ) - return - print(f"> Importing cookies from: {cls.current_filepath.name}") - with open(cls.current_filepath, encoding="utf-8") as file: - cls.current_data = json.load(file) - cls.image_token = [x for x in cls.current_data if x.get("name") == "_U"] - cls.image_token = cls.image_token[0].get("value") - - @classmethod - def import_next(cls): - """ - Cycle through to the next cookies file. Import it. Mark the previous - file to be ignored for the remainder of the current session. - """ - cls.ignore_files.add(cls.current_filepath) - if Cookie.current_file_index >= len(cls.files()): - Cookie.current_file_index = 0 - Cookie.import_data() - - -class Query: - """ - A convenience class that wraps around EdgeGPT.Chatbot to encapsulate input, - config, and output all together. 
Relies on Cookie class for authentication - """ - - def __init__( - self, - prompt, - style="precise", - content_type="text", - cookie_file=0, - echo=True, - echo_prompt=False, - ): - """ - Arguments: - - prompt: Text to enter into Bing Chat - style: creative, balanced, or precise - content_type: "text" for Bing Chat; "image" for Dall-e - cookie_file: Path, filepath string, or index (int) to list of cookie paths - echo: Print something to confirm request made - echo_prompt: Print confirmation of the evaluated prompt - """ - self.index = [] - self.request_count = {} - self.image_dirpath = Path("./").resolve() - Cookie.import_data() - self.index += [self] - self.prompt = prompt - files = Cookie.files() - if isinstance(cookie_file, int): - index = cookie_file if cookie_file < len(files) else 0 - else: - if not isinstance(cookie_file, (str, Path)): - message = "'cookie_file' must be an int, str, or Path object" - raise TypeError(message) - cookie_file = Path(cookie_file) - if cookie_file in files(): # Supplied filepath IS in Cookie.dirpath - index = files.index(cookie_file) - else: # Supplied filepath is NOT in Cookie.dirpath - if cookie_file.is_file(): - Cookie.dirpath = cookie_file.parent.resolve() - if cookie_file.is_dir(): - Cookie.dirpath = cookie_file.resolve() - index = 0 - Cookie.current_file_index = index - if content_type == "text": - self.style = style - self.log_and_send_query(echo, echo_prompt) - if content_type == "image": - self.create_image() - - def log_and_send_query(self, echo, echo_prompt): - self.response = asyncio.run(self.send_to_bing(echo, echo_prompt)) - name = str(Cookie.current_filepath.name) - if not self.request_count.get(name): - self.request_count[name] = 1 - else: - self.request_count[name] += 1 - - def create_image(self): - image_generator = ImageGen(Cookie.image_token) - image_generator.save_images( - image_generator.get_images(self.prompt), - output_dir=self.image_dirpath, - ) - - async def send_to_bing(self, echo=True, echo_prompt=False): - """Creat, submit, then close a Chatbot instance. Return the response""" - retries = len(Cookie.files()) - while retries: - try: - bot = await Chatbot.create() - if echo_prompt: - print(f"> {self.prompt=}") - if echo: - print("> Waiting for response...") - if self.style.lower() not in "creative balanced precise".split(): - self.style = "precise" - response = await bot.ask( - prompt=self.prompt, - conversation_style=getattr(ConversationStyle, self.style), - # wss_link="wss://sydney.bing.com/sydney/ChatHub" - # What other values can this parameter take? 
It seems to be optional - ) - return response - except KeyError: - print( - f"> KeyError [{Cookie.current_filepath.name} may have exceeded the daily limit]", - ) - Cookie.import_next() - retries -= 1 - finally: - await bot.close() - - @property - def output(self): - """The response from a completed Chatbot request""" - return self.response["item"]["messages"][1]["text"] - - @property - def sources(self): - """The source names and details parsed from a completed Chatbot request""" - return self.response["item"]["messages"][1]["sourceAttributions"] - - @property - def sources_dict(self): - """The source names and details as a dictionary""" - sources_dict = {} - name = "providerDisplayName" - url = "seeMoreUrl" - for source in self.sources: - if name in source.keys() and url in source.keys(): - sources_dict[source[name]] = source[url] - else: - continue - return sources_dict - - @property - def code(self): - """Extract and join any snippets of Python code in the response""" - code_blocks = self.output.split("```")[1:-1:2] - code_blocks = ["\n".join(x.splitlines()[1:]) for x in code_blocks] - return "\n\n".join(code_blocks) - - @property - def languages(self): - """Extract all programming languages given in code blocks""" - code_blocks = self.output.split("```")[1:-1:2] - return {x.splitlines()[0] for x in code_blocks} - - @property - def suggestions(self): - """Follow-on questions suggested by the Chatbot""" - return [ - x["text"] - for x in self.response["item"]["messages"][1]["suggestedResponses"] - ] - - def __repr__(self): - return f"" - - def __str__(self): - return self.output - - -class ImageQuery(Query): - def __init__(self, prompt, **kwargs): - kwargs.update({"content_type": "image"}) - super().__init__(prompt, **kwargs) - - def __repr__(self): - return f"" - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/danielcwq/chat-your-data-trial/query_data.py b/spaces/danielcwq/chat-your-data-trial/query_data.py deleted file mode 100644 index 98e737559fae67760f5111c66736e228a9a00a86..0000000000000000000000000000000000000000 --- a/spaces/danielcwq/chat-your-data-trial/query_data.py +++ /dev/null @@ -1,34 +0,0 @@ -from langchain.prompts.prompt import PromptTemplate -from langchain.llms import OpenAI -from langchain.chains import ChatVectorDBChain - -_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question. -You can assume the question about the syllabus of the H2 Economics A-Level Examination in Singapore. - -Chat History: -{chat_history} -Follow Up Input: {question} -Standalone question:""" -CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template) - -template = """You are an AI assistant for answering questions about economics for the H2 Economics A-Levels. -You are given the following extracted parts of a long document and a question. Provide a conversational answer. -If you don't know the answer, just say "Hmm, I'm not sure." Don't try to make up an answer. -If the question is not about H2 Economics, politely inform them that you are tuned to only answer questions about it. 
-Question: {question} -========= -{context} -========= -Answer in Markdown:""" -QA_PROMPT = PromptTemplate(template=template, input_variables=["question", "context"]) - - -def get_chain(vectorstore): - llm = OpenAI(temperature=0) - qa_chain = ChatVectorDBChain.from_llm( - llm, - vectorstore, - qa_prompt=QA_PROMPT, - condense_question_prompt=CONDENSE_QUESTION_PROMPT, - ) - return qa_chain diff --git a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/training/params.py b/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/training/params.py deleted file mode 100644 index 0cc1a0e2d982e900988cf5a4b24b2e59b093537b..0000000000000000000000000000000000000000 --- a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/training/params.py +++ /dev/null @@ -1,563 +0,0 @@ -import argparse - - -def get_default_params(model_name): - # Params from paper (https://arxiv.org/pdf/2103.00020.pdf) - model_name = model_name.lower() - if "vit" in model_name: - return {"lr": 5.0e-4, "beta1": 0.9, "beta2": 0.98, "eps": 1.0e-6} - else: - return {"lr": 5.0e-4, "beta1": 0.9, "beta2": 0.999, "eps": 1.0e-8} - - -def parse_args(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--train-data", - type=str, - default=None, - help="Path to h5 filewith training data", - ) - parser.add_argument( - "--val-data", - type=str, - default=None, - help="Path to h5 file with validation data", - ) - parser.add_argument( - "--freeze-text", - default=False, - action="store_true", - help="if you need to freeze the text encoder, make this True", - ) - parser.add_argument( - "--freeze-text-after", - type=int, - default=-1, - help="if you need to freeze the text encoder after (include) epoch x, set this param to x. Set -1 to disable it", - ) - parser.add_argument( - "--train-ipc", - type=str, - default=None, - help="Path to npy file of the number of instance per class in training data", - ) - parser.add_argument( - "--val-ipc", - type=str, - default=None, - help="Path to npy file of the number of instance per class in validation data", - ) - parser.add_argument( - "--train-num-samples", - type=int, - default=None, - help="Number of samples in dataset. Required for webdataset if not available in info file.", - ) - parser.add_argument( - "--val-num-samples", - type=int, - default=None, - help="Number of samples in dataset. Useful for webdataset if not available in info file.", - ) - parser.add_argument( - "--dataset-type", - choices=["webdataset", "csv", "auto", "toy"], - default="auto", - help="Which type of dataset to process.", - ) - parser.add_argument( - "--csv-separator", - type=str, - default="\t", - help="For csv-like datasets, which separator to use.", - ) - parser.add_argument( - "--csv-img-key", - type=str, - default="filepath", - help="For csv-like datasets, the name of the key for the image paths.", - ) - parser.add_argument( - "--csv-caption-key", - type=str, - default="title", - help="For csv-like datasets, the name of the key for the captions.", - ) - parser.add_argument( - "--imagenet-val", - type=str, - default=None, - help="Path to imagenet val set for conducting zero shot evaluation.", - ) - parser.add_argument( - "--imagenet-v2", - type=str, - default=None, - help="Path to imagenet v2 for conducting zero shot evaluation.", - ) - parser.add_argument( - "--datasetnames", - nargs="+", - default=None, - help="If loading webdataset, spedify the dataset names to load. 
Can be some of these: Clotho, audioset, audiocaps, BBCSoundEffects", - ) - parser.add_argument( - "--full-train-dataset", - nargs="+", - default=None, - help="Which dataset will be trained with all the subsets. (train+test)", - ) - parser.add_argument( - "--exclude-eval-dataset", - nargs="+", - default=None, - help="Which dataset will be excluded with evaluation", - ) - parser.add_argument( - "--datasetinfos", - nargs="+", - default=None, - help="If loading webdataset, spedify the dataset types to load. Can be some of these: train, test, valid, unbalanced_train, balanced_train, eval", - ) - parser.add_argument( - "--dataset-proportion", - type=float, - default=1.0, - help="How much proportion of dataset we want to train.", - ) - parser.add_argument( - "--remotedata", - default=False, - action="store_true", - help="if the dataset is remote, set this flag", - ) - parser.add_argument( - "--class-label-path", - type=str, - default=None, - help="The path of the class label pickle or csv.", - ) - parser.add_argument( - "--datasetpath", - type=str, - default="/mnt/audio_clip/webdataset_tar", - help="The path to the dataset", - ) - parser.add_argument( - "--logs", - type=str, - default="./logs/", - help="Where to store tensorboard logs. Use None to avoid storing logs.", - ) - parser.add_argument( - "--log-local", - action="store_true", - default=False, - help="log files on local master, otherwise global master only.", - ) - parser.add_argument( - "--name", - type=str, - default=None, - help="Optional identifier for the experiment when storing logs. Otherwise use current time.", - ) - parser.add_argument( - "--workers", type=int, default=1, help="Number of workers per GPU." - ) - parser.add_argument( - "--batch-size", type=int, default=64, help="Batch size per GPU." - ) - parser.add_argument( - "--epochs", type=int, default=32, help="Number of epochs to train for." - ) - parser.add_argument("--lr", type=float, default=None, help="Learning rate.") - parser.add_argument("--beta1", type=float, default=None, help="Adam beta 1.") - parser.add_argument("--beta2", type=float, default=None, help="Adam beta 2.") - parser.add_argument("--eps", type=float, default=None, help="Adam epsilon.") - parser.add_argument("--momentum", type=float, default=None, help="SGD epsilon.") - parser.add_argument("--wd", type=float, default=0.2, help="Weight decay.") - - parser.add_argument( - "--split-opt", - action="store_true", - default=False, - help="Use this flag to skip the learning rate decay.", - ) - parser.add_argument( - "--lr-pretrained", type=float, default=None, help="Learning rate for text." - ) - parser.add_argument( - "--beta1-pretrained", type=float, default=None, help="Adam beta 1 for text." - ) - parser.add_argument( - "--beta2-pretrained", type=float, default=None, help="Adam beta 2 for text." - ) - parser.add_argument( - "--eps-pretrained", type=float, default=None, help="Adam epsilon for text." - ) - parser.add_argument( - "--wd-pretrained", type=float, default=0.2, help="Weight decay for text." - ) - parser.add_argument( - "--momentum-pretrained", type=float, default=0.9, help="Momentum for text." - ) - parser.add_argument( - "--lr-new", type=float, default=None, help="Learning rate for audio." - ) - parser.add_argument( - "--beta1-new", type=float, default=None, help="Adam beta 1 for audio." - ) - parser.add_argument( - "--beta2-new", type=float, default=None, help="Adam beta 2 for audio." - ) - parser.add_argument( - "--eps-new", type=float, default=None, help="Adam epsilon for audio." 
- ) - parser.add_argument( - "--wd-new", type=float, default=0.2, help="Weight decay for audio." - ) - parser.add_argument( - "--momentum-new", type=float, default=0.9, help="Momentum for audio." - ) - parser.add_argument( - "--warmup", type=int, default=10000, help="Number of steps to warmup for." - ) - parser.add_argument( - "--use-bn-sync", - default=False, - action="store_true", - help="Whether to use batch norm sync.", - ) - parser.add_argument( - "--skip-scheduler", - action="store_true", - default=False, - help="Use this flag to skip the learning rate decay.", - ) - parser.add_argument( - "--save-frequency", type=int, default=1, help="How often to save checkpoints." - ) - parser.add_argument( - "--save-top-performance", - type=int, - default=0, - help="Save the top x performance weights if the value >0", - ) - parser.add_argument( - "--save-most-recent", - action="store_true", - default=False, - help="Always save the most recent model trained to epoch_latest.pt.", - ) - parser.add_argument( - "--zeroshot-frequency", type=int, default=2, help="How often to run zero shot." - ) - parser.add_argument( - "--val-frequency", - type=int, - default=1, - help="How often to run evaluation with val data.", - ) - parser.add_argument( - "--resume", - default=None, - type=str, - help="path to latest checkpoint (default: none)", - ) - parser.add_argument( - "--precision", - choices=["amp", "fp16", "fp32"], - default="amp", - help="Floating point precision.", - ) - parser.add_argument( - "--amodel", - type=str, - default="RN50", - help="Name of the audio backbone to use.", - ) - parser.add_argument( - "--tmodel", - type=str, - default="transformer", - help="Name of the text backbone to use. Can be [transformer, bert, roberta, bart]", - ) - parser.add_argument( - "--pretrained-audio", - default="", - type=str, - help="Use a pretrained audio model weights for the audio encoder of CLAP", - ) - parser.add_argument( - "--pretrained-text", - default="", - type=str, - help="Use a pretrained text model weights for the text encoder of CLAP", - ) - parser.add_argument( - "--pretrained", - default="", - type=str, - help="Use a pretrained CLIP model weights with the specified tag or file path.", - ) - parser.add_argument( - "--pretrained-image", - default=False, - action="store_true", - help="Load imagenet pretrained weights for image tower backbone if available.", - ) - parser.add_argument( - "--lock-image", - default=False, - action="store_true", - help="Lock full image tower by disabling gradients.", - ) - parser.add_argument( - "--lock-image-unlocked-groups", - type=int, - default=0, - help="Leave last n image tower layer groups unlocked.", - ) - parser.add_argument( - "--lock-image-freeze-bn-stats", - default=False, - action="store_true", - help="Freeze BatchNorm running stats in image tower for any locked layers.", - ) - parser.add_argument( - "--local-loss", - default=False, - action="store_true", - help="calculate loss w/ local features @ global (instead of realizing full global @ global matrix)", - ) - parser.add_argument( - "--gather-with-grad", - default=False, - action="store_true", - help="enable full distributed gradient for feature gather", - ) - parser.add_argument( - "--force-quick-gelu", - default=False, - action="store_true", - help="Force use of QuickGELU activation for non-OpenAI transformer models.", - ) - parser.add_argument( - "--torchscript", - default=False, - action="store_true", - help="torch.jit.script the model, also uses jit version of OpenAI models if pretrained=='openai'", - ) - 
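`parse_args` wires up a large training CLI; at the end of the function (below), any optimizer hyper-parameter left at `None` is filled in from `get_default_params` for the chosen audio backbone. A minimal sketch of that behaviour, assuming the module is importable as `audioldm.clap.training.params` inside this Space and using placeholder data paths:

```python
import sys

from audioldm.clap.training.params import parse_args  # assumed import path in this Space

sys.argv = [
    "train.py",
    "--train-data", "data/train.h5",  # placeholder paths
    "--val-data", "data/val.h5",
    "--tmodel", "roberta",
    "--batch-size", "32",
    # --lr / --beta1 / --beta2 / --eps omitted on purpose
]
args = parse_args()
# Unset optimizer values fall back to get_default_params(args.amodel);
# for the default "RN50" audio backbone that is lr=5e-4, beta1=0.9, beta2=0.999, eps=1e-8.
print(args.lr, args.beta1, args.beta2, args.eps)
```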
parser.add_argument( - "--trace", - default=False, - action="store_true", - help="torch.jit.trace the model for inference / eval only", - ) - # arguments for distributed training - parser.add_argument( - "--dist-url", - default="env://", - type=str, - help="url used to set up distributed training", - ) - parser.add_argument( - "--dist-backend", default="nccl", type=str, help="distributed backend" - ) - parser.add_argument( - "--report-to", - default="", - type=str, - help="Options are ['wandb', 'tensorboard', 'wandb,tensorboard']", - ) - parser.add_argument( - "--wandb-notes", default="", type=str, help="Notes if logging with wandb" - ) - parser.add_argument( - "--C", type=float, default=3.16, help="inverse regularizer for logistic reg." - ) - parser.add_argument( - "--debug", - default=False, - action="store_true", - help="If true, more information is logged.", - ) - parser.add_argument( - "--copy-codebase", - default=False, - action="store_true", - help="If true, we copy the entire base on the log diretory, and execute from there.", - ) - parser.add_argument( - "--horovod", - default=False, - action="store_true", - help="Use horovod for distributed training.", - ) - parser.add_argument( - "--ddp-static-graph", - default=False, - action="store_true", - help="Enable static graph optimization for DDP in PyTorch >= 1.11.", - ) - parser.add_argument( - "--no-set-device-rank", - default=False, - action="store_true", - help="Don't set device index from local rank (when CUDA_VISIBLE_DEVICES restricted to one per proc).", - ) - parser.add_argument("--seed", type=int, default=4242, help="Default random seed.") - - parser.add_argument( - "--top-k-checkpoint-select-dataset", - type=str, - default="all", - help="The dataset of selecting top-k checkpoint.", - ) - - # @R10, @R@5, @R1, mAP@10 - parser.add_argument( - "--top-k-checkpoint-select-metric", - type=str, - default="_R@10", - help="The metric for selecting top-k checkpoint.", - ) - parser.add_argument( - "--openai-model-cache-dir", - type=str, - default="~/.cache/clip", - help="Directory to download OpenAI models.", - ) - parser.add_argument( - "--optimizer", - type=str, - default="adamw", - help="can be AdamW or SGD", - ) - parser.add_argument( - "--parallel-eval", - default=False, - action="store_true", - help="Eval in parallel (multi-GPU, multi-node).", - ) - - parser.add_argument( - "--no-eval", - default=False, - action="store_true", - help="Training without evaluation.", - ) - - parser.add_argument( - "--lp-mlp", - default=False, - action="store_true", - help="Linear Probe using MLP layer or not.", - ) - - parser.add_argument( - "--lp-freeze", - default=False, - action="store_true", - help="Linear Probe using Freeze CLAP or not", - ) - - parser.add_argument( - "--lp-act", - default="None", - type=str, - help="Options are ['relu','elu','prelu','softmax','sigmoid']", - ) - - parser.add_argument( - "--lp-loss", type=str, default="bce", help="Loss func of Linear Probe." - ) - - parser.add_argument( - "--lp-metrics", - type=str, - default="map,mauc,acc", - help="Metrics of Linear Probe.", - ) - - parser.add_argument( - "--lp-lr", type=float, default=1e-4, help="learning rate of linear probe" - ) - parser.add_argument( - "--kappa", - type=float, - default=0, - help="the kappa in the weighted contrastive loss, default is to turn off the weighted contrastive loss", - ) - - parser.add_argument( - "--data-filling", - type=str, - default="pad", - help="type of data filling when the audio length is shorter than the max length." 
- "Can be one of the following: repeat, repeatpad, pad", - ) - parser.add_argument( - "--data-truncating", - type=str, - default="rand_trunc", - help="type of data truncation when the audio length is longer than the max length." - "Can be one of the following: rand_trunc, fusion", - ) - - parser.add_argument( - "--clap-mlploss", - default=False, - action="store_true", - help="Using MLP loss for CLAP model or not", - ) - - parser.add_argument( - "--wandb-id", - type=str, - default=None, - help="the id of wandb experiment to restore.", - ) - - parser.add_argument( - "--sleep", type=float, default=0, help="sleep n seconds before start training" - ) - - # variable length processing - parser.add_argument( - "--enable-fusion", - default=False, - action="store_true", - help="Enable feature funsion for variable-length data", - ) - - parser.add_argument( - "--fusion-type", - type=str, - default="None", - help="Type is among ['channel_map', 'daf_1d','aff_1d','iaff_1d','daf_2d','aff_2d','iaff_2d']", - ) - - parser.add_argument( - "--mixup", - default=False, - action="store_true", - help="Enable mixup in finetuning training.", - ) - parser.add_argument( - "--text-augment-selection", - type=str, - default=None, - help="For selecting levels of augmented text. Type is among ['all', 'augment_only', 'none']", - ) - - args = parser.parse_args() - - # If some params are not passed, we use the default values based on model name. - default_params = get_default_params(args.amodel) - for name, val in default_params.items(): - if getattr(args, name) is None: - setattr(args, name, val) - - return args diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/data/data_sampler.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/data/data_sampler.py deleted file mode 100644 index 575452d9f844a928f7f42296c81635cfbadec7c2..0000000000000000000000000000000000000000 --- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/data/data_sampler.py +++ /dev/null @@ -1,48 +0,0 @@ -import math -import torch -from torch.utils.data.sampler import Sampler - - -class EnlargedSampler(Sampler): - """Sampler that restricts data loading to a subset of the dataset. - - Modified from torch.utils.data.distributed.DistributedSampler - Support enlarging the dataset for iteration-based training, for saving - time when restart the dataloader after each epoch - - Args: - dataset (torch.utils.data.Dataset): Dataset used for sampling. - num_replicas (int | None): Number of processes participating in - the training. It is usually the world_size. - rank (int | None): Rank of the current process within num_replicas. - ratio (int): Enlarging ratio. Default: 1. 
- """ - - def __init__(self, dataset, num_replicas, rank, ratio=1): - self.dataset = dataset - self.num_replicas = num_replicas - self.rank = rank - self.epoch = 0 - self.num_samples = math.ceil(len(self.dataset) * ratio / self.num_replicas) - self.total_size = self.num_samples * self.num_replicas - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - indices = torch.randperm(self.total_size, generator=g).tolist() - - dataset_size = len(self.dataset) - indices = [v % dataset_size for v in indices] - - # subsample - indices = indices[self.rank:self.total_size:self.num_replicas] - assert len(indices) == self.num_samples - - return iter(indices) - - def __len__(self): - return self.num_samples - - def set_epoch(self, epoch): - self.epoch = epoch diff --git a/spaces/deborabmfreitas/churn-prediction-deploy/README.md b/spaces/deborabmfreitas/churn-prediction-deploy/README.md deleted file mode 100644 index b32349e8e56420c0446a4254decb88b39bc97517..0000000000000000000000000000000000000000 --- a/spaces/deborabmfreitas/churn-prediction-deploy/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Churn Prediction Deploy -emoji: 🚀 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion.py deleted file mode 100644 index 74783faae421cb0a10a89fda4f19454f4cf834a8..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion.py +++ /dev/null @@ -1,306 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
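The `EnlargedSampler` defined in `basicsr/data/data_sampler.py` above stretches one epoch by `ratio` so iteration-based training does not have to restart the dataloader after every pass over the data. A minimal single-process sketch, assuming the `basicsr` import path used in this Space, `num_replicas=1`, `rank=0`, and a toy `TensorDataset` standing in for a real dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from basicsr.data.data_sampler import EnlargedSampler  # assumed import path in this Space

dataset = TensorDataset(torch.arange(10))
sampler = EnlargedSampler(dataset, num_replicas=1, rank=0, ratio=3)  # one "epoch" = 3x the data
sampler.set_epoch(0)  # seeds the deterministic shuffle in __iter__

loader = DataLoader(dataset, batch_size=4, sampler=sampler)
print(len(sampler))       # 30 indices per enlarged epoch
print(len(list(loader)))  # 8 batches of size <= 4
```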
- -import tempfile -import unittest - -import numpy as np - -from diffusers import ( - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - LMSDiscreteScheduler, - OnnxStableDiffusionPipeline, - PNDMScheduler, -) -from diffusers.utils.testing_utils import is_onnx_available, nightly, require_onnxruntime, require_torch_gpu - -from ...test_pipelines_onnx_common import OnnxPipelineTesterMixin - - -if is_onnx_available(): - import onnxruntime as ort - - -class OnnxStableDiffusionPipelineFastTests(OnnxPipelineTesterMixin, unittest.TestCase): - hub_checkpoint = "hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline" - - def get_dummy_inputs(self, seed=0): - generator = np.random.RandomState(seed) - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "generator": generator, - "num_inference_steps": 2, - "guidance_scale": 7.5, - "output_type": "numpy", - } - return inputs - - def test_pipeline_default_ddim(self): - pipe = OnnxStableDiffusionPipeline.from_pretrained(self.hub_checkpoint, provider="CPUExecutionProvider") - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 128, 128, 3) - expected_slice = np.array([0.65072, 0.58492, 0.48219, 0.55521, 0.53180, 0.55939, 0.50697, 0.39800, 0.46455]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_pipeline_pndm(self): - pipe = OnnxStableDiffusionPipeline.from_pretrained(self.hub_checkpoint, provider="CPUExecutionProvider") - pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config, skip_prk_steps=True) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 128, 128, 3) - expected_slice = np.array([0.65863, 0.59425, 0.49326, 0.56313, 0.53875, 0.56627, 0.51065, 0.39777, 0.46330]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_pipeline_lms(self): - pipe = OnnxStableDiffusionPipeline.from_pretrained(self.hub_checkpoint, provider="CPUExecutionProvider") - pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 128, 128, 3) - expected_slice = np.array([0.53755, 0.60786, 0.47402, 0.49488, 0.51869, 0.49819, 0.47985, 0.38957, 0.44279]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_pipeline_euler(self): - pipe = OnnxStableDiffusionPipeline.from_pretrained(self.hub_checkpoint, provider="CPUExecutionProvider") - pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 128, 128, 3) - expected_slice = np.array([0.53755, 0.60786, 0.47402, 0.49488, 0.51869, 0.49819, 0.47985, 0.38957, 0.44279]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_pipeline_euler_ancestral(self): - pipe = OnnxStableDiffusionPipeline.from_pretrained(self.hub_checkpoint, provider="CPUExecutionProvider") - pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) - 
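The fast tests in this file all follow the same pattern: load the tiny ONNX checkpoint on the CPU execution provider, rebuild a different scheduler from the current scheduler's config, swap it in, and run a two-step generation. A standalone sketch of that pattern, assuming `onnxruntime` is installed and the `hf-internal-testing` checkpoint named above is reachable:

```python
import numpy as np

from diffusers import LMSDiscreteScheduler, OnnxStableDiffusionPipeline

pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline",
    provider="CPUExecutionProvider",
)
# Rebuild a different scheduler from the current one's config, then swap it in.
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "A painting of a squirrel eating a burger",
    num_inference_steps=2,
    generator=np.random.RandomState(0),
    output_type="np",
).images[0]
print(image.shape)  # (128, 128, 3) for this tiny test checkpoint, per the assertions above
```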
pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 128, 128, 3) - expected_slice = np.array([0.53817, 0.60812, 0.47384, 0.49530, 0.51894, 0.49814, 0.47984, 0.38958, 0.44271]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_pipeline_dpm_multistep(self): - pipe = OnnxStableDiffusionPipeline.from_pretrained(self.hub_checkpoint, provider="CPUExecutionProvider") - pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs() - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 128, 128, 3) - expected_slice = np.array([0.53895, 0.60808, 0.47933, 0.49608, 0.51886, 0.49950, 0.48053, 0.38957, 0.44200]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - -@nightly -@require_onnxruntime -@require_torch_gpu -class OnnxStableDiffusionPipelineIntegrationTests(unittest.TestCase): - @property - def gpu_provider(self): - return ( - "CUDAExecutionProvider", - { - "gpu_mem_limit": "15000000000", # 15GB - "arena_extend_strategy": "kSameAsRequested", - }, - ) - - @property - def gpu_options(self): - options = ort.SessionOptions() - options.enable_mem_pattern = False - return options - - def test_inference_default_pndm(self): - # using the PNDM scheduler by default - sd_pipe = OnnxStableDiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", - revision="onnx", - safety_checker=None, - feature_extractor=None, - provider=self.gpu_provider, - sess_options=self.gpu_options, - ) - sd_pipe.set_progress_bar_config(disable=None) - - prompt = "A painting of a squirrel eating a burger" - np.random.seed(0) - output = sd_pipe([prompt], guidance_scale=6.0, num_inference_steps=10, output_type="np") - image = output.images - - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.0452, 0.0390, 0.0087, 0.0350, 0.0617, 0.0364, 0.0544, 0.0523, 0.0720]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_inference_ddim(self): - ddim_scheduler = DDIMScheduler.from_pretrained( - "runwayml/stable-diffusion-v1-5", subfolder="scheduler", revision="onnx" - ) - sd_pipe = OnnxStableDiffusionPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", - revision="onnx", - scheduler=ddim_scheduler, - safety_checker=None, - feature_extractor=None, - provider=self.gpu_provider, - sess_options=self.gpu_options, - ) - sd_pipe.set_progress_bar_config(disable=None) - - prompt = "open neural network exchange" - generator = np.random.RandomState(0) - output = sd_pipe([prompt], guidance_scale=7.5, num_inference_steps=10, generator=generator, output_type="np") - image = output.images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.2867, 0.1974, 0.1481, 0.7294, 0.7251, 0.6667, 0.4194, 0.5642, 0.6486]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_inference_k_lms(self): - lms_scheduler = LMSDiscreteScheduler.from_pretrained( - "runwayml/stable-diffusion-v1-5", subfolder="scheduler", revision="onnx" - ) - sd_pipe = OnnxStableDiffusionPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", - revision="onnx", - scheduler=lms_scheduler, - safety_checker=None, - feature_extractor=None, - provider=self.gpu_provider, - 
sess_options=self.gpu_options, - ) - sd_pipe.set_progress_bar_config(disable=None) - - prompt = "open neural network exchange" - generator = np.random.RandomState(0) - output = sd_pipe([prompt], guidance_scale=7.5, num_inference_steps=10, generator=generator, output_type="np") - image = output.images - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 512, 512, 3) - expected_slice = np.array([0.2306, 0.1959, 0.1593, 0.6549, 0.6394, 0.5408, 0.5065, 0.6010, 0.6161]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3 - - def test_intermediate_state(self): - number_of_steps = 0 - - def test_callback_fn(step: int, timestep: int, latents: np.ndarray) -> None: - test_callback_fn.has_been_called = True - nonlocal number_of_steps - number_of_steps += 1 - if step == 0: - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array( - [-0.6772, -0.3835, -1.2456, 0.1905, -1.0974, 0.6967, -1.9353, 0.0178, 1.0167] - ) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 1e-3 - elif step == 5: - assert latents.shape == (1, 4, 64, 64) - latents_slice = latents[0, -3:, -3:, -1] - expected_slice = np.array( - [-0.3351, 0.2241, -0.1837, -0.2325, -0.6577, 0.3393, -0.0241, 0.5899, 1.3875] - ) - - assert np.abs(latents_slice.flatten() - expected_slice).max() < 1e-3 - - test_callback_fn.has_been_called = False - - pipe = OnnxStableDiffusionPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", - revision="onnx", - safety_checker=None, - feature_extractor=None, - provider=self.gpu_provider, - sess_options=self.gpu_options, - ) - pipe.set_progress_bar_config(disable=None) - - prompt = "Andromeda galaxy in a bottle" - - generator = np.random.RandomState(0) - pipe( - prompt=prompt, - num_inference_steps=5, - guidance_scale=7.5, - generator=generator, - callback=test_callback_fn, - callback_steps=1, - ) - assert test_callback_fn.has_been_called - assert number_of_steps == 6 - - def test_stable_diffusion_no_safety_checker(self): - pipe = OnnxStableDiffusionPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", - revision="onnx", - safety_checker=None, - feature_extractor=None, - provider=self.gpu_provider, - sess_options=self.gpu_options, - ) - assert isinstance(pipe, OnnxStableDiffusionPipeline) - assert pipe.safety_checker is None - - image = pipe("example prompt", num_inference_steps=2).images[0] - assert image is not None - - # check that there's no error when saving a pipeline with one of the models being None - with tempfile.TemporaryDirectory() as tmpdirname: - pipe.save_pretrained(tmpdirname) - pipe = OnnxStableDiffusionPipeline.from_pretrained(tmpdirname) - - # sanity check that the pipeline still works - assert pipe.safety_checker is None - image = pipe("example prompt", num_inference_steps=2).images[0] - assert image is not None diff --git a/spaces/diacanFperku/AutoGPT/Evil Dead 1080p Bluray [REPACK] Download.md b/spaces/diacanFperku/AutoGPT/Evil Dead 1080p Bluray [REPACK] Download.md deleted file mode 100644 index 31ed9245ac1f80a2d75103afdcdf0c06c17d35e8..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Evil Dead 1080p Bluray [REPACK] Download.md +++ /dev/null @@ -1,68 +0,0 @@ -

            evil dead 1080p bluray download


            Downloadhttps://gohhs.com/2uFUNu



            - -ubuntu - - where can i get the proper driver - - where can i find the proper driver for xubuntu - - ssj5goku: check the PPA page for VLC, they have the latest version available for you - - ssj5goku: if it's just a GUI application, it's probably in the repo - - or maybe you mean the actual VLC program? - - no i mean the driver for my dell inspiron 15 5554 laptop - - the video is not playing from the usb - - i tried all the solutions that i found on the net - - they are not working - - so it doesn't play on ANY video player? - - no it just hangs after loading - - what happens when you try to play an avi? - - it hangs on the point after the screen is supposed to be loaded - - ok, what's the model of your laptop? - - inspiron 15 5554 - - asus laptop - - xubuntu version 16.04 - - ok, so you installed a distro on it? - - yes - - i installed lubuntu on it - - did you try the windows driver on Windows first? - - yes i have tried that one too - - the same error - - ok - - this is driving me crazy - - maybe you can try to use the ubuntu hardware drivers - - i tried that too - - how about Ubuntu 17.10? - - is it still using the xorg 1.17 kernel? - - i dont know - - i'd try and install the newest version then 4fefd39f24
            -
            -
            -

            diff --git a/spaces/diacanFperku/AutoGPT/Feel The Flash Hardcore Kasumi Rebirth 3.1.torrent.md b/spaces/diacanFperku/AutoGPT/Feel The Flash Hardcore Kasumi Rebirth 3.1.torrent.md deleted file mode 100644 index 5ee570421eb14e3ab12a577d44b2bf641a3683c3..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Feel The Flash Hardcore Kasumi Rebirth 3.1.torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Feel The Flash Hardcore Kasumi Rebirth 3.1.torrent


            Download ::: https://gohhs.com/2uFTBb



            -
            -September 28, 2564 BC - Play with Kasumi, touch her body and use your naughty hands. April 18, 2015 - Feel the flash hardcore - Kasumi : Rebirth V3.30 juego de. - Play the game for an hour or more. - Customize the difficulty level according to your own preferences. - Understand the game and create your own style of passing. - Find a new dimension of pleasure in Kasumi : Rebirth. S-t-i-l-s-e-S-t-i-l-t-I-r-e - This is the name of the game you are currently playing. Or, more broadly, the game behind it is S-t-i-l-s-e-S-t-i-l-t-I-r-e. — The name of the game has a hidden meaning, but it does not need to be deciphered. 8a78ff9644
            -
            -
            -

            diff --git a/spaces/diagaiwei/ir_chinese_medqa/baleen/condenser/condense.py b/spaces/diagaiwei/ir_chinese_medqa/baleen/condenser/condense.py deleted file mode 100644 index 27ac302fb4863424cbfd3a234aeea4e7433fb5ab..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/baleen/condenser/condense.py +++ /dev/null @@ -1,141 +0,0 @@ -import torch - -from colbert.utils.utils import load_checkpoint -from colbert.utils.amp import MixedPrecisionManager -from colbert.utils.utils import flatten - -from baleen.utils.loaders import * -from baleen.condenser.model import ElectraReader -from baleen.condenser.tokenization import AnswerAwareTokenizer - - - -class Condenser: - def __init__(self, collectionX_path, checkpointL1, checkpointL2, deviceL1='cuda', deviceL2='cuda'): - self.modelL1, self.maxlenL1 = self._load_model(checkpointL1, deviceL1) - self.modelL2, self.maxlenL2 = self._load_model(checkpointL2, deviceL2) - - assert self.maxlenL1 == self.maxlenL2, "Add support for different maxlens: use two tokenizers." - - self.amp, self.tokenizer = self._setup_inference(self.maxlenL2) - self.CollectionX, self.CollectionY = self._load_collection(collectionX_path) - - def condense(self, query, backs, ranking): - stage1_preds = self._stage1(query, backs, ranking) - stage2_preds, stage2_preds_L3x = self._stage2(query, stage1_preds) - - return stage1_preds, stage2_preds, stage2_preds_L3x - - def _load_model(self, path, device): - model = torch.load(path, map_location='cpu') - ElectraModels = ['google/electra-base-discriminator', 'google/electra-large-discriminator'] - assert model['arguments']['model'] in ElectraModels, model['arguments'] - - model = ElectraReader.from_pretrained(model['arguments']['model']) - checkpoint = load_checkpoint(path, model) - - model = model.to(device) - model.eval() - - maxlen = checkpoint['arguments']['maxlen'] - - return model, maxlen - - def _setup_inference(self, maxlen): - amp = MixedPrecisionManager(activated=True) - tokenizer = AnswerAwareTokenizer(total_maxlen=maxlen) - - return amp, tokenizer - - def _load_collection(self, collectionX_path): - CollectionX = {} - CollectionY = {} - - with open(collectionX_path) as f: - for line_idx, line in enumerate(f): - line = ujson.loads(line) - - assert type(line['text']) is list - assert line['pid'] == line_idx, (line_idx, line) - - passage = [line['title']] + line['text'] - CollectionX[line_idx] = passage - - passage = [line['title'] + ' | ' + sentence for sentence in line['text']] - - for idx, sentence in enumerate(passage): - CollectionY[(line_idx, idx)] = sentence - - return CollectionX, CollectionY - - def _stage1(self, query, BACKS, ranking, TOPK=9): - model = self.modelL1 - - with torch.inference_mode(): - backs = [self.CollectionY[(pid, sid)] for pid, sid in BACKS if (pid, sid) in self.CollectionY] - backs = [query] + backs - query = ' # '.join(backs) - - # print(query) - # print(backs) - passages = [] - actual_ranking = [] - - for pid in ranking: - actual_ranking.append(pid) - psg = self.CollectionX[pid] - psg = ' [MASK] '.join(psg) - - passages.append(psg) - - obj = self.tokenizer.process([query], passages, None) - - with self.amp.context(): - scores = model(obj.encoding.to(model.device)).float() - - pids = [[pid] * scores.size(1) for pid in actual_ranking] - pids = flatten(pids) - - sids = [list(range(scores.size(1))) for pid in actual_ranking] - sids = flatten(sids) - - scores = scores.view(-1) - - topk = scores.topk(min(TOPK, len(scores))).indices.tolist() - topk_pids = [pids[idx] for idx 
in topk] - topk_sids = [sids[idx] for idx in topk] - - preds = [(pid, sid) for pid, sid in zip(topk_pids, topk_sids)] - - pred_plus = BACKS + preds - pred_plus = f7(list(map(tuple, pred_plus)))[:TOPK] - - return pred_plus - - def _stage2(self, query, preds): - model = self.modelL2 - - psgX = [self.CollectionY[(pid, sid)] for pid, sid in preds if (pid, sid) in self.CollectionY] - psg = ' [MASK] '.join([''] + psgX) - passages = [psg] - # print(passages) - - obj = self.tokenizer.process([query], passages, None) - - with self.amp.context(): - scores = model(obj.encoding.to(model.device)).float() - scores = scores.view(-1).tolist() - - preds = [(score, (pid, sid)) for (pid, sid), score in zip(preds, scores)] - preds = sorted(preds, reverse=True)[:5] - - preds_L3x = [x for score, x in preds if score > min(0, preds[1][0] - 1e-10)] # Take at least 2! - preds = [x for score, x in preds if score > 0] - - earliest_pids = f7([pid for pid, _ in preds_L3x])[:4] # Take at most 4 docs. - preds_L3x = [(pid, sid) for pid, sid in preds_L3x if pid in earliest_pids] - - assert len(preds_L3x) >= 2 - assert len(f7([pid for pid, _ in preds_L3x])) <= 4 - - return preds, preds_L3x diff --git a/spaces/digitalxingtong/Azuma-Bert-VITS2/text/english.py b/spaces/digitalxingtong/Azuma-Bert-VITS2/text/english.py deleted file mode 100644 index 781d0a56cef71f66fc67db51d76538be90d3ddd2..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Azuma-Bert-VITS2/text/english.py +++ /dev/null @@ -1,138 +0,0 @@ -import pickle -import os -import re -from g2p_en import G2p -from string import punctuation - -from text import symbols - -current_file_path = os.path.dirname(__file__) -CMU_DICT_PATH = os.path.join(current_file_path, 'cmudict.rep') -CACHE_PATH = os.path.join(current_file_path, 'cmudict_cache.pickle') -_g2p = G2p() - -arpa = {'AH0', 'S', 'AH1', 'EY2', 'AE2', 'EH0', 'OW2', 'UH0', 'NG', 'B', 'G', 'AY0', 'M', 'AA0', 'F', 'AO0', 'ER2', 'UH1', 'IY1', 'AH2', 'DH', 'IY0', 'EY1', 'IH0', 'K', 'N', 'W', 'IY2', 'T', 'AA1', 'ER1', 'EH2', 'OY0', 'UH2', 'UW1', 'Z', 'AW2', 'AW1', 'V', 'UW2', 'AA2', 'ER', 'AW0', 'UW0', 'R', 'OW1', 'EH1', 'ZH', 'AE0', 'IH2', 'IH', 'Y', 'JH', 'P', 'AY1', 'EY0', 'OY2', 'TH', 'HH', 'D', 'ER0', 'CH', 'AO1', 'AE1', 'AO2', 'OY1', 'AY2', 'IH1', 'OW0', 'L', 'SH'} - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def read_dict(): - g2p_dict = {} - start_line = 49 - with open(CMU_DICT_PATH) as f: - line = f.readline() - line_index = 1 - while line: - if line_index >= start_line: - line = line.strip() - word_split = line.split(' ') - word = word_split[0] - - syllable_split = word_split[1].split(' - ') - g2p_dict[word] = [] - for syllable in syllable_split: - phone_split = syllable.split(' ') - g2p_dict[word].append(phone_split) - - line_index = line_index + 1 - line = f.readline() - - return g2p_dict - - -def cache_dict(g2p_dict, file_path): - with open(file_path, 'wb') as pickle_file: - pickle.dump(g2p_dict, pickle_file) - - -def get_dict(): - if os.path.exists(CACHE_PATH): - with open(CACHE_PATH, 'rb') as pickle_file: - g2p_dict = pickle.load(pickle_file) - else: - g2p_dict = read_dict() - cache_dict(g2p_dict, CACHE_PATH) - - return g2p_dict - -eng_dict = get_dict() - -def refine_ph(phn): - tone = 0 - if re.search(r'\d$', phn): - tone = 
int(phn[-1]) + 1 - phn = phn[:-1] - return phn.lower(), tone - -def refine_syllables(syllables): - tones = [] - phonemes = [] - for phn_list in syllables: - for i in range(len(phn_list)): - phn = phn_list[i] - phn, tone = refine_ph(phn) - phonemes.append(phn) - tones.append(tone) - return phonemes, tones - - -def text_normalize(text): - # todo: eng text normalize - return text - -def g2p(text): - - phones = [] - tones = [] - words = re.split(r"([,;.\-\?\!\s+])", text) - for w in words: - if w.upper() in eng_dict: - phns, tns = refine_syllables(eng_dict[w.upper()]) - phones += phns - tones += tns - else: - phone_list = list(filter(lambda p: p != " ", _g2p(w))) - for ph in phone_list: - if ph in arpa: - ph, tn = refine_ph(ph) - phones.append(ph) - tones.append(tn) - else: - phones.append(ph) - tones.append(0) - # todo: implement word2ph - word2ph = [1 for i in phones] - - phones = [post_replace_ph(i) for i in phones] - return phones, tones, word2ph - -if __name__ == "__main__": - # print(get_dict()) - # print(eng_word_to_phoneme("hello")) - print(g2p("In this paper, we propose 1 DSPGAN, a GAN-based universal vocoder.")) - # all_phones = set() - # for k, syllables in eng_dict.items(): - # for group in syllables: - # for ph in group: - # all_phones.add(ph) - # print(all_phones) \ No newline at end of file diff --git a/spaces/digitalxingtong/Shanbao-Bert-VITS2/preprocess_text.py b/spaces/digitalxingtong/Shanbao-Bert-VITS2/preprocess_text.py deleted file mode 100644 index 44c35fecd9b7f21016e80e9597d6055254cba3f7..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Shanbao-Bert-VITS2/preprocess_text.py +++ /dev/null @@ -1,69 +0,0 @@ -import json -from random import shuffle - -import tqdm -from text.cleaner import clean_text -from collections import defaultdict -import shutil -stage = [1,2,3] - -transcription_path = 'filelists/short_character_anno.list' -train_path = 'filelists/train.list' -val_path = 'filelists/val.list' -config_path = "configs/config.json" -val_per_spk = 4 -max_val_total = 8 - -if 1 in stage: - with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f: - for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()): - try: - utt, spk, language, text = line.strip().split('|') - #language = "ZH" - norm_text, phones, tones, word2ph = clean_text(text, language) - f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' '.join(phones), - " ".join([str(i) for i in tones]), - " ".join([str(i) for i in word2ph]))) - except: - print("err!", utt) - -if 2 in stage: - spk_utt_map = defaultdict(list) - spk_id_map = {} - current_sid = 0 - - with open( transcription_path+'.cleaned', encoding='utf-8') as f: - for line in f.readlines(): - utt, spk, language, text, phones, tones, word2ph = line.strip().split('|') - spk_utt_map[spk].append(line) - if spk not in spk_id_map.keys(): - spk_id_map[spk] = current_sid - current_sid += 1 - train_list = [] - val_list = [] - for spk, utts in spk_utt_map.items(): - shuffle(utts) - val_list+=utts[:val_per_spk] - train_list+=utts[val_per_spk:] - if len(val_list) > max_val_total: - train_list+=val_list[max_val_total:] - val_list = val_list[:max_val_total] - - with open( train_path,"w", encoding='utf-8') as f: - for line in train_list: - f.write(line) - - file_path = transcription_path+'.cleaned' - shutil.copy(file_path,'./filelists/train.list') - - with open(val_path, "w", encoding='utf-8') as f: - for line in val_list: - f.write(line) - -if 3 in stage: - assert 2 in stage - config = 
json.load(open(config_path)) - config['data']["n_speakers"] = current_sid # - config["data"]['spk2id'] = spk_id_map - with open(config_path, 'w', encoding='utf-8') as f: - json.dump(config, f, indent=2, ensure_ascii=False) diff --git a/spaces/dineshreddy/WALT/mmdet/models/detectors/fast_rcnn.py b/spaces/dineshreddy/WALT/mmdet/models/detectors/fast_rcnn.py deleted file mode 100644 index 3d6e242767b927ed37198b6bc7862abecef99a33..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/detectors/fast_rcnn.py +++ /dev/null @@ -1,52 +0,0 @@ -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class FastRCNN(TwoStageDetector): - """Implementation of `Fast R-CNN `_""" - - def __init__(self, - backbone, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None): - super(FastRCNN, self).__init__( - backbone=backbone, - neck=neck, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) - - def forward_test(self, imgs, img_metas, proposals, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - proposals (List[List[Tensor]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. The Tensor should have a shape Px4, where - P is the number of proposals. - """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) ' - f'!= num of image meta ({len(img_metas)})') - - if num_augs == 1: - return self.simple_test(imgs[0], img_metas[0], proposals[0], - **kwargs) - else: - # TODO: support test-time augmentation - assert NotImplementedError diff --git a/spaces/django-ochain/AI-market-researcher/app.py b/spaces/django-ochain/AI-market-researcher/app.py deleted file mode 100644 index fc193b23415fe9976a62e1c007770775b488768f..0000000000000000000000000000000000000000 --- a/spaces/django-ochain/AI-market-researcher/app.py +++ /dev/null @@ -1,312 +0,0 @@ -import os -from bs4 import BeautifulSoup -import gradio as gr -import openai -import requests -from langchain import OpenAI, ConversationChain, LLMChain, PromptTemplate -from langchain.memory import ConversationBufferWindowMemory -from langchain.chains import LLMChain -from langchain.agents import load_tools, initialize_agent -from langchain.chat_models import ChatOpenAI -from langchain.output_parsers import CommaSeparatedListOutputParser -from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate -from langchain.llms import OpenAI -from collections import Counter -import pandas as pd -from langchain.document_loaders import TextLoader, YoutubeLoader -from youtube_transcript_api import YouTubeTranscriptApi -from langchain.indexes import VectorstoreIndexCreator - - -OPENAI_API_KEY = os.environ['OPENAI_API_KEY'] - -OPENAI_API_KEY = os.environ['OPENAI_API_KEY'] -GOOGLE_MAPS_API = os.environ['GOOGLE_MAPS_API'] - -#### TAB 1 #### - -def get_location_data(search_term, location): - # First, we get the latitude and longitude coordinates of the location - url = 
"https://maps.googleapis.com/maps/api/geocode/json" - params = { - "address": location, - "key": GOOGLE_MAPS_API - } - response = requests.get(url, params=params) - location_data = response.json()["results"][0]["geometry"]["location"] - - # Next, we use the Places API nearbysearch endpoint to find places matching the search term - url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json" - params = { - "location": f"{location_data['lat']},{location_data['lng']}", - "radius": "10000", # 10km radius - #"type": search_term, - "keyword" : search_term, - "key": GOOGLE_MAPS_API - } - response = requests.get(url, params=params) - results = response.json()["results"] - - # We only want the first 5 results - results = results[:5] - - # For each result, we get the place details to retrieve the description and top reviews - locations = [] - for result in results: - place_id = result["place_id"] - url = "https://maps.googleapis.com/maps/api/place/details/json" - params = { - "place_id": place_id, - "fields": "name,formatted_address,formatted_phone_number,rating,review", - "key": GOOGLE_MAPS_API - } - response = requests.get(url, params=params) - place_details = response.json()["result"] - - # Create a dictionary representing the location and add it to the list - location_dict = { - "name": place_details["name"], - "address": place_details["formatted_address"], - #"phone_number": place_details.get("formatted_phone_number", "N/A"), - #"rating": place_details.get("rating", "N/A"), - "reviews": [] - } - - # Add the top 3 reviews to the dictionary - reviews = place_details.get("reviews", []) - for review in reviews[:3]: - review_dict = { - #"author": review["author_name"], - #"rating": review["rating"], - "text": review["text"], - #"time": review["relative_time_description"] - } - location_dict["reviews"].append(review_dict) - - locations.append(location_dict) - - return locations - -# Define the function to be used in the Gradio app -def find_competitors(product, location): - locations = get_location_data(product, location) - if len(locations) == 0: - return f"No competitors found for {product} in {location}." - - output_str = f"Top competitors for {product} in {location}:" - for i, loc in enumerate(locations): - output_str += f"\n{i+1}. {loc['name']}" - output_str += f"\nAddress: {loc['address']}" - #output_str += f"\nPhone number: {loc['phone_number']}" - #output_str += f"\nRating: {loc['rating']}" - output_str += f"\nTop 3 reviews:" - for review in loc['reviews']: - output_str += f"\n- {review['text']}" - #output_str += f"\n Author: {review['author']}" - #output_str += f"\n Rating: {review['rating']}" - #output_str += f"\n Time: {review['time']}" - - output_str2 = f"Top competitors for {product} in {location}:" - for i, loc in enumerate(locations): - output_str2 += f"\n{i+1}. {loc['name']}" - output_str2 += f"\nAddress: {loc['address']}" - - #return output_str - - prompt_input = ''' - You are an expert management consultant that rivals the best of Mckinsey, Bain, BCG. - The client wants to sell {} in {}. - {} - Provide an analysis of the following: - - From the competition and reviews about its products and come up with creative insights to recommend the client execute as part of a differentiating business strategy. - - From there, think step by step, explain 5 strategies in bullet points of a creative and effective business plan. - - Suggest a location for the client and explain the rationale of this locatioin. - - Let us think step by step. 
- '''.format(product, location, output_str) - - template = ''' - {history} - {human_input} - ''' - prompt = PromptTemplate( - input_variables=["history", "human_input"], - template=template - ) - - chatgpt_chain = LLMChain( - llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0.5,openai_api_key=OPENAI_API_KEY), - prompt=prompt, - verbose=True, - memory=ConversationBufferWindowMemory(k=10), - ) - - output = output_str2 + "\n\n" + chatgpt_chain.predict(human_input=prompt_input) - - return(output) - -# Create the Gradio app interface -inputs = [ - gr.inputs.Textbox(label="Product to research"), - gr.inputs.Textbox(label="Location") -] - - -output = gr.outputs.Textbox(label="AI Analysis") - -iface1 = gr.Interface(fn=find_competitors, inputs=inputs, outputs=output, title="Market Research AI", - description="Input a product and a location. The AI analyst will help you research nearby competitors, formulate a business plan to differentiate you from your competitors, and recommend a strategic location for your business.") - -#### TAB 2 #### - -template2 = ''' -{history} -{human_input} -''' -prompt2 = PromptTemplate( - input_variables=["history", "human_input"], - template=template2 -) - -chatgpt_chain = LLMChain( - llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0.5,openai_api_key=OPENAI_API_KEY), - prompt=prompt2, - verbose=True, - memory=ConversationBufferWindowMemory(k=10), -) - -# Scrape the URL -def scrape(url): - response = requests.get(url) - soup = BeautifulSoup(response.text, "html.parser") - - # Remove script and style elements - for script in soup(["script", "style"]): - script.extract() - - return soup.get_text() - -# Extract keywords -def extract_keywords(prompt_input, num_keywords): - - output= chatgpt_chain.predict(human_input=prompt_input) - output_parser = CommaSeparatedListOutputParser() - ret_list = output_parser.parse(output) - - return ret_list - -# Define the function to be used in Gradio -def keywords_from_url(url, num_keywords): - url_text = scrape(url) - prompt_input2 = ''' -You are an expert SEO optimized, consultant and manager. - -Here is the text from a website: - -{} - -From the text above, extract {} SEO keyphrase that are highly valueble in terms of SEO purpose. - -Your response should be a list of comma separated values, eg: `foo, bar, baz -'''.format(url_text, num_keywords) - - keywords = extract_keywords(prompt_input2, num_keywords) - - df = pd.DataFrame(keywords, columns=["Keyword"]) - df.index.name = "Rank" - df.index += 1 - df.to_csv('keywords.csv') - - return "keywords.csv" - - - -iface2 = gr.Interface( - fn=keywords_from_url, - inputs=[gr.inputs.Textbox(label="URL"), gr.inputs.Slider(minimum=1, maximum=50, step=1, default=10, label="Number of SEO Keywords")], - outputs=gr.outputs.File(label="Download CSV File"), - title="SEO Keyword Extractor", - description="Enter a URL and the number of keywords you want to extract from that page. The output will be a CSV file containing the SEO keywords." 
-) - - -#### TAB 3 #### - - - -previous_youtube_url = None -index = None - -def get_video_id(url): - video_id = None - if 'youtu.be' in url: - video_id = url.split('/')[-1] - else: - video_id = url.split('watch?v=')[-1] - return video_id - -def get_captions(url): - try: - video_id = get_video_id(url) - transcript_list = YouTubeTranscriptApi.list_transcripts(video_id) - transcript = transcript_list.find_transcript(['en']) - captions = transcript.fetch() - - formatted_captions = '' - for caption in captions: - formatted_captions += caption['text'] + ' ' - - return formatted_captions - - except Exception as e: - print(e) - return "Error. Could not fetch captions." - - - -def answer_question(youtube_url, user_question): - # You can implement your logic here to process the video, transcribe it, and answer the user question. - # For now, let's return the user question as output. - global previous_youtube_url - global index - - query = ''' - You are an expert researcher that can answer any questions from a given text. Here is the question: - {} - '''.format(str(user_question)) - - if previous_youtube_url == youtube_url: - #index = VectorstoreIndexCreator().from_loaders([loader]) - #query = user_question - answer = index.query(llm=OpenAI(model="text-davinci-003"), question = query) - else: - f= open("temp.txt","w+") - f.write(get_captions(youtube_url)) - f.close() - loader = TextLoader("temp.txt") - - index = VectorstoreIndexCreator().from_loaders([loader]) - os.remove("temp.txt") - - #query = user_question - answer = index.query(llm=OpenAI(model="text-davinci-003"), question = query) - - return answer - -iface3 = gr.Interface( - fn=answer_question, - inputs=[ - gr.Textbox(lines=1, placeholder="Enter YouTube URL here..."), - gr.Textbox(lines=1, placeholder="Enter your question here...") - ], - outputs=gr.Textbox(), - title="YouTube Smart Q & A", - description="Enter a YouTube URL & a question and the app will find the answer from the video captions." 
-) - - - -#tab1 = gr.Tab("AI Market Research", inputs=iface1.inputs, outputs=iface1.outputs) -#tab2 = gr.Tab("SEO Keyword Extractor", inputs=iface2.inputs, outputs=iface2.outputs) - -demo = gr.TabbedInterface([iface2, iface1, iface3], ["SEO Keyword Extractor", "AI Market Researcher","YouTube Smart Q & A"]) -demo.launch() \ No newline at end of file diff --git a/spaces/dolceschokolade/chatbot-mini/__tests__/utils/app/importExports.test.ts b/spaces/dolceschokolade/chatbot-mini/__tests__/utils/app/importExports.test.ts deleted file mode 100644 index aa51cbc054eae6a7921d88f2e894186e82a87739..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/__tests__/utils/app/importExports.test.ts +++ /dev/null @@ -1,264 +0,0 @@ -import { DEFAULT_SYSTEM_PROMPT, DEFAULT_TEMPERATURE } from '@/utils/app/const'; -import { - cleanData, - isExportFormatV1, - isExportFormatV2, - isExportFormatV3, - isExportFormatV4, - isLatestExportFormat, -} from '@/utils/app/importExport'; - -import { ExportFormatV1, ExportFormatV2, ExportFormatV4 } from '@/types/export'; -import { OpenAIModelID, OpenAIModels } from '@/types/openai'; - -import { describe, expect, it } from 'vitest'; - -describe('Export Format Functions', () => { - describe('isExportFormatV1', () => { - it('should return true for v1 format', () => { - const obj = [{ id: 1 }]; - expect(isExportFormatV1(obj)).toBe(true); - }); - - it('should return false for non-v1 formats', () => { - const obj = { version: 3, history: [], folders: [] }; - expect(isExportFormatV1(obj)).toBe(false); - }); - }); - - describe('isExportFormatV2', () => { - it('should return true for v2 format', () => { - const obj = { history: [], folders: [] }; - expect(isExportFormatV2(obj)).toBe(true); - }); - - it('should return false for non-v2 formats', () => { - const obj = { version: 3, history: [], folders: [] }; - expect(isExportFormatV2(obj)).toBe(false); - }); - }); - - describe('isExportFormatV3', () => { - it('should return true for v3 format', () => { - const obj = { version: 3, history: [], folders: [] }; - expect(isExportFormatV3(obj)).toBe(true); - }); - - it('should return false for non-v3 formats', () => { - const obj = { version: 4, history: [], folders: [] }; - expect(isExportFormatV3(obj)).toBe(false); - }); - }); - - describe('isExportFormatV4', () => { - it('should return true for v4 format', () => { - const obj = { version: 4, history: [], folders: [], prompts: [] }; - expect(isExportFormatV4(obj)).toBe(true); - }); - - it('should return false for non-v4 formats', () => { - const obj = { version: 5, history: [], folders: [], prompts: [] }; - expect(isExportFormatV4(obj)).toBe(false); - }); - }); -}); - -describe('cleanData Functions', () => { - describe('cleaning v1 data', () => { - it('should return the latest format', () => { - const data = [ - { - id: 1, - name: 'conversation 1', - messages: [ - { - role: 'user', - content: "what's up ?", - }, - { - role: 'assistant', - content: 'Hi', - }, - ], - }, - ] as ExportFormatV1; - const obj = cleanData(data); - expect(isLatestExportFormat(obj)).toBe(true); - expect(obj).toEqual({ - version: 4, - history: [ - { - id: 1, - name: 'conversation 1', - messages: [ - { - role: 'user', - content: "what's up ?", - }, - { - role: 'assistant', - content: 'Hi', - }, - ], - model: OpenAIModels[OpenAIModelID.GPT_3_5], - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: DEFAULT_TEMPERATURE, - folderId: null, - }, - ], - folders: [], - prompts: [], - }); - }); - }); - - describe('cleaning v2 data', () => { - it('should 
return the latest format', () => { - const data = { - history: [ - { - id: '1', - name: 'conversation 1', - messages: [ - { - role: 'user', - content: "what's up ?", - }, - { - role: 'assistant', - content: 'Hi', - }, - ], - }, - ], - folders: [ - { - id: 1, - name: 'folder 1', - }, - ], - } as ExportFormatV2; - const obj = cleanData(data); - expect(isLatestExportFormat(obj)).toBe(true); - expect(obj).toEqual({ - version: 4, - history: [ - { - id: '1', - name: 'conversation 1', - messages: [ - { - role: 'user', - content: "what's up ?", - }, - { - role: 'assistant', - content: 'Hi', - }, - ], - model: OpenAIModels[OpenAIModelID.GPT_3_5], - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: DEFAULT_TEMPERATURE, - folderId: null, - }, - ], - folders: [ - { - id: '1', - name: 'folder 1', - type: 'chat', - }, - ], - prompts: [], - }); - }); - }); - - describe('cleaning v4 data', () => { - it('should return the latest format', () => { - const data = { - version: 4, - history: [ - { - id: '1', - name: 'conversation 1', - messages: [ - { - role: 'user', - content: "what's up ?", - }, - { - role: 'assistant', - content: 'Hi', - }, - ], - model: OpenAIModels[OpenAIModelID.GPT_3_5], - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: DEFAULT_TEMPERATURE, - folderId: null, - }, - ], - folders: [ - { - id: '1', - name: 'folder 1', - type: 'chat', - }, - ], - prompts: [ - { - id: '1', - name: 'prompt 1', - description: '', - content: '', - model: OpenAIModels[OpenAIModelID.GPT_3_5], - folderId: null, - }, - ], - } as ExportFormatV4; - - const obj = cleanData(data); - expect(isLatestExportFormat(obj)).toBe(true); - expect(obj).toEqual({ - version: 4, - history: [ - { - id: '1', - name: 'conversation 1', - messages: [ - { - role: 'user', - content: "what's up ?", - }, - { - role: 'assistant', - content: 'Hi', - }, - ], - model: OpenAIModels[OpenAIModelID.GPT_3_5], - prompt: DEFAULT_SYSTEM_PROMPT, - temperature: DEFAULT_TEMPERATURE, - folderId: null, - }, - ], - folders: [ - { - id: '1', - name: 'folder 1', - type: 'chat', - }, - ], - prompts: [ - { - id: '1', - name: 'prompt 1', - description: '', - content: '', - model: OpenAIModels[OpenAIModelID.GPT_3_5], - folderId: null, - }, - ], - }); - }); - }); -}); diff --git a/spaces/dongyi/MMFS/test_sp2pII.py b/spaces/dongyi/MMFS/test_sp2pII.py deleted file mode 100644 index 45eb7aa796b3b8180fc010331f735ced4bd29003..0000000000000000000000000000000000000000 --- a/spaces/dongyi/MMFS/test_sp2pII.py +++ /dev/null @@ -1,76 +0,0 @@ -import argparse -import os -import numpy as np -from PIL import Image -from tqdm import tqdm - -import clip -import torch -from torchvision.transforms import Compose, Resize, ToTensor, Normalize, InterpolationMode -from models.style_based_pix2pixII_model import Stylizer, TrainingPhase - -if __name__ == '__main__': - # define & parse args - parser = argparse.ArgumentParser(description='sp2pII test') - parser.add_argument('--ckpt', type=str, default='./checkpoints/watercolor_painting/epoch_latest.pth') - parser.add_argument('--in_folder', type=str, default='./example/source') - parser.add_argument('--out_folder', type=str, default='./example/outputs/zero-shot/watercolor_painting') - parser.add_argument('--phase', type=int, default=3) - parser.add_argument('--txt_prompt', type=str, default='watercolor painting') - parser.add_argument('--img_prompt', type=str, default='') # ./example/reference/04.png - parser.add_argument('--device', type=str, default='cuda:0') - args = parser.parse_args() - args.phase = TrainingPhase(args.phase) - - 
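-    # Descriptive comments only (no behavior change): the steps below create the
-    # output folder, load the Stylizer checkpoint and a frozen CLIP ViT-B/32 model,
-    # encode the text or image prompt into CLIP features, then stylize every image
-    # found in --in_folder and save the results as PNG files in --out_folder.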
os.makedirs(args.out_folder, exist_ok=True) - - # init model - state_dict = torch.load(args.ckpt, map_location='cpu') - model = Stylizer(ngf=64, phase=args.phase, model_weights=state_dict['G_ema_model']) - model.to(args.device) - model.eval() - model.requires_grad_(False) - - clip_model, img_preprocess = clip.load('ViT-B/32', device=args.device) - clip_model.eval() - clip_model.requires_grad_(False) - - # image transform for stylizer - img_transform = Compose([ - Resize((512, 512), interpolation=InterpolationMode.LANCZOS), - ToTensor(), - Normalize([0.5], [0.5]) - ]) - - # get clip features - with torch.no_grad(): - if os.path.isfile(args.img_prompt): - img = img_preprocess(Image.open(args.img_prompt)).unsqueeze(0).to(args.device) - clip_feats = clip_model.encode_image(img) - else: - text = clip.tokenize(args.txt_prompt).to(args.device) - clip_feats = clip_model.encode_text(text) - clip_feats /= clip_feats.norm(dim=1, keepdim=True) - - # enum image files - files = os.listdir(args.in_folder) - for fn in tqdm(files): - prefix, ext = os.path.splitext(fn) - if not ext.lower() in ['.png', '.jpg', '.jpeg']: - continue - - # load image & to tensor - img = Image.open(os.path.join(args.in_folder, fn)) - if not img.mode == 'RGB': - img = img.convert('RGB') - img = img_transform(img).unsqueeze(0).to(args.device) - - # stylize it ! - with torch.no_grad(): - if args.phase == TrainingPhase.CLIP_MAPPING: - res = model(img, clip_feats=clip_feats) - - # save image - res = res.cpu().numpy()[0] - res = np.transpose(res, (1, 2, 0)) * 0.5 + 0.5 - Image.fromarray((res * 255).astype(np.uint8)).save(os.path.join(args.out_folder, prefix + '.png')) diff --git a/spaces/dotmet/Real-ESRGAN-Enhanced-Anime-Diffusion/app.py b/spaces/dotmet/Real-ESRGAN-Enhanced-Anime-Diffusion/app.py deleted file mode 100644 index 5bb9f31f665e1739f6f14ed18a0051d7dbce2ff7..0000000000000000000000000000000000000000 --- a/spaces/dotmet/Real-ESRGAN-Enhanced-Anime-Diffusion/app.py +++ /dev/null @@ -1,367 +0,0 @@ -import os -import random - -import autocuda -from pyabsa.utils.pyabsa_utils import fprint - -from diffusers import AutoencoderKL, UNet2DConditionModel, StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, \ - DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image -import utils -import datetime -import time -import psutil - -from interface import realEsrgan - -start_time = time.time() -is_colab = utils.is_google_colab() - -device = autocuda.auto_cuda() -dtype = torch.float16 if device != 'cpu' else torch.float32 - -class Model: - def __init__(self, name, path="", prefix=""): - self.name = name - self.path = path - self.prefix = prefix - self.pipe_t2i = None - self.pipe_i2i = None - - -models = [ - Model("anything v4", "andite/anything-v4.0", "anything v4 style"), -] -# Model("Spider-Verse", "nitrosocke/spider-verse-diffusion", "spiderverse style "), -# Model("Balloon Art", "Fictiverse/Stable_Diffusion_BalloonArt_Model", "BalloonArt "), -# Model("Elden Ring", "nitrosocke/elden-ring-diffusion", "elden ring style "), -# Model("Tron Legacy", "dallinmackay/Tron-Legacy-diffusion", "trnlgcy ") -# Model("Pokémon", "lambdalabs/sd-pokemon-diffusers", ""), -# Model("Pony Diffusion", "AstraliteHeart/pony-diffusion", ""), -# Model("Robo Diffusion", "nousr/robo-diffusion", ""), - -scheduler = DPMSolverMultistepScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - num_train_timesteps=1000, - trained_betas=None, - # predict_epsilon=True, - thresholding=False, - 
algorithm_type="dpmsolver++", - solver_type="midpoint", - lower_order_final=True, -) - -custom_model = None -if is_colab: - models.insert(0, Model("Custom model")) - custom_model = models[0] - -last_mode = "txt2img" -current_model = models[1] if is_colab else models[0] -current_model_path = current_model.path - -if is_colab: - pipe = StableDiffusionPipeline.from_pretrained(current_model.path, torch_dtype=dtype, scheduler=scheduler, - safety_checker=lambda images, clip_input: (images, False)) - -else: # download all models - print(f"{datetime.datetime.now()} Downloading vae...") - vae = AutoencoderKL.from_pretrained(current_model.path, subfolder="vae", torch_dtype=dtype) - for model in models: - try: - print(f"{datetime.datetime.now()} Downloading {model.name} model...") - unet = UNet2DConditionModel.from_pretrained(model.path, subfolder="unet", torch_dtype=dtype) - model.pipe_t2i = StableDiffusionPipeline.from_pretrained(model.path, unet=unet, vae=vae, - torch_dtype=dtype, scheduler=scheduler, - safety_checker=None) - model.pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(model.path, unet=unet, vae=vae, - torch_dtype=dtype, - scheduler=scheduler, safety_checker=None) - except Exception as e: - print(f"{datetime.datetime.now()} Failed to load model " + model.name + ": " + str(e)) - models.remove(model) - pipe = models[0].pipe_t2i - -# model.pipe_i2i = torch.compile(model.pipe_i2i) -# model.pipe_t2i = torch.compile(model.pipe_t2i) -if torch.cuda.is_available(): - pipe = pipe.to(device) - - -# device = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶" - - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - - -def custom_model_changed(path): - models[0].path = path - global current_model - current_model = models[0] - - -def on_model_change(model_name): - prefix = "Enter prompt. \"" + next((m.prefix for m in models if m.name == model_name), - None) + "\" is prefixed automatically" if model_name != models[ - 0].name else "Don't forget to use the custom model prefix in the prompt!" 
- - return gr.update(visible=model_name == models[0].name), gr.update(placeholder=prefix) - - -def inference(model_name, prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, - neg_prompt="", scale_factor=2, tile=200): - fprint(psutil.virtual_memory()) # print memory usage - fprint(f"\nPrompt: {prompt}") - global current_model - for model in models: - if model.name == model_name: - current_model = model - model_path = current_model.path - - generator = torch.Generator(device).manual_seed(seed) if seed != 0 else None - - try: - if img is not None: - return img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, - generator, scale_factor, tile), None - else: - return txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator, - scale_factor, tile), None - except Exception as e: - return None, error_str(e) - # if img is not None: - # return img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, - # generator, scale_factor), None - # else: - # return txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator, scale_factor), None - - -def txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator, scale_factor, tile): - print(f"{datetime.datetime.now()} \ntxt_to_img, model: {current_model.name}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "txt2img": - current_model_path = model_path - - if is_colab or current_model == custom_model: - pipe = StableDiffusionPipeline.from_pretrained(current_model_path, torch_dtype=dtype, - scheduler=scheduler, - safety_checker=lambda images, clip_input: (images, False)) - else: - pipe = current_model.pipe_t2i - - if torch.cuda.is_available(): - pipe = pipe.to(device) - last_mode = "txt2img" - - prompt = current_model.prefix + prompt - result = pipe( - prompt, - negative_prompt=neg_prompt, - # num_images_per_prompt=n_images, - num_inference_steps=int(steps), - guidance_scale=guidance, - width=width, - height=height, - generator=generator) - - # save image - img_file = "imgs/result-{}.png".format(datetime.datetime.now().strftime("%Y%m%d-%H%M%S")) - result.images[0].save(img_file) - - # enhance resolution - if scale_factor>1: - fp32 = True if device=='cpu' else False - result.images[0] = realEsrgan( - input_dir = img_file, - suffix = '', - output_dir= "imgs", - fp32 = fp32, - outscale = scale_factor, - tile = tile - )[0] - print('Complete') - - return replace_nsfw_images(result) - - -def img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator, scale_factor, tile): - fprint(f"{datetime.datetime.now()} \nimg_to_img, model: {model_path}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "img2img": - current_model_path = model_path - - if is_colab or current_model == custom_model: - pipe = StableDiffusionImg2ImgPipeline.from_pretrained(current_model_path, torch_dtype=dtype, - scheduler=scheduler, - safety_checker=lambda images, clip_input: ( - images, False)) - else: - # pipe = pipe.to("cpu") - pipe = current_model.pipe_i2i - - if torch.cuda.is_available(): - pipe = pipe.to(device) - last_mode = "img2img" - - prompt = current_model.prefix + prompt - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe( - prompt, - 
negative_prompt=neg_prompt, - # num_images_per_prompt=n_images, - image=img, - num_inference_steps=int(steps), - strength=strength, - guidance_scale=guidance, - # width=width, - # height=height, - generator=generator) - - # save image - img_file = "imgs/result-{}.png".format(datetime.datetime.now().strftime("%Y%m%d-%H%M%S")) - result.images[0].save(img_file) - - # enhance resolution - if scale_factor>1: - fp32 = True if device=='cpu' else False - result.images[0] = realEsrgan( - input_dir = img_file, - suffix = '', - output_dir= "imgs", - fp32 = fp32, - outscale = scale_factor, - tile = tile - )[0] - print('Complete') - - return replace_nsfw_images(result) - - -def replace_nsfw_images(results): - if is_colab: - return results.images[0] - if hasattr(results, "nsfw_content_detected") and results.nsfw_content_detected: - for i in range(len(results.images)): - if results.nsfw_content_detected[i]: - results.images[i] = Image.open("nsfw.png") - return results.images[0] - - -css = 'style.css' -with gr.Blocks(css=css) as demo: - if not os.path.exists('imgs'): - os.mkdir('imgs') - - gr.Markdown('# RealESRGAN enhanced Anime Diffusion') - gr.Markdown( - "## Author: [dotmet](https://github.com/dotmet) Github:[Github](https://github.com/dotmet/Real-ESRGAN-Enhanced-Anime-Diffusion)") -# gr.Markdown( -# "### You can duplicate this demo on HuggingFace Spaces, click [here](https://huggingface.co/spaces/yangheng/Super-Resolution-Anime-Diffusion?duplicate=true)") - - with gr.Row(): - with gr.Column(scale=55): - with gr.Group(): - gr.Markdown("Text to image") - - model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name) - - with gr.Box(visible=False) as custom_model_group: - custom_model_path = gr.Textbox(label="Custom model path", - placeholder="Path to model, e.g. nitrosocke/Arcane-Diffusion", - interactive=True) - gr.HTML( - "
            Custom models have to be downloaded first, so give it some time.
            ") - - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=10, - value = "1girl, brown hair, green eyes, colorful, autumn, \ -cumulonimbus clouds, lighting, blue sky, falling leaves, garden", - placeholder="Enter prompt. Style applied automatically").style(container=False) - with gr.Row(): - generate = gr.Button(value="Generate") - - with gr.Row(): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", - max_lines=10, - value = "lowers, bad anatomy, bad hands, text, error, \ -missing fingers, extra digit, fewer digits, cropped, worst quality, \ -low quality, normal quality, jpeg artifacts, signature, watermark, \ -username, blurry, artist name, bad feet", - placeholder="What to exclude from the image") - - image_out = gr.Image(height=512) - # gallery = gr.Gallery( - # label="Generated images", show_label=False, elem_id="gallery" - # ).style(grid=[1], height="auto") - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Group(): - gr.Markdown("Image to Image") - - with gr.Row(): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, - value=0.5) - - with gr.Row(): - with gr.Group(): - # n_images = gr.Slider(label="Images", value=1, minimum=1, maximum=4, step=1) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=15, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - with gr.Row(): - scale_factor = gr.Slider(label='Scale factor (to magnify image) (1, 2, 4, 8)', - value=4, minimum=1, maximum=8, step=1) - with gr.Row(): - tile = gr.Slider(label='''Tile for magnify - (depend on the memory of your device, 0=no tile)''', - value=200, minimum=0, maximum=10000, step=10) - with gr.Row(): - seed = gr.Slider(0, 114514, label='Random Seed (0 = random)', value=0, step=1) - - if is_colab: - model_name.change(on_model_change, inputs=model_name, outputs=[custom_model_group, prompt], queue=False) - custom_model_path.change(custom_model_changed, inputs=custom_model_path, outputs=None) - # n_images.change(lambda n: gr.Gallery().style(grid=[2 if n > 1 else 1], height="auto"), inputs=n_images, outputs=gallery) - - gr.Markdown('''### based on [Anything V3](https://huggingface.co/Linaqruf/anything-v3.0) and [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN)''') - - inputs = [model_name, prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, scale_factor, tile] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs, api_name="generate") - - prompt_keys = ['1girl', 'brown hair', 'green eyes', 'colorful', 'autumn', - 'cumulonimbus clouds', 'lighting, blue sky', 'falling leaves', 'garden'] - prompt.value = ','.join(prompt_keys) - ex = gr.Examples([ - [models[0].name, prompt.value, 7.5, 15], - - ], inputs=[model_name, prompt, guidance, steps, seed], outputs=outputs, fn=inference, cache_examples=False) - -print(f"Space built in {time.time() - start_time:.2f} seconds") - -if not is_colab: - demo.queue(concurrency_count=1) -demo.launch(debug=is_colab, enable_queue=True, share=is_colab) diff --git 
a/spaces/dwolfe66/text-generation-webui-space/modules/deepspeed_parameters.py b/spaces/dwolfe66/text-generation-webui-space/modules/deepspeed_parameters.py deleted file mode 100644 index 3dbed437f5b5196d0b1fcbc582085319fb8d40d1..0000000000000000000000000000000000000000 --- a/spaces/dwolfe66/text-generation-webui-space/modules/deepspeed_parameters.py +++ /dev/null @@ -1,75 +0,0 @@ -def generate_ds_config(ds_bf16, train_batch_size, nvme_offload_dir): - - ''' - DeepSpeed configration - https://huggingface.co/docs/transformers/main_classes/deepspeed - ''' - - if nvme_offload_dir: - ds_config = { - "fp16": { - "enabled": not ds_bf16, - }, - "bf16": { - "enabled": ds_bf16, - }, - "zero_optimization": { - "stage": 3, - "offload_param": { - "device": "nvme", - "nvme_path": nvme_offload_dir, - "pin_memory": True, - "buffer_count": 5, - "buffer_size": 1e9, - "max_in_cpu": 1e9 - }, - "overlap_comm": True, - "reduce_bucket_size": "auto", - "contiguous_gradients": True, - "sub_group_size": 1e8, - "stage3_prefetch_bucket_size": "auto", - "stage3_param_persistence_threshold": "auto", - "stage3_max_live_parameters": "auto", - "stage3_max_reuse_distance": "auto", - }, - "aio": { - "block_size": 262144, - "queue_depth": 32, - "thread_count": 1, - "single_submit": False, - "overlap_events": True - }, - "steps_per_print": 2000, - "train_batch_size": train_batch_size, - "train_micro_batch_size_per_gpu": 1, - "wall_clock_breakdown": False - } - else: - ds_config = { - "fp16": { - "enabled": not ds_bf16, - }, - "bf16": { - "enabled": ds_bf16, - }, - "zero_optimization": { - "stage": 3, - "offload_param": { - "device": "cpu", - "pin_memory": True - }, - "overlap_comm": True, - "contiguous_gradients": True, - "reduce_bucket_size": "auto", - "stage3_prefetch_bucket_size": "auto", - "stage3_param_persistence_threshold": "auto", - "stage3_max_live_parameters": "auto", - "stage3_max_reuse_distance": "auto", - }, - "steps_per_print": 2000, - "train_batch_size": train_batch_size, - "train_micro_batch_size_per_gpu": 1, - "wall_clock_breakdown": False - } - - return ds_config diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/latent_mappers.py b/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/latent_mappers.py deleted file mode 100644 index 63637adc9646986a3546edd19f4555a2f75a379f..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/latent_mappers.py +++ /dev/null @@ -1,81 +0,0 @@ -import torch -from torch import nn -from torch.nn import Module - -from models.StyleCLIP.models.stylegan2.model import EqualLinear, PixelNorm - - -class Mapper(Module): - - def __init__(self, opts): - super(Mapper, self).__init__() - - self.opts = opts - layers = [PixelNorm()] - - for i in range(4): - layers.append( - EqualLinear( - 512, 512, lr_mul=0.01, activation='fused_lrelu' - ) - ) - - self.mapping = nn.Sequential(*layers) - - - def forward(self, x): - x = self.mapping(x) - return x - - -class SingleMapper(Module): - - def __init__(self, opts): - super(SingleMapper, self).__init__() - - self.opts = opts - - self.mapping = Mapper(opts) - - def forward(self, x): - out = self.mapping(x) - return out - - -class LevelsMapper(Module): - - def __init__(self, opts): - super(LevelsMapper, self).__init__() - - self.opts = opts - - if not opts.no_coarse_mapper: - self.course_mapping = Mapper(opts) - if not opts.no_medium_mapper: - self.medium_mapping = Mapper(opts) - if not opts.no_fine_mapper: - self.fine_mapping = Mapper(opts) - - def forward(self, x): - x_coarse = 
x[:, :4, :] - x_medium = x[:, 4:8, :] - x_fine = x[:, 8:, :] - - if not self.opts.no_coarse_mapper: - x_coarse = self.course_mapping(x_coarse) - else: - x_coarse = torch.zeros_like(x_coarse) - if not self.opts.no_medium_mapper: - x_medium = self.medium_mapping(x_medium) - else: - x_medium = torch.zeros_like(x_medium) - if not self.opts.no_fine_mapper: - x_fine = self.fine_mapping(x_fine) - else: - x_fine = torch.zeros_like(x_fine) - - - out = torch.cat([x_coarse, x_medium, x_fine], dim=1) - - return out - diff --git a/spaces/ennet/ChatDev/online_log/static/replay/js/app.js b/spaces/ennet/ChatDev/online_log/static/replay/js/app.js deleted file mode 100644 index 0d078d7ff3d265549c21f88702160eaf285d2dce..0000000000000000000000000000000000000000 --- a/spaces/ennet/ChatDev/online_log/static/replay/js/app.js +++ /dev/null @@ -1,575 +0,0 @@ -const coordSet = []; -coordSet["Chief Executive Officer"] = { - "character": "Chief Executive Officer", - "imgid": "right", - "top": "-315px", - "left": "280px" -}; -coordSet["Chief Product Officer"] = { - "character": "Chief Product Officer", - "imgid": "left", - "top": "-165px", - "left": "110px" -}; -coordSet["Chief Human Resource Officer"] = { - "character": "Chief Human Resource Officer", - "imgid": "left", - "top": "-305px", - "left": "55px" -}; -coordSet["Code Reviewer"] = { - "character": "Code Reviewer", - "imgid": "left", - "top": "-185px", - "left": "500px" -}; -coordSet["Programmer"] = { - "character": "Programmer", - "imgid": "right", - "top": "-80px", - "left": "300px" -}; -coordSet["Chief Technology Officer"] = { - "character": "Chief Technology Officer", - "imgid": "right", - "top": "-130px", - "left": "340px" -}; -coordSet["Chief Creative Officer"] = { - "character": "Chief Creative Officer", - "imgid": "right", - "top": "-95px", - "left": "205px" -} -coordSet["Software Test Engineer"] = { - "character": "Software Test Engineer", - "imgid": "right", - "top": "-90px", - "left": "470px" - -} -coordSet["User"] = { - "character": "User", - "imgid": "left", - "top": "-465px", - "left": "125px" -} -coordSet["Counselor"] = { - "character": "Counselor", - "imgid": "right", - "top": "-360px", - "left": "420px" -} -coordSet["Prompt Engineer"] = { - "character": "Prompt Engineer", - "imgid": "right", - "top": "-320px", - "left": "20px" -} -const Softwareinfo = { - "duration": "-1", - "cost": "-1", - "version_updates": "-1", - "num_code_files": "-1", - "num_png_files": "-1", - "num_doc_files": "-1", - "code_lines": "-1", - "env_lines": "-1", - "manual_lines": "-1", - "num_utterances": "-1", - "num_self_reflections": "-1", - "num_prompt_tokens": "-1", - "num_completion_tokens": "-1", - "num_total_tokens": "-1", -}; - -//control chars appear speed -var timeinterval = 5; -var charinterval = 1; -var scrollinterval = 40; - -var contents; -var filename; -var curdialog = ''; -var total_height = 0; - -var cur_para = ''; -var cur_command = ''; -var idx = 0; -var dialog; - -var replaying = 0; -var if_stop = 0; -let isPaused = false; -let pauseIntervalId; -var if_move = true; -var md = window.markdownit(); - -//watch replay button clicked -const button = document.getElementById('replay'); -button.addEventListener('click', () => { - replayDialog(idx); -}); -$(document).ready(function() { - $('#filebutton').click(function() { - $('#fileInput').click(); - }); - -}); - -const dialogbody = document.getElementById("dialogBody"); -dialogbody.addEventListener("mousewheel", handleMouseWheel, false); - -function handleMouseWheel(event) { - if (event.wheelDelta > 0) { - 
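-        // Scrolling up disables auto-follow; it is re-enabled in the branch below
-        // once the user scrolls back to the bottom of the dialog container.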
if_move = false; - } else if (event.wheelDelta < 0) { - if (dialogbody.scrollTop + dialogbody.clientHeight == dialogbody.scrollHeight) { - if_move = true; - } - } -} - -function getinterval(speed) { - - if (speed < 80 && speed > 40) { - timeinterval = 250 / speed; - charinterval = 2; - scrollinterval = 80; - } else if (speed <= 40 && speed > 0) { - timeinterval = 150 / speed; - charinterval = 1; - scrollinterval = 80; - } else if (speed >= 80 && speed < 90) { - timeinterval = 100 / speed; - charinterval = 1; - scrollinterval = 100; - } else if (speed >= 90 && speed <= 100) { - timeinterval = 5 / speed; - charinterval = 1; - scrollinterval = 400; - } -} -//use the slider to control the replay speed -function speedchange() { - var speedbar = document.getElementById("speed"); - var speed = speedbar.value; - if (speed == 0) { - if (!isPaused) { - isPaused = true; - clearInterval(pauseIntervalId); - updateCompanyWorking("end"); - } - } else if (speed != 0 && isPaused == true) { - getinterval(speed); - isPaused = false; - idx += 1; - replayDialog(idx); - } else if (speed != 0) { - isPaused = false; - getinterval(speed); - } -} -// do replay -async function replayDialog(idx) { - if (replaying == 1 && idx == 0) { - return; - } - if (idx == 0) { - replaying = 1; - dialog = extraction(contents); - var filelable = document.getElementById("successupload"); - filelable.style.display = "block"; - var info = "Replaying `" + filename + "` ......"; - filelable.innerHTML = md.render(info); - } - for (let i = idx; i < dialog.length; ++i) { - await createPara(dialog[i], i); - } -} - -//watch .log file input -function watchfileInput(files) { - if (files.length) { - const file = files[0]; - if (file) { - const reader = new FileReader(); - reader.onload = function() { - contents = this.result; - }; - reader.readAsText(file); - var filelable = document.getElementById("successupload"); - filelable.style.display = "block"; - var info = "File uploaded (`" + file.name + "`). 
Please click **\"Replay\"** to show ChatDev's development process"; - filename = file.name; - filelable.innerHTML = md.render(info); - } - } -} - -//extract information -function extraction(contents) { - const regex = /\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} \w+)\] ([.\s\S\n\r\d\D\t]*?)(?=\n\[\d|$)/g; - - var matches = []; - - let match; - var itemp = 0; - while ((match = regex.exec(contents))) { - console.log(itemp); - itemp++; - const timestamp = match[1]; - const text = match[2]; - matches.push({ - timestamp, - text - }); - } - const regex_assistant = /(.*):([.\r\n\s\S\t\d\D]*)<->([.\r\n\s\S\t\d\D]*?)\]([.\r\n\s\S\t\d\D]*)/g; - const regex_user = /(.*):(.*)(\[Start Chat\])([.\r\n\s\S\t\d\D]*?)\]([.\r\n\s\S\t\d\D]*)/g; - const regex_prompt = /(Prompt Engineer): "([.\s\S\d\D]*)"/g - - const regex_end = /(AgentTech Ends|ChatDev Ends)/g; - const regex_start = /(ChatDev Starts)([\D\s])*(\d*)/g; - - const regex_task = /(task_prompt)(.*):(.*)/g; - const regex_info = /Software Info([\r\n\s\S\t\d\D]*)/g; - - const regex_system = /System/g; - const regex_debug = /DEBUG/g; - - var dialog = []; - var count = 0; - - for (let i = 0; i < matches.length; ++i) { - var if_break = false; - console.log(i); - if (i == 159 || i == 198 || i == 223 || i == 260 || i == 416 || i == 537) { - //console.log(matches[i]); - } - while ((match = regex_debug.exec(matches[i].timestamp)) !== null) { - if_break = true; - } - while ((match = regex_system.exec(matches[i].text)) !== null) { - if_break = true; - } - while (((match = regex_prompt.exec(matches[i].text)) !== null)) { - const type = "assitant"; - const character = match[1]; - const command = match[2]; - const len = match[2].length; - count += 1; - dialog.push({ - type, - character, - command, - len, - count - }); - if_break = true; - } - if (if_break) { - continue; - } - - while ((match = regex_assistant.exec(matches[i].text)) !== null) { - const type = "assitant"; - const character = match[1]; - const command = match[4]; - const len = match[4].length; - count += 1; - dialog.push({ - type, - character, - command, - len, - count - }); - - } - while ((match = regex_user.exec(matches[i].text)) !== null) { - const type = "user"; - const character = match[1]; - const command = match[5]; - const len = match[5].length; - count += 1; - dialog.push({ - type, - character, - command, - len, - count - }); - } - while ((match = regex_start.exec(matches[i].text)) !== null) { - const start = match[1]; - const len = match[1].length; - dialog.push({ - start, - len, - }); - - } - while ((match = regex_end.exec(matches[i].text)) !== null) { - const end = match[1]; - const len = match[1].length; - dialog.push({ - end, - len, - }); - - } - while ((match = regex_task.exec(matches[i].text)) !== null) { - const task = match[3]; - dialog.push({ - task - }); - - } - while ((match = regex_info.exec(matches[i].text)) !== null) { - const info = match[1]; - if ((/code_lines(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info) != null) { - Softwareinfo.code_lines = (/code_lines(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info)[1]; - } - if ((/num_code_files(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info) != null) { - Softwareinfo.num_code_files = (/num_code_files(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info)[1]; - } - if ((/num_png_files(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info) != null) { - Softwareinfo.num_png_files = (/num_png_files(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info)[1]; - } - if ((/num_doc_files(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info) != null) { - Softwareinfo.num_doc_files = 
(/num_doc_files(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info)[1]; - } - if ((/env_lines(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info) != null) { - Softwareinfo.env_lines = (/env_lines(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info)[1]; - } - if ((/manual_lines(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info) != null) { - Softwareinfo.manual_lines = (/manual_lines(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info)[1]; - } - if ((/duration(?:[\t\n\r\s\D]*?)=(-?(\d*)(.(\d)*)?s)/g).exec(info) != null) { - Softwareinfo.duration = (/duration(?:[\t\n\r\s\D]*?)=(-?(\d*)(.(\d)*)?s)/g).exec(info)[1]; - } - if ((/num_utterances(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info) != null) { - Softwareinfo.num_utterances = (/num_utterances(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info)[1]; - } - if ((/num_self_reflections(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info) != null) { - Softwareinfo.num_self_reflections = (/num_self_reflections(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info)[1]; - } - if ((/num_prompt_tokens(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info) != null) { - Softwareinfo.num_prompt_tokens = (/num_prompt_tokens(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info)[1]; - } - if ((/num_completion_tokens(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info) != null) { - Softwareinfo.num_completion_tokens = (/num_completion_tokens(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info)[1]; - } - if ((/num_total_tokens(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info) != null) { - Softwareinfo.num_total_tokens = (/num_total_tokens(?:[\t\n\r\s\D]*?)=(-?(\d*))/g).exec(info)[1]; - } - if ((/cost(?:[\t\n\r\s\D]*?)=(.((\d)*\.(\d)*))/g).exec(info) != null) { - Softwareinfo.cost = (/cost(?:[\t\n\r\s\D]*?)=(.((\d)*\.(\d)*))/g).exec(info)[1]; - } - if ((/version_updates(?:[\t\n\r\s\D]*?)=(-?\d*)/g).exec(info) != null) { - Softwareinfo.version_updates = (/version_updates(?:[\t\n\r\s\D]*?)=(-?\d*)/g).exec(info)[1]; - } - - dialog.push({ - info, - Softwareinfo - }); - - } - } - return dialog; -} - -//show dailog -function createPara(d, i) { - const singleDialog = document.createElement("div"); - singleDialog.style.position = "relative"; - curdialog = singleDialog; - singleDialog.style.display = "flex"; - singleDialog.style.flexDirection = "column"; - singleDialog.style.width = "773px"; - dialogbody.appendChild(singleDialog); - var paralen; - if (d.type && d.character) { - updateCompanyWorking(d.character); - var renderedHtml = md.render(d.character); - const character = document.createElement("div"); - character.style.display = "flex"; - - character.style.backgroundColor = "lightblue"; - character.style.width = "fit-content"; - character.style.fontSize = "13px "; - character.style.border = "1px solid rgba(11, 20, 150, .3)"; - character.style.borderRadius = "10px"; - character.style.boxShadow = "2px 2px 2px black"; - character.style.fontFamily = "'Lucida Sans', 'Lucida Sans Regular', 'Lucida Grande', 'Lucida Sans Unicode', Geneva, Verdana, sans-serif;"; - - if (d.type == "user") { - character.style.position = "relative"; - character.style.marginLeft = "auto"; - } - character.innerHTML = renderedHtml; - singleDialog.appendChild(character); - - const characterimg = document.createElement("img"); - console.log(d.character); - if (d.character == "Programmer") { - characterimg.src = "figures/programmer.png"; - } else if (d.character == "Code Reviewer") { - characterimg.src = "figures/reviewer.png"; - } else if (d.character == "Chief Human Resource Officer") { - characterimg.src = "figures/hr.png"; - } else if (d.character == "Chief Executive Officer") { - characterimg.src = "figures/ceo.png"; - } else if 
(d.character == "Chief Product Officer") { - characterimg.src = "figures/cpo.png"; - } else if (d.character == "Chief Technology Officer") { - characterimg.src = "figures/cto.png"; - } else if (d.character == "Chief Creative Officer") { - characterimg.src = "figures/designer.png"; - } else if (d.character == "Software Test Engineer") { - characterimg.src = "figures/tester.png"; - } else if (d.character == "User") { - characterimg.src = "figures/user.png"; - } else if (d.character == "Counselor") { - characterimg.src = "figures/counselor.png"; - } else if (d.character == "Prompt Engineer") { - characterimg.src = "figures/pe.png"; - } - - characterimg.style.height = "40px"; - characterimg.style.width = "30px"; - characterimg.style.position = "relative"; - character.appendChild(characterimg); - character.style.width = "fit-content"; - - - var renderedHtml = md.render(d.command); - const paragraph = document.createElement("div"); - paragraph.className = "markdown-body"; - //paragraph.innerHTML = renderedHtml; - paragraph.style.padding = "10px"; - paragraph.style.border = "3px solid #a08D8D"; - paragraph.style.width = "750px"; - paragraph.style.border = "1px solid rgba(11, 20, 150, .3)"; - paragraph.style.borderRadius = "10px"; - paragraph.style.boxShadow = "2px 2px 2px black"; - - singleDialog.appendChild(paragraph); - - const emptyparagraph = document.createElement("div"); - emptyparagraph.style.height = "10px"; - singleDialog.appendChild(emptyparagraph); - - if (d.type == "user") { - paragraph.style.backgroundColor = "#4b751a"; - } else { - paragraph.style.backgroundColor = "#133153"; - } - cur_command = d.command; - cur_para = paragraph; - idx = i; - return Promise.resolve(printCommand(paragraph, d.command)); - - } else if (d.start) { - paralen = 0; - var renderedHtml = md.render("----------" + d.start + "----------"); - const starttext = document.createElement("div"); - starttext.innerHTML = renderedHtml; - singleDialog.appendChild(starttext); - - } else if (d.end) { - paralen = 0; - updateCompanyWorking("end"); - var renderedHtml = md.render("----------" + d.end + "----------"); - const endtext = document.createElement("div"); - endtext.innerHTML = renderedHtml; - singleDialog.appendChild(endtext); - var filelable = document.getElementById("successupload"); - filelable.style.display = "block"; - var info = "Replayed"; - filelable.innerHTML = md.render(info); - } else if (d.task) { - var renderedHtml = md.render("Task: " + d.task); - const tasktext = document.getElementById("Requesttext"); - tasktext.innerHTML = renderedHtml; - } else if (d.info) { - var renderedHtml = md.render(d.info); - const infotext = document.getElementById("dialogStatistic"); - var temp_label = ""; - for (var c in Softwareinfo) { - temp_label = document.getElementById(c); - if (Softwareinfo[c] != "-1" && Softwareinfo[c] != "-1s") { - temp_label.innerHTML = Softwareinfo[c]; - } - } - } -} - -//update company image -function updateCompanyWorking(character) { - if (character == "end") { - var img1 = document.getElementById("right"); - img1.style.display = "none"; - var img2 = document.getElementById("left"); - img2.style.display = "none"; - return; - } - var imgid = coordSet[character].imgid; - var left_bias = coordSet[character].left; - var top_bias = coordSet[character].top; - var img = document.getElementById(imgid); - - img.style.display = "block"; - img.style.left = left_bias; - img.style.top = top_bias; - - if (imgid == "left") { - var another_img = document.getElementById("right"); - another_img.style.display = 
"none"; - } else { - var another_img = document.getElementById("left"); - another_img.style.display = "none"; - } -} - -async function updateParashow(container, command, index, len) { - var cur_content; - if (index == len - 1) { - cur_content = command.slice(0, index); - } - if (index < len) { - cur_content = command.slice(0, index); - if (cur_content != null && cur_content != undefined) { - container.innerHTML = md.render(cur_content); - }; - } - if (index % (scrollinterval) == 0 && if_move == true) { - if (curdialog != null && curdialog != '') { - const newBoxRect = curdialog.getBoundingClientRect(); - total_height += newBoxRect.height; - dialogbody.scrollTo({ top: total_height, behavior: 'smooth' }); - } - } -} - -async function printCommand(paragraph, command) { - var paralen = command.length; - const tasks = []; - - for (let j = 0; j < paralen; j = j + charinterval) { - tasks.push(new Promise(resolve => { - pauseIntervalId = setTimeout(() => { - updateParashow(paragraph, command, j, paralen); - resolve(); - }, timeinterval * j); - })); - - if (isPaused) { - await Promise.all(tasks); - } - } - await Promise.all(tasks); - return 1; -} \ No newline at end of file diff --git a/spaces/evaluate-metric/bleurt/README.md b/spaces/evaluate-metric/bleurt/README.md deleted file mode 100644 index 03a9bf829822e473636bf66e5a25f4edb8526d5b..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/bleurt/README.md +++ /dev/null @@ -1,108 +0,0 @@ ---- -title: BLEURT -emoji: 🤗 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -tags: -- evaluate -- metric -description: >- - BLEURT a learnt evaluation metric for Natural Language Generation. It is built using multiple phases of transfer learning starting from a pretrained BERT model (Devlin et al. 2018) - and then employing another pre-training phrase using synthetic data. Finally it is trained on WMT human annotations. You may run BLEURT out-of-the-box or fine-tune - it for your specific application (the latter is expected to perform better). - - See the project's README at https://github.com/google-research/bleurt#readme for more information. ---- - -# Metric Card for BLEURT - - -## Metric Description -BLEURT is a learned evaluation metric for Natural Language Generation. It is built using multiple phases of transfer learning starting from a pretrained BERT model [Devlin et al. 2018](https://arxiv.org/abs/1810.04805), employing another pre-training phrase using synthetic data, and finally trained on WMT human annotations. - -It is possible to run BLEURT out-of-the-box or fine-tune it for your specific application (the latter is expected to perform better). -See the project's [README](https://github.com/google-research/bleurt#readme) for more information. - -## Intended Uses -BLEURT is intended to be used for evaluating text produced by language models. - -## How to Use - -This metric takes as input lists of predicted sentences and reference sentences: - -```python ->>> predictions = ["hello there", "general kenobi"] ->>> references = ["hello there", "general kenobi"] ->>> bleurt = load("bleurt", module_type="metric") ->>> results = bleurt.compute(predictions=predictions, references=references) -``` - -### Inputs -- **predictions** (`list` of `str`s): List of generated sentences to score. -- **references** (`list` of `str`s): List of references to compare to. -- **checkpoint** (`str`): BLEURT checkpoint. Will default to `BLEURT-tiny` if not specified. 
Other models that can be chosen are: `"bleurt-tiny-128"`, `"bleurt-tiny-512"`, `"bleurt-base-128"`, `"bleurt-base-512"`, `"bleurt-large-128"`, `"bleurt-large-512"`, `"BLEURT-20-D3"`, `"BLEURT-20-D6"`, `"BLEURT-20-D12"` and `"BLEURT-20"`.
-
-### Output Values
-- **scores**: a `list` of scores, one per prediction.
-
-Output Example:
-```python
-{'scores': [1.0295498371124268, 1.0445425510406494]}
-
-```
-
-BLEURT's output is always a number between 0 and (approximately) 1. This value indicates how similar the generated text is to the reference texts, with values closer to 1 representing more similar texts.
-
-#### Values from Popular Papers
-
-The [original BLEURT paper](https://arxiv.org/pdf/2004.04696.pdf) reported that the metric is better correlated with human judgment compared to similar metrics such as BLEU and BERTscore.
-
-BLEURT is used to compare models across different tasks (e.g. [Table to text generation](https://paperswithcode.com/sota/table-to-text-generation-on-dart?metric=BLEURT)).
-
-### Examples
-
-Example with the default model:
-```python
->>> predictions = ["hello there", "general kenobi"]
->>> references = ["hello there", "general kenobi"]
->>> bleurt = load("bleurt", module_type="metric")
->>> results = bleurt.compute(predictions=predictions, references=references)
->>> print(results)
-{'scores': [1.0295498371124268, 1.0445425510406494]}
-```
-
-Example with the `"bleurt-base-128"` model checkpoint:
-```python
->>> predictions = ["hello there", "general kenobi"]
->>> references = ["hello there", "general kenobi"]
->>> bleurt = load("bleurt", module_type="metric", checkpoint="bleurt-base-128")
->>> results = bleurt.compute(predictions=predictions, references=references)
->>> print(results)
-{'scores': [1.0295498371124268, 1.0445425510406494]}
-```
-
-## Limitations and Bias
-The [original BLEURT paper](https://arxiv.org/pdf/2004.04696.pdf) showed that BLEURT correlates well with human judgment, but this depends on the model and language pair selected.
-
-Furthermore, currently BLEURT only supports English-language scoring, given that it leverages models trained on English corpora. It may also reflect, to a certain extent, biases and correlations that were present in the model training data.
-
-Finally, calculating the BLEURT metric involves downloading the BLEURT model that is used to compute the score, which can take a significant amount of time depending on the model chosen. Starting with the default model, `bleurt-tiny`, and testing out larger models if necessary can be a useful approach if memory or internet speed is an issue.
-
-
-## Citation
-```bibtex
-@inproceedings{bleurt,
-  title={BLEURT: Learning Robust Metrics for Text Generation},
-  author={Thibault Sellam and Dipanjan Das and Ankur P.
Parikh}, - booktitle={ACL}, - year={2020}, - url={https://arxiv.org/abs/2004.04696} -} -``` - -## Further References -- The original [BLEURT GitHub repo](https://github.com/google-research/bleurt/) diff --git a/spaces/evanpierce/3D_Photo_Inpainting2/mesh.py b/spaces/evanpierce/3D_Photo_Inpainting2/mesh.py deleted file mode 100644 index 95cae5be1c26e517fa4d81bd03325a0f0017f9ad..0000000000000000000000000000000000000000 --- a/spaces/evanpierce/3D_Photo_Inpainting2/mesh.py +++ /dev/null @@ -1,2296 +0,0 @@ -import os -import numpy as np -try: - import cynetworkx as netx -except ImportError: - import networkx as netx -import matplotlib.pyplot as plt -from functools import partial -from vispy import scene, io -from vispy.scene import visuals -from vispy.visuals.filters import Alpha -import cv2 -from moviepy.editor import ImageSequenceClip -from skimage.transform import resize -import time -import copy -import torch -import os -from utils import path_planning, open_small_mask, clean_far_edge, refine_depth_around_edge -from utils import refine_color_around_edge, filter_irrelevant_edge_new, require_depth_edge, clean_far_edge_new -from utils import create_placeholder, refresh_node, find_largest_rect -from mesh_tools import get_depth_from_maps, get_map_from_ccs, get_edge_from_nodes, get_depth_from_nodes, get_rgb_from_nodes, crop_maps_by_size, convert2tensor, recursive_add_edge, update_info, filter_edge, relabel_node, depth_inpainting -from mesh_tools import refresh_bord_depth, enlarge_border, fill_dummy_bord, extrapolate, fill_missing_node, incomplete_node, get_valid_size, dilate_valid_size, size_operation -import transforms3d -import random -from functools import reduce - -def create_mesh(depth, image, int_mtx, config): - H, W, C = image.shape - ext_H, ext_W = H + 2 * config['extrapolation_thickness'], W + 2 * config['extrapolation_thickness'] - LDI = netx.Graph(H=ext_H, W=ext_W, noext_H=H, noext_W=W, cam_param=int_mtx) - xy2depth = {} - int_mtx_pix = int_mtx * np.array([[W], [H], [1.]]) - LDI.graph['cam_param_pix'], LDI.graph['cam_param_pix_inv'] = int_mtx_pix, np.linalg.inv(int_mtx_pix) - disp = 1. 
/ (-depth) - LDI.graph['hoffset'], LDI.graph['woffset'] = config['extrapolation_thickness'], config['extrapolation_thickness'] - LDI.graph['bord_up'], LDI.graph['bord_down'] = LDI.graph['hoffset'] + 0, LDI.graph['hoffset'] + H - LDI.graph['bord_left'], LDI.graph['bord_right'] = LDI.graph['woffset'] + 0, LDI.graph['woffset'] + W - for idx in range(H): - for idy in range(W): - x, y = idx + LDI.graph['hoffset'], idy + LDI.graph['woffset'] - LDI.add_node((x, y, -depth[idx, idy]), - color=image[idx, idy], - disp=disp[idx, idy], - synthesis=False, - cc_id=set()) - xy2depth[(x, y)] = [-depth[idx, idy]] - for x, y, d in LDI.nodes: - two_nes = [ne for ne in [(x+1, y), (x, y+1)] if ne[0] < LDI.graph['bord_down'] and ne[1] < LDI.graph['bord_right']] - [LDI.add_edge((ne[0], ne[1], xy2depth[ne][0]), (x, y, d)) for ne in two_nes] - LDI = calculate_fov(LDI) - image = np.pad(image, - pad_width=((config['extrapolation_thickness'], config['extrapolation_thickness']), - (config['extrapolation_thickness'], config['extrapolation_thickness']), - (0, 0)), - mode='constant') - depth = np.pad(depth, - pad_width=((config['extrapolation_thickness'], config['extrapolation_thickness']), - (config['extrapolation_thickness'], config['extrapolation_thickness'])), - mode='constant') - - return LDI, xy2depth, image, depth - - -def tear_edges(mesh, threshold = 0.00025, xy2depth=None): - remove_edge_list = [] - remove_horizon, remove_vertical = np.zeros((2, mesh.graph['H'], mesh.graph['W'])) - mesh_nodes = mesh.nodes - for edge in mesh.edges: - if abs(mesh_nodes[edge[0]]['disp'] - mesh_nodes[edge[1]]['disp']) > threshold: - remove_edge_list.append((edge[0], edge[1])) - - near, far = edge if abs(edge[0][2]) < abs(edge[1][2]) else edge[::-1] - - mesh_nodes[far]['near'] = [] if mesh_nodes[far].get('near') is None else mesh_nodes[far]['near'].append(near) - mesh_nodes[near]['far'] = [] if mesh_nodes[near].get('far') is None else mesh_nodes[near]['far'].append(far) - - if near[0] == far[0]: - remove_horizon[near[0], np.minimum(near[1], far[1])] = 1 - elif near[1] == far[1]: - remove_vertical[np.minimum(near[0], far[0]), near[1]] = 1 - mesh.remove_edges_from(remove_edge_list) - - remove_edge_list = [] - - dang_horizon = np.where(np.roll(remove_horizon, 1, 0) + np.roll(remove_horizon, -1, 0) - remove_horizon == 2) - dang_vertical = np.where(np.roll(remove_vertical, 1, 1) + np.roll(remove_vertical, -1, 1) - remove_vertical == 2) - - horizon_condition = lambda x, y: mesh.graph['bord_up'] + 1 <= x < mesh.graph['bord_down'] - 1 - vertical_condition = lambda x, y: mesh.graph['bord_left'] + 1 <= y < mesh.graph['bord_right'] - 1 - - prjto3d = lambda x, y: (x, y, xy2depth[(x, y)][0]) - - node_existence = lambda x, y: mesh.has_node(prjto3d(x, y)) - - for x, y in zip(dang_horizon[0], dang_horizon[1]): - if horizon_condition(x, y) and node_existence(x, y) and node_existence(x, y+1): - remove_edge_list.append((prjto3d(x, y), prjto3d(x, y+1))) - for x, y in zip(dang_vertical[0], dang_vertical[1]): - if vertical_condition(x, y) and node_existence(x, y) and node_existence(x+1, y): - remove_edge_list.append((prjto3d(x, y), prjto3d(x+1, y))) - mesh.remove_edges_from(remove_edge_list) - - return mesh - -def calculate_fov(mesh): - k = mesh.graph['cam_param'] - mesh.graph['hFov'] = 2 * np.arctan(1. / (2*k[0, 0])) - mesh.graph['vFov'] = 2 * np.arctan(1. 
/ (2*k[1, 1])) - mesh.graph['aspect'] = mesh.graph['noext_H'] / mesh.graph['noext_W'] - - return mesh - -def calculate_fov_FB(mesh): - mesh.graph['aspect'] = mesh.graph['H'] / mesh.graph['W'] - if mesh.graph['H'] > mesh.graph['W']: - mesh.graph['hFov'] = 0.508015513 - half_short = np.tan(mesh.graph['hFov']/2.0) - half_long = half_short * mesh.graph['aspect'] - mesh.graph['vFov'] = 2.0 * np.arctan(half_long) - else: - mesh.graph['vFov'] = 0.508015513 - half_short = np.tan(mesh.graph['vFov']/2.0) - half_long = half_short / mesh.graph['aspect'] - mesh.graph['hFov'] = 2.0 * np.arctan(half_long) - - return mesh - -def reproject_3d_int_detail(sx, sy, z, k_00, k_02, k_11, k_12, w_offset, h_offset): - abs_z = abs(z) - return [abs_z * ((sy+0.5-w_offset) * k_00 + k_02), abs_z * ((sx+0.5-h_offset) * k_11 + k_12), abs_z] - -def reproject_3d_int_detail_FB(sx, sy, z, w_offset, h_offset, mesh): - if mesh.graph.get('tan_hFov') is None: - mesh.graph['tan_hFov'] = np.tan(mesh.graph['hFov'] / 2.) - if mesh.graph.get('tan_vFov') is None: - mesh.graph['tan_vFov'] = np.tan(mesh.graph['vFov'] / 2.) - - ray = np.array([(-1. + 2. * ((sy+0.5-w_offset)/(mesh.graph['W'] - 1))) * mesh.graph['tan_hFov'], - (1. - 2. * (sx+0.5-h_offset)/(mesh.graph['H'] - 1)) * mesh.graph['tan_vFov'], - -1]) - point_3d = ray * np.abs(z) - - return point_3d - - -def reproject_3d_int(sx, sy, z, mesh): - k = mesh.graph['cam_param_pix_inv'].copy() - if k[0, 2] > 0: - k = np.linalg.inv(k) - ray = np.dot(k, np.array([sy-mesh.graph['woffset'], sx-mesh.graph['hoffset'], 1]).reshape(3, 1)) - - point_3d = ray * np.abs(z) - point_3d = point_3d.flatten() - - return point_3d - -def generate_init_node(mesh, config, min_node_in_cc): - mesh_nodes = mesh.nodes - - info_on_pix = {} - - ccs = sorted(netx.connected_components(mesh), key = len, reverse=True) - remove_nodes = [] - - for cc in ccs: - - remove_flag = True if len(cc) < min_node_in_cc else False - if remove_flag is False: - for (nx, ny, nd) in cc: - info_on_pix[(nx, ny)] = [{'depth':nd, - 'color':mesh_nodes[(nx, ny, nd)]['color'], - 'synthesis':False, - 'disp':mesh_nodes[(nx, ny, nd)]['disp']}] - else: - [remove_nodes.append((nx, ny, nd)) for (nx, ny, nd) in cc] - - for node in remove_nodes: - far_nodes = [] if mesh_nodes[node].get('far') is None else mesh_nodes[node]['far'] - for far_node in far_nodes: - if mesh.has_node(far_node) and mesh_nodes[far_node].get('near') is not None and node in mesh_nodes[far_node]['near']: - mesh_nodes[far_node]['near'].remove(node) - near_nodes = [] if mesh_nodes[node].get('near') is None else mesh_nodes[node]['near'] - for near_node in near_nodes: - if mesh.has_node(near_node) and mesh_nodes[near_node].get('far') is not None and node in mesh_nodes[near_node]['far']: - mesh_nodes[near_node]['far'].remove(node) - - [mesh.remove_node(node) for node in remove_nodes] - - return mesh, info_on_pix - -def get_neighbors(mesh, node): - return [*mesh.neighbors(node)] - -def generate_face(mesh, info_on_pix, config): - H, W = mesh.graph['H'], mesh.graph['W'] - str_faces = [] - num_node = len(mesh.nodes) - ply_flag = config.get('save_ply') - def out_fmt(input, cur_id_b, cur_id_self, cur_id_a, ply_flag): - if ply_flag is True: - input.append(' '.join(['3', cur_id_b, cur_id_self, cur_id_a]) + '\n') - else: - input.append([cur_id_b, cur_id_self, cur_id_a]) - mesh_nodes = mesh.nodes - for node in mesh_nodes: - cur_id_self = mesh_nodes[node]['cur_id'] - ne_nodes = get_neighbors(mesh, node) - four_dir_nes = {'up': [], 'left': [], - 'down': [], 'right': []} - for ne_node in ne_nodes: 
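            # The branches below are meant to bucket each neighbour by its direction relative to the
            # current node (up/left/down/right). Note that, as written, the inner comparisons test
            # ne_node against itself (e.g. ne_node[1] == ne_node[1] - 1), so only the 'right' and
            # 'down' buckets are ever filled; pairs of neighbours from adjacent buckets are then
            # emitted as triangle faces around the shared vertex.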
- store_tuple = [ne_node, mesh_nodes[ne_node]['cur_id']] - if ne_node[0] == node[0]: - if ne_node[1] == ne_node[1] - 1: - four_dir_nes['left'].append(store_tuple) - else: - four_dir_nes['right'].append(store_tuple) - else: - if ne_node[0] == ne_node[0] - 1: - four_dir_nes['up'].append(store_tuple) - else: - four_dir_nes['down'].append(store_tuple) - for node_a, cur_id_a in four_dir_nes['up']: - for node_b, cur_id_b in four_dir_nes['right']: - out_fmt(str_faces, cur_id_b, cur_id_self, cur_id_a, ply_flag) - for node_a, cur_id_a in four_dir_nes['right']: - for node_b, cur_id_b in four_dir_nes['down']: - out_fmt(str_faces, cur_id_b, cur_id_self, cur_id_a, ply_flag) - for node_a, cur_id_a in four_dir_nes['down']: - for node_b, cur_id_b in four_dir_nes['left']: - out_fmt(str_faces, cur_id_b, cur_id_self, cur_id_a, ply_flag) - for node_a, cur_id_a in four_dir_nes['left']: - for node_b, cur_id_b in four_dir_nes['up']: - out_fmt(str_faces, cur_id_b, cur_id_self, cur_id_a, ply_flag) - - return str_faces - -def reassign_floating_island(mesh, info_on_pix, image, depth): - H, W = mesh.graph['H'], mesh.graph['W'], - mesh_nodes = mesh.nodes - bord_up, bord_down = mesh.graph['bord_up'], mesh.graph['bord_down'] - bord_left, bord_right = mesh.graph['bord_left'], mesh.graph['bord_right'] - W = mesh.graph['W'] - lost_map = np.zeros((H, W)) - - ''' - (5) is_inside(x, y, xmin, xmax, ymin, ymax) : Check if a pixel(x, y) is inside the border. - (6) get_cross_nes(x, y) : Get the four cross neighbors of pixel(x, y). - ''' - key_exist = lambda d, k: k in d - is_inside = lambda x, y, xmin, xmax, ymin, ymax: xmin <= x < xmax and ymin <= y < ymax - get_cross_nes = lambda x, y: [(x + 1, y), (x - 1, y), (x, y - 1), (x, y + 1)] - ''' - (A) Highlight the pixels on isolated floating island. - (B) Number those isolated floating islands with connected component analysis. - (C) For each isolated island: - (1) Find its longest surrounded depth edge. - (2) Propogate depth from that depth edge to the pixels on the isolated island. - (3) Build the connection between the depth edge and that isolated island. 
- ''' - for x in range(H): - for y in range(W): - if is_inside(x, y, bord_up, bord_down, bord_left, bord_right) and not(key_exist(info_on_pix, (x, y))): - lost_map[x, y] = 1 - _, label_lost_map = cv2.connectedComponents(lost_map.astype(np.uint8), connectivity=4) - mask = np.zeros((H, W)) - mask[bord_up:bord_down, bord_left:bord_right] = 1 - label_lost_map = (label_lost_map * mask).astype(np.int) - - for i in range(1, label_lost_map.max()+1): - lost_xs, lost_ys = np.where(label_lost_map == i) - surr_edge_ids = {} - for lost_x, lost_y in zip(lost_xs, lost_ys): - if (lost_x, lost_y) == (295, 389) or (lost_x, lost_y) == (296, 389): - import pdb; pdb.set_trace() - for ne in get_cross_nes(lost_x, lost_y): - if key_exist(info_on_pix, ne): - for info in info_on_pix[ne]: - ne_node = (ne[0], ne[1], info['depth']) - if key_exist(mesh_nodes[ne_node], 'edge_id'): - edge_id = mesh_nodes[ne_node]['edge_id'] - surr_edge_ids[edge_id] = surr_edge_ids[edge_id] + [ne_node] if \ - key_exist(surr_edge_ids, edge_id) else [ne_node] - if len(surr_edge_ids) == 0: - continue - edge_id, edge_nodes = sorted([*surr_edge_ids.items()], key=lambda x: len(x[1]), reverse=True)[0] - edge_depth_map = np.zeros((H, W)) - for node in edge_nodes: - edge_depth_map[node[0], node[1]] = node[2] - lost_xs, lost_ys = np.where(label_lost_map == i) - while lost_xs.shape[0] > 0: - lost_xs, lost_ys = np.where(label_lost_map == i) - for lost_x, lost_y in zip(lost_xs, lost_ys): - propagated_depth = [] - real_nes = [] - for ne in get_cross_nes(lost_x, lost_y): - if not(is_inside(ne[0], ne[1], bord_up, bord_down, bord_left, bord_right)) or \ - edge_depth_map[ne[0], ne[1]] == 0: - continue - propagated_depth.append(edge_depth_map[ne[0], ne[1]]) - real_nes.append(ne) - if len(real_nes) == 0: - continue - reassign_depth = np.mean(propagated_depth) - label_lost_map[lost_x, lost_y] = 0 - edge_depth_map[lost_x, lost_y] = reassign_depth - depth[lost_x, lost_y] = -reassign_depth - mesh.add_node((lost_x, lost_y, reassign_depth), color=image[lost_x, lost_y], - synthesis=False, - disp=1./reassign_depth, - cc_id=set()) - info_on_pix[(lost_x, lost_y)] = [{'depth':reassign_depth, - 'color':image[lost_x, lost_y], - 'synthesis':False, - 'disp':1./reassign_depth}] - new_connections = [((lost_x, lost_y, reassign_depth), - (ne[0], ne[1], edge_depth_map[ne[0], ne[1]])) for ne in real_nes] - mesh.add_edges_from(new_connections) - - return mesh, info_on_pix, depth - -def remove_node_feat(mesh, *feats): - mesh_nodes = mesh.nodes - for node in mesh_nodes: - for feat in feats: - mesh_nodes[node][feat] = None - - return mesh - -def update_status(mesh, info_on_pix, depth=None): - ''' - (2) clear_node_feat(G, *fts) : Clear all the node feature on graph G. - (6) get_cross_nes(x, y) : Get the four cross neighbors of pixel(x, y). 
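    In short: clear each node's cached 'edge_id' / 'far' / 'near' labels, then, for every node with
    fewer than four mesh neighbours, re-tag its four cross neighbours as 'near' or 'far' by comparing
    absolute depths, and finally (when a depth map is given) sync the depth map and the disparity
    stored in info_on_pix with the updated node depths.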
- ''' - key_exist = lambda d, k: d.get(k) is not None - is_inside = lambda x, y, xmin, xmax, ymin, ymax: xmin <= x < xmax and ymin <= y < ymax - get_cross_nes = lambda x, y: [(x + 1, y), (x - 1, y), (x, y - 1), (x, y + 1)] - append_element = lambda d, k, x: d[k] + [x] if key_exist(d, k) else [x] - - def clear_node_feat(G, fts): - le_nodes = G.nodes - for k in le_nodes: - v = le_nodes[k] - for ft in fts: - if ft in v: - v[ft] = None - - clear_node_feat(mesh, ['edge_id', 'far', 'near']) - bord_up, bord_down = mesh.graph['bord_up'], mesh.graph['bord_down'] - bord_left, bord_right = mesh.graph['bord_left'], mesh.graph['bord_right'] - - le_nodes = mesh.nodes - - for node_key in le_nodes: - if mesh.neighbors(node_key).__length_hint__() == 4: - continue - four_nes = [xx for xx in get_cross_nes(node_key[0], node_key[1]) if - is_inside(xx[0], xx[1], bord_up, bord_down, bord_left, bord_right) and - xx in info_on_pix] - [four_nes.remove((ne_node[0], ne_node[1])) for ne_node in mesh.neighbors(node_key)] - for ne in four_nes: - for info in info_on_pix[ne]: - assert mesh.has_node((ne[0], ne[1], info['depth'])), "No node_key" - ind_node = le_nodes[node_key] - if abs(node_key[2]) > abs(info['depth']): - ind_node['near'] = append_element(ind_node, 'near', (ne[0], ne[1], info['depth'])) - else: - ind_node['far'] = append_element(ind_node, 'far', (ne[0], ne[1], info['depth'])) - if depth is not None: - for key, value in info_on_pix.items(): - if depth[key[0], key[1]] != abs(value[0]['depth']): - value[0]['disp'] = 1. / value[0]['depth'] - depth[key[0], key[1]] = abs(value[0]['depth']) - - return mesh, depth, info_on_pix - else: - return mesh - -def group_edges(LDI, config, image, remove_conflict_ordinal, spdb=False): - - ''' - (1) add_new_node(G, node) : add "node" to graph "G" - (2) add_new_edge(G, node_a, node_b) : add edge "node_a--node_b" to graph "G" - (3) exceed_thre(x, y, thre) : Check if difference between "x" and "y" exceed threshold "thre" - (4) key_exist(d, k) : Check if key "k' exists in dictionary "d" - (5) comm_opp_bg(G, x, y) : Check if node "x" and "y" in graph "G" treat the same opposite node as background - (6) comm_opp_fg(G, x, y) : Check if node "x" and "y" in graph "G" treat the same opposite node as foreground - ''' - add_new_node = lambda G, node: None if G.has_node(node) else G.add_node(node) - add_new_edge = lambda G, node_a, node_b: None if G.has_edge(node_a, node_b) else G.add_edge(node_a, node_b) - exceed_thre = lambda x, y, thre: (abs(x) - abs(y)) > thre - key_exist = lambda d, k: d.get(k) is not None - comm_opp_bg = lambda G, x, y: key_exist(G.nodes[x], 'far') and key_exist(G.nodes[y], 'far') and \ - not(set(G.nodes[x]['far']).isdisjoint(set(G.nodes[y]['far']))) - comm_opp_fg = lambda G, x, y: key_exist(G.nodes[x], 'near') and key_exist(G.nodes[y], 'near') and \ - not(set(G.nodes[x]['near']).isdisjoint(set(G.nodes[y]['near']))) - discont_graph = netx.Graph() - ''' - (A) Skip the pixel at image boundary, we don't want to deal with them. - (B) Identify discontinuity by the number of its neighbor(degree). - If the degree < 4(up/right/buttom/left). We will go through following steps: - (1) Add the discontinuity pixel "node" to graph "discont_graph". - (2) Find "node"'s cross neighbor(up/right/buttom/left) "ne_node". - - If the cross neighbor "ne_node" is a discontinuity pixel(degree("ne_node") < 4), - (a) add it to graph "discont_graph" and build the connection between "ne_node" and "node". 
- (b) label its cross neighbor as invalid pixels "inval_diag_candi" to avoid building - connection between original discontinuity pixel "node" and "inval_diag_candi". - - Otherwise, find "ne_node"'s cross neighbors, called diagonal candidate "diag_candi". - - The "diag_candi" is diagonal to the original discontinuity pixel "node". - - If "diag_candi" exists, go to step(3). - (3) A diagonal candidate "diag_candi" will be : - - added to the "discont_graph" if its degree < 4. - - connected to the original discontinuity pixel "node" if it satisfied either - one of following criterion: - (a) the difference of disparity between "diag_candi" and "node" is smaller than default threshold. - (b) the "diag_candi" and "node" face the same opposite pixel. (See. function "tear_edges") - (c) Both of "diag_candi" and "node" must_connect to each other. (See. function "combine_end_node") - (C) Aggregate each connected part in "discont_graph" into "discont_ccs" (A.K.A. depth edge). - ''' - for node in LDI.nodes: - if not(LDI.graph['bord_up'] + 1 <= node[0] <= LDI.graph['bord_down'] - 2 and \ - LDI.graph['bord_left'] + 1 <= node[1] <= LDI.graph['bord_right'] - 2): - continue - neighbors = [*LDI.neighbors(node)] - if len(neighbors) < 4: - add_new_node(discont_graph, node) - diag_candi_anc, inval_diag_candi, discont_nes = set(), set(), set() - for ne_node in neighbors: - if len([*LDI.neighbors(ne_node)]) < 4: - add_new_node(discont_graph, ne_node) - add_new_edge(discont_graph, ne_node, node) - discont_nes.add(ne_node) - else: - diag_candi_anc.add(ne_node) - inval_diag_candi = set([inval_diagonal for ne_node in discont_nes for inval_diagonal in LDI.neighbors(ne_node) if \ - abs(inval_diagonal[0] - node[0]) < 2 and abs(inval_diagonal[1] - node[1]) < 2]) - for ne_node in diag_candi_anc: - if ne_node[0] == node[0]: - diagonal_xys = [[ne_node[0] + 1, ne_node[1]], [ne_node[0] - 1, ne_node[1]]] - elif ne_node[1] == node[1]: - diagonal_xys = [[ne_node[0], ne_node[1] + 1], [ne_node[0], ne_node[1] - 1]] - for diag_candi in LDI.neighbors(ne_node): - if [diag_candi[0], diag_candi[1]] in diagonal_xys and LDI.degree(diag_candi) < 4: - if diag_candi not in inval_diag_candi: - if not exceed_thre(1./node[2], 1./diag_candi[2], config['depth_threshold']) or \ - (comm_opp_bg(LDI, diag_candi, node) and comm_opp_fg(LDI, diag_candi, node)): - add_new_node(discont_graph, diag_candi) - add_new_edge(discont_graph, diag_candi, node) - if key_exist(LDI.nodes[diag_candi], 'must_connect') and node in LDI.nodes[diag_candi]['must_connect'] and \ - key_exist(LDI.nodes[node], 'must_connect') and diag_candi in LDI.nodes[node]['must_connect']: - add_new_node(discont_graph, diag_candi) - add_new_edge(discont_graph, diag_candi, node) - if spdb == True: - import pdb; pdb.set_trace() - discont_ccs = [*netx.connected_components(discont_graph)] - ''' - In some corner case, a depth edge "discont_cc" will contain both - foreground(FG) and background(BG) pixels. This violate the assumption that - a depth edge can only composite by one type of pixel(FG or BG). - We need to further divide this depth edge into several sub-part so that the - assumption is satisfied. - (A) A depth edge is invalid if both of its "far_flag"(BG) and - "near_flag"(FG) are True. - (B) If the depth edge is invalid, we need to do: - (1) Find the role("oridinal") of each pixel on the depth edge. - "-1" --> Its opposite pixels has smaller depth(near) than it. - It is a backgorund pixel. - "+1" --> Its opposite pixels has larger depth(far) than it. - It is a foregorund pixel. 
- "0" --> Some of opposite pixels has larger depth(far) than it, - and some has smaller pixel than it. - It is an ambiguous pixel. - (2) For each pixel "discont_node", check if its neigbhors' roles are consistent. - - If not, break the connection between the neighbor "ne_node" that has a role - different from "discont_node". - - If yes, remove all the role that are inconsistent to its neighbors "ne_node". - (3) Connected component analysis to re-identified those divided depth edge. - (C) Aggregate each connected part in "discont_graph" into "discont_ccs" (A.K.A. depth edge). - ''' - if remove_conflict_ordinal: - new_discont_ccs = [] - num_new_cc = 0 - for edge_id, discont_cc in enumerate(discont_ccs): - near_flag = False - far_flag = False - for discont_node in discont_cc: - near_flag = True if key_exist(LDI.nodes[discont_node], 'far') else near_flag - far_flag = True if key_exist(LDI.nodes[discont_node], 'near') else far_flag - if far_flag and near_flag: - break - if far_flag and near_flag: - for discont_node in discont_cc: - discont_graph.nodes[discont_node]['ordinal'] = \ - np.array([key_exist(LDI.nodes[discont_node], 'far'), - key_exist(LDI.nodes[discont_node], 'near')]) * \ - np.array([-1, 1]) - discont_graph.nodes[discont_node]['ordinal'] = \ - np.sum(discont_graph.nodes[discont_node]['ordinal']) - remove_nodes, remove_edges = [], [] - for discont_node in discont_cc: - ordinal_relation = np.sum([discont_graph.nodes[xx]['ordinal'] \ - for xx in discont_graph.neighbors(discont_node)]) - near_side = discont_graph.nodes[discont_node]['ordinal'] <= 0 - if abs(ordinal_relation) < len([*discont_graph.neighbors(discont_node)]): - remove_nodes.append(discont_node) - for ne_node in discont_graph.neighbors(discont_node): - remove_flag = (near_side and not(key_exist(LDI.nodes[ne_node], 'far'))) or \ - (not near_side and not(key_exist(LDI.nodes[ne_node], 'near'))) - remove_edges += [(discont_node, ne_node)] if remove_flag else [] - else: - if near_side and key_exist(LDI.nodes[discont_node], 'near'): - LDI.nodes[discont_node].pop('near') - elif not(near_side) and key_exist(LDI.nodes[discont_node], 'far'): - LDI.nodes[discont_node].pop('far') - discont_graph.remove_edges_from(remove_edges) - sub_mesh = discont_graph.subgraph(list(discont_cc)).copy() - sub_discont_ccs = [*netx.connected_components(sub_mesh)] - is_redun_near = lambda xx: len(xx) == 1 and xx[0] in remove_nodes and key_exist(LDI.nodes[xx[0]], 'far') - for sub_discont_cc in sub_discont_ccs: - if is_redun_near(list(sub_discont_cc)): - LDI.nodes[list(sub_discont_cc)[0]].pop('far') - new_discont_ccs.append(sub_discont_cc) - else: - new_discont_ccs.append(discont_cc) - discont_ccs = new_discont_ccs - new_discont_ccs = None - if spdb == True: - import pdb; pdb.set_trace() - - for edge_id, edge_cc in enumerate(discont_ccs): - for node in edge_cc: - LDI.nodes[node]['edge_id'] = edge_id - - return discont_ccs, LDI, discont_graph - -def combine_end_node(mesh, edge_mesh, edge_ccs, depth): - import collections - mesh_nodes = mesh.nodes - connect_dict = dict() - for valid_edge_id, valid_edge_cc in enumerate(edge_ccs): - connect_info = [] - for valid_edge_node in valid_edge_cc: - single_connect = set() - for ne_node in mesh.neighbors(valid_edge_node): - if mesh_nodes[ne_node].get('far') is not None: - for fn in mesh_nodes[ne_node].get('far'): - if mesh.has_node(fn) and mesh_nodes[fn].get('edge_id') is not None: - single_connect.add(mesh_nodes[fn]['edge_id']) - if mesh_nodes[ne_node].get('near') is not None: - for fn in 
mesh_nodes[ne_node].get('near'): - if mesh.has_node(fn) and mesh_nodes[fn].get('edge_id') is not None: - single_connect.add(mesh_nodes[fn]['edge_id']) - connect_info.extend([*single_connect]) - connect_dict[valid_edge_id] = collections.Counter(connect_info) - - end_maps = np.zeros((mesh.graph['H'], mesh.graph['W'])) - edge_maps = np.zeros((mesh.graph['H'], mesh.graph['W'])) - 1 - for valid_edge_id, valid_edge_cc in enumerate(edge_ccs): - for valid_edge_node in valid_edge_cc: - edge_maps[valid_edge_node[0], valid_edge_node[1]] = valid_edge_id - if len([*edge_mesh.neighbors(valid_edge_node)]) == 1: - num_ne = 1 - if num_ne == 1: - end_maps[valid_edge_node[0], valid_edge_node[1]] = valid_edge_node[2] - nxs, nys = np.where(end_maps != 0) - invalid_nodes = set() - for nx, ny in zip(nxs, nys): - if mesh.has_node((nx, ny, end_maps[nx, ny])) is False: - invalid_nodes.add((nx, ny)) - continue - four_nes = [xx for xx in [(nx - 1, ny), (nx + 1, ny), (nx, ny - 1), (nx, ny + 1)] \ - if 0 <= xx[0] < mesh.graph['H'] and 0 <= xx[1] < mesh.graph['W'] and \ - end_maps[xx[0], xx[1]] != 0] - mesh_nes = [*mesh.neighbors((nx, ny, end_maps[nx, ny]))] - remove_num = 0 - for fne in four_nes: - if (fne[0], fne[1], end_maps[fne[0], fne[1]]) in mesh_nes: - remove_num += 1 - if remove_num == len(four_nes): - invalid_nodes.add((nx, ny)) - for invalid_node in invalid_nodes: - end_maps[invalid_node[0], invalid_node[1]] = 0 - - nxs, nys = np.where(end_maps != 0) - invalid_nodes = set() - for nx, ny in zip(nxs, nys): - if mesh_nodes[(nx, ny, end_maps[nx, ny])].get('edge_id') is None: - continue - else: - self_id = mesh_nodes[(nx, ny, end_maps[nx, ny])].get('edge_id') - self_connect = connect_dict[self_id] if connect_dict.get(self_id) is not None else dict() - four_nes = [xx for xx in [(nx - 1, ny), (nx + 1, ny), (nx, ny - 1), (nx, ny + 1)] \ - if 0 <= xx[0] < mesh.graph['H'] and 0 <= xx[1] < mesh.graph['W'] and \ - end_maps[xx[0], xx[1]] != 0] - for fne in four_nes: - if mesh_nodes[(fne[0], fne[1], end_maps[fne[0], fne[1]])].get('edge_id') is None: - continue - else: - ne_id = mesh_nodes[(fne[0], fne[1], end_maps[fne[0], fne[1]])]['edge_id'] - if self_connect.get(ne_id) is None or self_connect.get(ne_id) == 1: - continue - else: - invalid_nodes.add((nx, ny)) - for invalid_node in invalid_nodes: - end_maps[invalid_node[0], invalid_node[1]] = 0 - nxs, nys = np.where(end_maps != 0) - invalid_nodes = set() - for nx, ny in zip(nxs, nys): - four_nes = [xx for xx in [(nx - 1, ny), (nx + 1, ny), (nx, ny - 1), (nx, ny + 1)] \ - if 0 <= xx[0] < mesh.graph['H'] and 0 <= xx[1] < mesh.graph['W'] and \ - end_maps[xx[0], xx[1]] != 0] - for fne in four_nes: - if mesh.has_node((fne[0], fne[1], end_maps[fne[0], fne[1]])): - node_a, node_b = (fne[0], fne[1], end_maps[fne[0], fne[1]]), (nx, ny, end_maps[nx, ny]) - mesh.add_edge(node_a, node_b) - mesh_nodes[node_b]['must_connect'] = set() if mesh_nodes[node_b].get('must_connect') is None else mesh_nodes[node_b]['must_connect'] - mesh_nodes[node_b]['must_connect'].add(node_a) - mesh_nodes[node_b]['must_connect'] |= set([xx for xx in [*edge_mesh.neighbors(node_a)] if \ - (xx[0] - node_b[0]) < 2 and (xx[1] - node_b[1]) < 2]) - mesh_nodes[node_a]['must_connect'] = set() if mesh_nodes[node_a].get('must_connect') is None else mesh_nodes[node_a]['must_connect'] - mesh_nodes[node_a]['must_connect'].add(node_b) - mesh_nodes[node_a]['must_connect'] |= set([xx for xx in [*edge_mesh.neighbors(node_b)] if \ - (xx[0] - node_a[0]) < 2 and (xx[1] - node_a[1]) < 2]) - invalid_nodes.add((nx, ny)) - for 
invalid_node in invalid_nodes: - end_maps[invalid_node[0], invalid_node[1]] = 0 - - return mesh - -def remove_redundant_edge(mesh, edge_mesh, edge_ccs, info_on_pix, config, redundant_number=1000, invalid=False, spdb=False): - point_to_amount = {} - point_to_id = {} - end_maps = np.zeros((mesh.graph['H'], mesh.graph['W'])) - 1 - for valid_edge_id, valid_edge_cc in enumerate(edge_ccs): - for valid_edge_node in valid_edge_cc: - point_to_amount[valid_edge_node] = len(valid_edge_cc) - point_to_id[valid_edge_node] = valid_edge_id - if edge_mesh.has_node(valid_edge_node) is True: - if len([*edge_mesh.neighbors(valid_edge_node)]) == 1: - end_maps[valid_edge_node[0], valid_edge_node[1]] = valid_edge_id - nxs, nys = np.where(end_maps > -1) - point_to_adjoint = {} - for nx, ny in zip(nxs, nys): - adjoint_edges = set([end_maps[x, y] for x, y in [(nx + 1, ny), (nx - 1, ny), (nx, ny + 1), (nx, ny - 1)] if end_maps[x, y] != -1]) - point_to_adjoint[end_maps[nx, ny]] = (point_to_adjoint[end_maps[nx, ny]] | adjoint_edges) if point_to_adjoint.get(end_maps[nx, ny]) is not None else adjoint_edges - valid_edge_ccs = filter_edge(mesh, edge_ccs, config, invalid=invalid) - edge_canvas = np.zeros((mesh.graph['H'], mesh.graph['W'])) - 1 - for valid_edge_id, valid_edge_cc in enumerate(valid_edge_ccs): - for valid_edge_node in valid_edge_cc: - edge_canvas[valid_edge_node[0], valid_edge_node[1]] = valid_edge_id - if spdb is True: - plt.imshow(edge_canvas); plt.show() - import pdb; pdb.set_trace() - for valid_edge_id, valid_edge_cc in enumerate(valid_edge_ccs): - end_number = 0 - four_end_number = 0 - eight_end_number = 0 - db_eight_end_number = 0 - if len(valid_edge_cc) > redundant_number: - continue - for valid_edge_node in valid_edge_cc: - if len([*edge_mesh.neighbors(valid_edge_node)]) == 3: - break - elif len([*edge_mesh.neighbors(valid_edge_node)]) == 1: - hx, hy, hz = valid_edge_node - if invalid is False: - eight_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and edge_canvas[x, y] != -1 and edge_canvas[x, y] != valid_edge_id] - if len(eight_nes) == 0: - end_number += 1 - if invalid is True: - four_nes = []; eight_nes = []; db_eight_nes = [] - four_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and edge_canvas[x, y] != -1 and edge_canvas[x, y] != valid_edge_id] - eight_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), \ - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and edge_canvas[x, y] != -1 and edge_canvas[x, y] != valid_edge_id] - db_eight_nes = [(x, y) for x in range(hx - 2, hx + 3) for y in range(hy - 2, hy + 3) \ - if info_on_pix.get((x, y)) is not None and edge_canvas[x, y] != -1 and edge_canvas[x, y] != valid_edge_id and (x, y) != (hx, hy)] - if len(four_nes) == 0 or len(eight_nes) == 0: - end_number += 1 - if len(four_nes) == 0: - four_end_number += 1 - if len(eight_nes) == 0: - eight_end_number += 1 - if len(db_eight_nes) == 0: - db_eight_end_number += 1 - elif len([*edge_mesh.neighbors(valid_edge_node)]) == 0: - hx, hy, hz = valid_edge_node - four_nes = [(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and \ - mesh.has_edge(valid_edge_node, (x, y, info_on_pix[(x, 
y)][0]['depth'])) is False] - for ne in four_nes: - try: - if invalid is True or (point_to_amount.get(ne) is None or point_to_amount[ne] < redundant_number) or \ - point_to_id[ne] in point_to_adjoint.get(point_to_id[valid_edge_node], set()): - mesh.add_edge(valid_edge_node, ne) - except: - import pdb; pdb.set_trace() - if (invalid is not True and end_number >= 1) or (invalid is True and end_number >= 2 and eight_end_number >= 1 and db_eight_end_number >= 1): - for valid_edge_node in valid_edge_cc: - hx, hy, _ = valid_edge_node - four_nes = [(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and \ - mesh.has_edge(valid_edge_node, (x, y, info_on_pix[(x, y)][0]['depth'])) is False and \ - (edge_canvas[x, y] == -1 or edge_canvas[x, y] == valid_edge_id)] - for ne in four_nes: - if invalid is True or (point_to_amount.get(ne) is None or point_to_amount[ne] < redundant_number) or \ - point_to_id[ne] in point_to_adjoint.get(point_to_id[valid_edge_node], set()): - mesh.add_edge(valid_edge_node, ne) - - return mesh - -def judge_dangle(mark, mesh, node): - if not (1 <= node[0] < mesh.graph['H']-1) or not(1 <= node[1] < mesh.graph['W']-1): - return mark - mesh_neighbors = [*mesh.neighbors(node)] - mesh_neighbors = [xx for xx in mesh_neighbors if 0 < xx[0] < mesh.graph['H'] - 1 and 0 < xx[1] < mesh.graph['W'] - 1] - if len(mesh_neighbors) >= 3: - return mark - elif len(mesh_neighbors) <= 1: - mark[node[0], node[1]] = (len(mesh_neighbors) + 1) - else: - dan_ne_node_a = mesh_neighbors[0] - dan_ne_node_b = mesh_neighbors[1] - if abs(dan_ne_node_a[0] - dan_ne_node_b[0]) > 1 or \ - abs(dan_ne_node_a[1] - dan_ne_node_b[1]) > 1: - mark[node[0], node[1]] = 3 - - return mark - -def remove_dangling(mesh, edge_ccs, edge_mesh, info_on_pix, image, depth, config): - - tmp_edge_ccs = copy.deepcopy(edge_ccs) - for edge_cc_id, valid_edge_cc in enumerate(tmp_edge_ccs): - if len(valid_edge_cc) > 1 or len(valid_edge_cc) == 0: - continue - single_edge_node = [*valid_edge_cc][0] - hx, hy, hz = single_edge_node - eight_nes = set([(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)] \ - if info_on_pix.get((x, y)) is not None]) - four_nes = [(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] \ - if info_on_pix.get((x, y)) is not None] - sub_mesh = mesh.subgraph(eight_nes).copy() - ccs = netx.connected_components(sub_mesh) - four_ccs = [] - for cc_id, _cc in enumerate(ccs): - four_ccs.append(set()) - for cc_node in _cc: - if abs(cc_node[0] - hx) + abs(cc_node[1] - hy) < 2: - four_ccs[cc_id].add(cc_node) - largest_cc = sorted(four_ccs, key=lambda x: (len(x), -np.sum([abs(xx[2] - hz) for xx in x])))[-1] - if len(largest_cc) < 2: - for ne in four_nes: - mesh.add_edge(single_edge_node, ne) - else: - mesh.remove_edges_from([(single_edge_node, ne) for ne in mesh.neighbors(single_edge_node)]) - new_depth = np.mean([xx[2] for xx in largest_cc]) - info_on_pix[(hx, hy)][0]['depth'] = new_depth - info_on_pix[(hx, hy)][0]['disp'] = 1./new_depth - new_node = (hx, hy, new_depth) - mesh = refresh_node(single_edge_node, mesh.node[single_edge_node], new_node, dict(), mesh) - edge_ccs[edge_cc_id] = set([new_node]) - for ne in largest_cc: - mesh.add_edge(new_node, ne) - - mark = np.zeros((mesh.graph['H'], mesh.graph['W'])) - for edge_idx, edge_cc in 
enumerate(edge_ccs): - for edge_node in edge_cc: - if not (mesh.graph['bord_up'] <= edge_node[0] < mesh.graph['bord_down']-1) or \ - not (mesh.graph['bord_left'] <= edge_node[1] < mesh.graph['bord_right']-1): - continue - mesh_neighbors = [*mesh.neighbors(edge_node)] - mesh_neighbors = [xx for xx in mesh_neighbors \ - if mesh.graph['bord_up'] < xx[0] < mesh.graph['bord_down'] - 1 and \ - mesh.graph['bord_left'] < xx[1] < mesh.graph['bord_right'] - 1] - if len([*mesh.neighbors(edge_node)]) >= 3: - continue - elif len([*mesh.neighbors(edge_node)]) <= 1: - mark[edge_node[0], edge_node[1]] += (len([*mesh.neighbors(edge_node)]) + 1) - else: - dan_ne_node_a = [*mesh.neighbors(edge_node)][0] - dan_ne_node_b = [*mesh.neighbors(edge_node)][1] - if abs(dan_ne_node_a[0] - dan_ne_node_b[0]) > 1 or \ - abs(dan_ne_node_a[1] - dan_ne_node_b[1]) > 1: - mark[edge_node[0], edge_node[1]] += 3 - mxs, mys = np.where(mark == 1) - conn_0_nodes = [(x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth']) for x in zip(mxs, mys) \ - if mesh.has_node((x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth']))] - mxs, mys = np.where(mark == 2) - conn_1_nodes = [(x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth']) for x in zip(mxs, mys) \ - if mesh.has_node((x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth']))] - for node in conn_0_nodes: - hx, hy = node[0], node[1] - four_nes = [(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] \ - if info_on_pix.get((x, y)) is not None] - re_depth = {'value' : 0, 'count': 0} - for ne in four_nes: - mesh.add_edge(node, ne) - re_depth['value'] += cc_node[2] - re_depth['count'] += 1. - re_depth = re_depth['value'] / re_depth['count'] - mapping_dict = {node: (node[0], node[1], re_depth)} - info_on_pix, mesh, edge_mesh = update_info(mapping_dict, info_on_pix, mesh, edge_mesh) - depth[node[0], node[1]] = abs(re_depth) - mark[node[0], node[1]] = 0 - for node in conn_1_nodes: - hx, hy = node[0], node[1] - eight_nes = set([(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)] \ - if info_on_pix.get((x, y)) is not None]) - self_nes = set([ne2 for ne1 in mesh.neighbors(node) for ne2 in mesh.neighbors(ne1) if ne2 in eight_nes]) - eight_nes = [*(eight_nes - self_nes)] - sub_mesh = mesh.subgraph(eight_nes).copy() - ccs = netx.connected_components(sub_mesh) - largest_cc = sorted(ccs, key=lambda x: (len(x), -np.sum([abs(xx[0] - node[0]) + abs(xx[1] - node[1]) for xx in x])))[-1] - - mesh.remove_edges_from([(xx, node) for xx in mesh.neighbors(node)]) - re_depth = {'value' : 0, 'count': 0} - for cc_node in largest_cc: - if cc_node[0] == node[0] and cc_node[1] == node[1]: - continue - re_depth['value'] += cc_node[2] - re_depth['count'] += 1. 
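                # Every member of the dominant neighbouring component adds its depth to the running
                # sum; members that are 4-adjacent to the dangling node are also re-linked to it just
                # below, and the node is then snapped to the mean of the accumulated depths (falling
                # back to its own depth if the component turned out to be empty).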
- if abs(cc_node[0] - node[0]) + abs(cc_node[1] - node[1]) < 2: - mesh.add_edge(cc_node, node) - try: - re_depth = re_depth['value'] / re_depth['count'] - except: - re_depth = node[2] - renode = (node[0], node[1], re_depth) - mapping_dict = {node: renode} - info_on_pix, mesh, edge_mesh = update_info(mapping_dict, info_on_pix, mesh, edge_mesh) - depth[node[0], node[1]] = abs(re_depth) - mark[node[0], node[1]] = 0 - edge_mesh, mesh, mark, info_on_pix = recursive_add_edge(edge_mesh, mesh, info_on_pix, renode, mark) - mxs, mys = np.where(mark == 3) - conn_2_nodes = [(x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth']) for x in zip(mxs, mys) \ - if mesh.has_node((x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth'])) and \ - mesh.degree((x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth'])) == 2] - sub_mesh = mesh.subgraph(conn_2_nodes).copy() - ccs = netx.connected_components(sub_mesh) - for cc in ccs: - candidate_nodes = [xx for xx in cc if sub_mesh.degree(xx) == 1] - for node in candidate_nodes: - if mesh.has_node(node) is False: - continue - ne_node = [xx for xx in mesh.neighbors(node) if xx not in cc][0] - hx, hy = node[0], node[1] - eight_nes = set([(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and (x, y, info_on_pix[(x, y)][0]['depth']) not in cc]) - ne_sub_mesh = mesh.subgraph(eight_nes).copy() - ne_ccs = netx.connected_components(ne_sub_mesh) - try: - ne_cc = [ne_cc for ne_cc in ne_ccs if ne_node in ne_cc][0] - except: - import pdb; pdb.set_trace() - largest_cc = [xx for xx in ne_cc if abs(xx[0] - node[0]) + abs(xx[1] - node[1]) == 1] - mesh.remove_edges_from([(xx, node) for xx in mesh.neighbors(node)]) - re_depth = {'value' : 0, 'count': 0} - for cc_node in largest_cc: - re_depth['value'] += cc_node[2] - re_depth['count'] += 1. - mesh.add_edge(cc_node, node) - try: - re_depth = re_depth['value'] / re_depth['count'] - except: - re_depth = node[2] - renode = (node[0], node[1], re_depth) - mapping_dict = {node: renode} - info_on_pix, mesh, edge_mesh = update_info(mapping_dict, info_on_pix, mesh, edge_mesh) - depth[node[0], node[1]] = abs(re_depth) - mark[node[0], node[1]] = 0 - edge_mesh, mesh, mark, info_on_pix = recursive_add_edge(edge_mesh, mesh, info_on_pix, renode, mark) - break - if len(cc) == 1: - node = [node for node in cc][0] - hx, hy = node[0], node[1] - nine_nes = set([(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx, hy), (hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and mesh.has_node((x, y, info_on_pix[(x, y)][0]['depth']))]) - ne_sub_mesh = mesh.subgraph(nine_nes).copy() - ne_ccs = netx.connected_components(ne_sub_mesh) - for ne_cc in ne_ccs: - if node in ne_cc: - re_depth = {'value' : 0, 'count': 0} - for ne in ne_cc: - if abs(ne[0] - node[0]) + abs(ne[1] - node[1]) == 1: - mesh.add_edge(node, ne) - re_depth['value'] += ne[2] - re_depth['count'] += 1. 
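                            # Only the 4-adjacent members of the component containing this node are
                            # reconnected and contribute to the mean depth assigned below; unlike the
                            # branch above there is no try/except guard here, so a component holding
                            # nothing but the node itself would raise a ZeroDivisionError.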
- re_depth = re_depth['value'] / re_depth['count'] - mapping_dict = {node: (node[0], node[1], re_depth)} - info_on_pix, mesh, edge_mesh = update_info(mapping_dict, info_on_pix, mesh, edge_mesh) - depth[node[0], node[1]] = abs(re_depth) - mark[node[0], node[1]] = 0 - - - return mesh, info_on_pix, edge_mesh, depth, mark - -def context_and_holes(mesh, edge_ccs, config, specific_edge_id, specific_edge_loc, depth_feat_model, - connect_points_ccs=None, inpaint_iter=0, filter_edge=False, vis_edge_id=None): - edge_maps = np.zeros((mesh.graph['H'], mesh.graph['W'])) - 1 - mask_info = {} - for edge_id, edge_cc in enumerate(edge_ccs): - for edge_node in edge_cc: - edge_maps[edge_node[0], edge_node[1]] = edge_id - - context_ccs = [set() for x in range(len(edge_ccs))] - extend_context_ccs = [set() for x in range(len(edge_ccs))] - extend_erode_context_ccs = [set() for x in range(len(edge_ccs))] - extend_edge_ccs = [set() for x in range(len(edge_ccs))] - accomp_extend_context_ccs = [set() for x in range(len(edge_ccs))] - erode_context_ccs = [set() for x in range(len(edge_ccs))] - broken_mask_ccs = [set() for x in range(len(edge_ccs))] - invalid_extend_edge_ccs = [set() for x in range(len(edge_ccs))] - intouched_ccs = [set() for x in range(len(edge_ccs))] - redundant_ccs = [set() for x in range(len(edge_ccs))] - if inpaint_iter == 0: - background_thickness = config['background_thickness'] - context_thickness = config['context_thickness'] - else: - background_thickness = config['background_thickness_2'] - context_thickness = config['context_thickness_2'] - - mesh_nodes = mesh.nodes - for edge_id, edge_cc in enumerate(edge_ccs): - if context_thickness == 0 or (len(specific_edge_id) > 0 and edge_id not in specific_edge_id): - continue - edge_group = {} - for edge_node in edge_cc: - far_nodes = mesh_nodes[edge_node].get('far') - if far_nodes is None: - continue - for far_node in far_nodes: - if far_node in edge_cc: - continue - context_ccs[edge_id].add(far_node) - if mesh_nodes[far_node].get('edge_id') is not None: - if edge_group.get(mesh_nodes[far_node]['edge_id']) is None: - edge_group[mesh_nodes[far_node]['edge_id']] = set() - edge_group[mesh_nodes[far_node]['edge_id']].add(far_node) - if len(edge_cc) > 2: - for edge_key in [*edge_group.keys()]: - if len(edge_group[edge_key]) == 1: - context_ccs[edge_id].remove([*edge_group[edge_key]][0]) - for edge_id, edge_cc in enumerate(edge_ccs): - if inpaint_iter != 0: - continue - tmp_intouched_nodes = set() - for edge_node in edge_cc: - raw_intouched_nodes = set(mesh_nodes[edge_node].get('near')) if mesh_nodes[edge_node].get('near') is not None else set() - tmp_intouched_nodes |= set([xx for xx in raw_intouched_nodes if mesh_nodes[xx].get('edge_id') is not None and \ - len(context_ccs[mesh_nodes[xx].get('edge_id')]) > 0]) - intouched_ccs[edge_id] |= tmp_intouched_nodes - tmp_intouched_nodes = None - mask_ccs = copy.deepcopy(edge_ccs) - forbidden_len = 3 - forbidden_map = np.ones((mesh.graph['H'] - forbidden_len, mesh.graph['W'] - forbidden_len)) - forbidden_map = np.pad(forbidden_map, ((forbidden_len, forbidden_len), (forbidden_len, forbidden_len)), mode='constant').astype(np.bool) - cur_tmp_mask_map = np.zeros_like(forbidden_map).astype(np.bool) - passive_background = 10 if 10 is not None else background_thickness - passive_context = 1 if 1 is not None else context_thickness - - for edge_id, edge_cc in enumerate(edge_ccs): - cur_mask_cc = None; cur_mask_cc = [] - cur_context_cc = None; cur_context_cc = [] - cur_accomp_near_cc = None; cur_accomp_near_cc = [] - 
cur_invalid_extend_edge_cc = None; cur_invalid_extend_edge_cc = [] - cur_comp_far_cc = None; cur_comp_far_cc = [] - tmp_erode = [] - if len(context_ccs[edge_id]) == 0 or (len(specific_edge_id) > 0 and edge_id not in specific_edge_id): - continue - for i in range(max(background_thickness, context_thickness)): - cur_tmp_mask_map.fill(False) - if i == 0: - tmp_mask_nodes = copy.deepcopy(mask_ccs[edge_id]) - tmp_intersect_nodes = [] - tmp_intersect_context_nodes = [] - mask_map = np.zeros((mesh.graph['H'], mesh.graph['W']), dtype=np.bool) - context_depth = np.zeros((mesh.graph['H'], mesh.graph['W'])) - comp_cnt_depth = np.zeros((mesh.graph['H'], mesh.graph['W'])) - connect_map = np.zeros((mesh.graph['H'], mesh.graph['W'])) - for node in tmp_mask_nodes: - mask_map[node[0], node[1]] = True - depth_count = 0 - if mesh_nodes[node].get('far') is not None: - for comp_cnt_node in mesh_nodes[node]['far']: - comp_cnt_depth[node[0], node[1]] += abs(comp_cnt_node[2]) - depth_count += 1 - if depth_count > 0: - comp_cnt_depth[node[0], node[1]] = comp_cnt_depth[node[0], node[1]] / depth_count - connect_node = [] - if mesh_nodes[node].get('connect_point_id') is not None: - connect_node.append(mesh_nodes[node]['connect_point_id']) - connect_point_id = np.bincount(connect_node).argmax() if len(connect_node) > 0 else -1 - if connect_point_id > -1 and connect_points_ccs is not None: - for xx in connect_points_ccs[connect_point_id]: - if connect_map[xx[0], xx[1]] == 0: - connect_map[xx[0], xx[1]] = xx[2] - if mesh_nodes[node].get('connect_point_exception') is not None: - for xx in mesh_nodes[node]['connect_point_exception']: - if connect_map[xx[0], xx[1]] == 0: - connect_map[xx[0], xx[1]] = xx[2] - tmp_context_nodes = [*context_ccs[edge_id]] - tmp_erode.append([*context_ccs[edge_id]]) - context_map = np.zeros((mesh.graph['H'], mesh.graph['W']), dtype=np.bool) - if (context_map.astype(np.uint8) * mask_map.astype(np.uint8)).max() > 0: - import pdb; pdb.set_trace() - for node in tmp_context_nodes: - context_map[node[0], node[1]] = True - context_depth[node[0], node[1]] = node[2] - context_map[mask_map == True] = False - if (context_map.astype(np.uint8) * mask_map.astype(np.uint8)).max() > 0: - import pdb; pdb.set_trace() - tmp_intouched_nodes = [*intouched_ccs[edge_id]] - intouched_map = np.zeros((mesh.graph['H'], mesh.graph['W']), dtype=np.bool) - for node in tmp_intouched_nodes: intouched_map[node[0], node[1]] = True - intouched_map[mask_map == True] = False - tmp_redundant_nodes = set() - tmp_noncont_nodes = set() - noncont_map = np.zeros((mesh.graph['H'], mesh.graph['W']), dtype=np.bool) - intersect_map = np.zeros((mesh.graph['H'], mesh.graph['W']), dtype=np.bool) - intersect_context_map = np.zeros((mesh.graph['H'], mesh.graph['W']), dtype=np.bool) - if i > passive_background and inpaint_iter == 0: - new_tmp_intersect_nodes = None - new_tmp_intersect_nodes = [] - for node in tmp_intersect_nodes: - nes = mesh.neighbors(node) - for ne in nes: - if bool(context_map[ne[0], ne[1]]) is False and \ - bool(mask_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True and \ - bool(intouched_map[ne[0], ne[1]]) is False and\ - bool(intersect_map[ne[0], ne[1]]) is False and\ - bool(intersect_context_map[ne[0], ne[1]]) is False: - break_flag = False - if (i - passive_background) % 2 == 0 and (i - passive_background) % 8 != 0: - four_nes = [xx for xx in[[ne[0] - 1, ne[1]], [ne[0] + 1, ne[1]], [ne[0], ne[1] - 1], [ne[0], ne[1] + 1]] \ - if 0 <= xx[0] < mesh.graph['H'] and 0 <= xx[1] < mesh.graph['W']] - 
for fne in four_nes: - if bool(mask_map[fne[0], fne[1]]) is True: - break_flag = True - break - if break_flag is True: - continue - intersect_map[ne[0], ne[1]] = True - new_tmp_intersect_nodes.append(ne) - tmp_intersect_nodes = None - tmp_intersect_nodes = new_tmp_intersect_nodes - - if i > passive_context and inpaint_iter == 1: - new_tmp_intersect_context_nodes = None - new_tmp_intersect_context_nodes = [] - for node in tmp_intersect_context_nodes: - nes = mesh.neighbors(node) - for ne in nes: - if bool(context_map[ne[0], ne[1]]) is False and \ - bool(mask_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True and \ - bool(intouched_map[ne[0], ne[1]]) is False and\ - bool(intersect_map[ne[0], ne[1]]) is False and \ - bool(intersect_context_map[ne[0], ne[1]]) is False: - intersect_context_map[ne[0], ne[1]] = True - new_tmp_intersect_context_nodes.append(ne) - tmp_intersect_context_nodes = None - tmp_intersect_context_nodes = new_tmp_intersect_context_nodes - - new_tmp_mask_nodes = None - new_tmp_mask_nodes = [] - for node in tmp_mask_nodes: - four_nes = {xx:[] for xx in [(node[0] - 1, node[1]), (node[0] + 1, node[1]), (node[0], node[1] - 1), (node[0], node[1] + 1)] if \ - 0 <= xx[0] < connect_map.shape[0] and 0 <= xx[1] < connect_map.shape[1]} - if inpaint_iter > 0: - for ne in four_nes.keys(): - if connect_map[ne[0], ne[1]] == True: - tmp_context_nodes.append((ne[0], ne[1], connect_map[ne[0], ne[1]])) - context_map[ne[0], ne[1]] = True - nes = mesh.neighbors(node) - if inpaint_iter > 0: - for ne in nes: four_nes[(ne[0], ne[1])].append(ne[2]) - nes = [] - for kfne, vfnes in four_nes.items(): vfnes.sort(key = lambda xx: abs(xx), reverse=True) - for kfne, vfnes in four_nes.items(): - for vfne in vfnes: nes.append((kfne[0], kfne[1], vfne)) - for ne in nes: - if bool(context_map[ne[0], ne[1]]) is False and \ - bool(mask_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True and \ - bool(intouched_map[ne[0], ne[1]]) is False and \ - bool(intersect_map[ne[0], ne[1]]) is False and \ - bool(intersect_context_map[ne[0], ne[1]]) is False: - if i == passive_background and inpaint_iter == 0: - if np.any(context_map[max(ne[0] - 1, 0):min(ne[0] + 2, mesh.graph['H']), max(ne[1] - 1, 0):min(ne[1] + 2, mesh.graph['W'])]) == True: - intersect_map[ne[0], ne[1]] = True - tmp_intersect_nodes.append(ne) - continue - if i < background_thickness: - if inpaint_iter == 0: - cur_mask_cc.append(ne) - elif mesh_nodes[ne].get('inpaint_id') == 1: - cur_mask_cc.append(ne) - else: - continue - mask_ccs[edge_id].add(ne) - if inpaint_iter == 0: - if comp_cnt_depth[node[0], node[1]] > 0 and comp_cnt_depth[ne[0], ne[1]] == 0: - comp_cnt_depth[ne[0], ne[1]] = comp_cnt_depth[node[0], node[1]] - if mesh_nodes[ne].get('far') is not None: - for comp_far_node in mesh_nodes[ne]['far']: - cur_comp_far_cc.append(comp_far_node) - cur_accomp_near_cc.append(ne) - cur_invalid_extend_edge_cc.append(comp_far_node) - if mesh_nodes[ne].get('edge_id') is not None and \ - len(context_ccs[mesh_nodes[ne].get('edge_id')]) > 0: - intouched_fars = set(mesh_nodes[ne].get('far')) if mesh_nodes[ne].get('far') is not None else set() - accum_intouched_fars = set(intouched_fars) - for intouched_far in intouched_fars: - accum_intouched_fars |= set([*mesh.neighbors(intouched_far)]) - for intouched_far in accum_intouched_fars: - if bool(mask_map[intouched_far[0], intouched_far[1]]) is True or \ - bool(context_map[intouched_far[0], intouched_far[1]]) is True: - continue - tmp_redundant_nodes.add(intouched_far) - 
intouched_map[intouched_far[0], intouched_far[1]] = True - if mesh_nodes[ne].get('near') is not None: - intouched_nears = set(mesh_nodes[ne].get('near')) - for intouched_near in intouched_nears: - if bool(mask_map[intouched_near[0], intouched_near[1]]) is True or \ - bool(context_map[intouched_near[0], intouched_near[1]]) is True: - continue - tmp_redundant_nodes.add(intouched_near) - intouched_map[intouched_near[0], intouched_near[1]] = True - if not (mesh_nodes[ne].get('inpaint_id') != 1 and inpaint_iter == 1): - new_tmp_mask_nodes.append(ne) - mask_map[ne[0], ne[1]] = True - tmp_mask_nodes = new_tmp_mask_nodes - - new_tmp_context_nodes = None - new_tmp_context_nodes = [] - for node in tmp_context_nodes: - nes = mesh.neighbors(node) - if inpaint_iter > 0: - four_nes = {(node[0] - 1, node[1]):[], (node[0] + 1, node[1]):[], (node[0], node[1] - 1):[], (node[0], node[1] + 1):[]} - for ne in nes: four_nes[(ne[0], ne[1])].append(ne[2]) - nes = [] - for kfne, vfnes in four_nes.items(): vfnes.sort(key = lambda xx: abs(xx), reverse=True) - for kfne, vfnes in four_nes.items(): - for vfne in vfnes: nes.append((kfne[0], kfne[1], vfne)) - for ne in nes: - mask_flag = (bool(mask_map[ne[0], ne[1]]) is False) - if bool(context_map[ne[0], ne[1]]) is False and mask_flag and \ - bool(forbidden_map[ne[0], ne[1]]) is True and bool(noncont_map[ne[0], ne[1]]) is False and \ - bool(intersect_context_map[ne[0], ne[1]]) is False: - if i == passive_context and inpaint_iter == 1: - mnes = mesh.neighbors(ne) - if any([mask_map[mne[0], mne[1]] == True for mne in mnes]) is True: - intersect_context_map[ne[0], ne[1]] = True - tmp_intersect_context_nodes.append(ne) - continue - if False and mesh_nodes[ne].get('near') is not None and mesh_nodes[ne].get('edge_id') != edge_id: - noncont_nears = set(mesh_nodes[ne].get('near')) - for noncont_near in noncont_nears: - if bool(context_map[noncont_near[0], noncont_near[1]]) is False: - tmp_noncont_nodes.add(noncont_near) - noncont_map[noncont_near[0], noncont_near[1]] = True - new_tmp_context_nodes.append(ne) - context_map[ne[0], ne[1]] = True - context_depth[ne[0], ne[1]] = ne[2] - cur_context_cc.extend(new_tmp_context_nodes) - tmp_erode.append(new_tmp_context_nodes) - tmp_context_nodes = None - tmp_context_nodes = new_tmp_context_nodes - new_tmp_intouched_nodes = None; new_tmp_intouched_nodes = [] - - for node in tmp_intouched_nodes: - if bool(context_map[node[0], node[1]]) is True or bool(mask_map[node[0], node[1]]) is True: - continue - nes = mesh.neighbors(node) - - for ne in nes: - if bool(context_map[ne[0], ne[1]]) is False and \ - bool(mask_map[ne[0], ne[1]]) is False and \ - bool(intouched_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True: - new_tmp_intouched_nodes.append(ne) - intouched_map[ne[0], ne[1]] = True - tmp_intouched_nodes = None - tmp_intouched_nodes = set(new_tmp_intouched_nodes) - new_tmp_redundant_nodes = None; new_tmp_redundant_nodes = [] - for node in tmp_redundant_nodes: - if bool(context_map[node[0], node[1]]) is True or \ - bool(mask_map[node[0], node[1]]) is True: - continue - nes = mesh.neighbors(node) - - for ne in nes: - if bool(context_map[ne[0], ne[1]]) is False and \ - bool(mask_map[ne[0], ne[1]]) is False and \ - bool(intouched_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True: - new_tmp_redundant_nodes.append(ne) - intouched_map[ne[0], ne[1]] = True - tmp_redundant_nodes = None - tmp_redundant_nodes = set(new_tmp_redundant_nodes) - new_tmp_noncont_nodes = None; new_tmp_noncont_nodes = 
[] - for node in tmp_noncont_nodes: - if bool(context_map[node[0], node[1]]) is True or \ - bool(mask_map[node[0], node[1]]) is True: - continue - nes = mesh.neighbors(node) - rmv_flag = False - for ne in nes: - if bool(context_map[ne[0], ne[1]]) is False and \ - bool(mask_map[ne[0], ne[1]]) is False and \ - bool(noncont_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True: - patch_context_map = context_map[max(ne[0] - 1, 0):min(ne[0] + 2, context_map.shape[0]), - max(ne[1] - 1, 0):min(ne[1] + 2, context_map.shape[1])] - if bool(np.any(patch_context_map)) is True: - new_tmp_noncont_nodes.append(ne) - noncont_map[ne[0], ne[1]] = True - tmp_noncont_nodes = None - tmp_noncont_nodes = set(new_tmp_noncont_nodes) - if inpaint_iter == 0: - depth_dict = get_depth_from_maps(context_map, mask_map, context_depth, mesh.graph['H'], mesh.graph['W'], log_depth=config['log_depth']) - mask_size = get_valid_size(depth_dict['mask']) - mask_size = dilate_valid_size(mask_size, depth_dict['mask'], dilate=[20, 20]) - context_size = get_valid_size(depth_dict['context']) - context_size = dilate_valid_size(context_size, depth_dict['context'], dilate=[20, 20]) - union_size = size_operation(mask_size, context_size, operation='+') - depth_dict = depth_inpainting(None, None, None, None, mesh, config, union_size, depth_feat_model, None, given_depth_dict=depth_dict, spdb=False) - near_depth_map, raw_near_depth_map = np.zeros((mesh.graph['H'], mesh.graph['W'])), np.zeros((mesh.graph['H'], mesh.graph['W'])) - filtered_comp_far_cc, filtered_accomp_near_cc = set(), set() - for node in cur_accomp_near_cc: - near_depth_map[node[0], node[1]] = depth_dict['output'][node[0], node[1]] - raw_near_depth_map[node[0], node[1]] = node[2] - for node in cur_comp_far_cc: - four_nes = [xx for xx in [(node[0] - 1, node[1]), (node[0] + 1, node[1]), (node[0], node[1] - 1), (node[0], node[1] + 1)] \ - if 0 <= xx[0] < mesh.graph['H'] and 0 <= xx[1] < mesh.graph['W'] and \ - near_depth_map[xx[0], xx[1]] != 0 and \ - abs(near_depth_map[xx[0], xx[1]]) < abs(node[2])] - if len(four_nes) > 0: - filtered_comp_far_cc.add(node) - for ne in four_nes: - filtered_accomp_near_cc.add((ne[0], ne[1], -abs(raw_near_depth_map[ne[0], ne[1]]))) - cur_comp_far_cc, cur_accomp_near_cc = filtered_comp_far_cc, filtered_accomp_near_cc - mask_ccs[edge_id] |= set(cur_mask_cc) - context_ccs[edge_id] |= set(cur_context_cc) - accomp_extend_context_ccs[edge_id] |= set(cur_accomp_near_cc).intersection(cur_mask_cc) - extend_edge_ccs[edge_id] |= set(cur_accomp_near_cc).intersection(cur_mask_cc) - extend_context_ccs[edge_id] |= set(cur_comp_far_cc) - invalid_extend_edge_ccs[edge_id] |= set(cur_invalid_extend_edge_cc) - erode_size = [0] - for tmp in tmp_erode: - erode_size.append(len(tmp)) - if len(erode_size) > 1: - erode_size[-1] += erode_size[-2] - if inpaint_iter == 0: - tmp_width = config['depth_edge_dilate'] - else: - tmp_width = 0 - while float(erode_size[tmp_width]) / (erode_size[-1] + 1e-6) > 0.3: - tmp_width = tmp_width - 1 - try: - if tmp_width == 0: - erode_context_ccs[edge_id] = set([]) - else: - erode_context_ccs[edge_id] = set(reduce(lambda x, y : x + y, [] + tmp_erode[:tmp_width])) - except: - import pdb; pdb.set_trace() - erode_context_cc = copy.deepcopy(erode_context_ccs[edge_id]) - for erode_context_node in erode_context_cc: - if (inpaint_iter != 0 and (mesh_nodes[erode_context_node].get('inpaint_id') is None or - mesh_nodes[erode_context_node].get('inpaint_id') == 0)): - erode_context_ccs[edge_id].remove(erode_context_node) - else: - 
context_ccs[edge_id].remove(erode_context_node) - context_map = np.zeros((mesh.graph['H'], mesh.graph['W'])) - for context_node in context_ccs[edge_id]: - context_map[context_node[0], context_node[1]] = 1 - extend_context_ccs[edge_id] = extend_context_ccs[edge_id] - mask_ccs[edge_id] - accomp_extend_context_ccs[edge_id] - if inpaint_iter == 0: - all_ecnt_cc = set() - for ecnt_id, ecnt_cc in enumerate(extend_context_ccs): - constraint_context_ids = set() - constraint_context_cc = set() - constraint_erode_context_cc = set() - tmp_mask_cc = set() - accum_context_cc = None; accum_context_cc = [] - for ecnt_node in accomp_extend_context_ccs[ecnt_id]: - if edge_maps[ecnt_node[0], ecnt_node[1]] > -1: - constraint_context_ids.add(int(round(edge_maps[ecnt_node[0], ecnt_node[1]]))) - constraint_erode_context_cc = erode_context_ccs[ecnt_id] - for constraint_context_id in constraint_context_ids: - constraint_context_cc = constraint_context_cc | context_ccs[constraint_context_id] | erode_context_ccs[constraint_context_id] - constraint_erode_context_cc = constraint_erode_context_cc | erode_context_ccs[constraint_context_id] - for i in range(background_thickness): - if i == 0: - tmp_context_nodes = copy.deepcopy(ecnt_cc) - tmp_invalid_context_nodes = copy.deepcopy(invalid_extend_edge_ccs[ecnt_id]) - tmp_mask_nodes = copy.deepcopy(accomp_extend_context_ccs[ecnt_id]) - tmp_context_map = np.zeros((mesh.graph['H'], mesh.graph['W'])).astype(np.bool) - tmp_mask_map = np.zeros((mesh.graph['H'], mesh.graph['W'])).astype(np.bool) - tmp_invalid_context_map = np.zeros((mesh.graph['H'], mesh.graph['W'])).astype(np.bool) - for node in tmp_mask_nodes: - tmp_mask_map[node[0], node[1]] = True - for node in context_ccs[ecnt_id]: - tmp_context_map[node[0], node[1]] = True - for node in erode_context_ccs[ecnt_id]: - tmp_context_map[node[0], node[1]] = True - for node in extend_context_ccs[ecnt_id]: - tmp_context_map[node[0], node[1]] = True - for node in invalid_extend_edge_ccs[ecnt_id]: - tmp_invalid_context_map[node[0], node[1]] = True - init_invalid_context_map = tmp_invalid_context_map.copy() - init_context_map = tmp - if (tmp_mask_map.astype(np.uint8) * tmp_context_map.astype(np.uint8)).max() > 0: - import pdb; pdb.set_trace() - if vis_edge_id is not None and ecnt_id == vis_edge_id: - f, ((ax1, ax2)) = plt.subplots(1, 2, sharex=True, sharey=True) - ax1.imshow(tmp_context_map * 1); ax2.imshow(init_invalid_context_map * 1 + tmp_context_map * 2) - plt.show() - import pdb; pdb.set_trace() - else: - tmp_context_nodes = new_tmp_context_nodes - new_tmp_context_nodes = None - tmp_mask_nodes = new_tmp_mask_nodes - new_tmp_mask_nodes = None - tmp_invalid_context_nodes = new_tmp_invalid_context_nodes - new_tmp_invalid_context_nodes = None - new_tmp_context_nodes = None - new_tmp_context_nodes = [] - new_tmp_invalid_context_nodes = None - new_tmp_invalid_context_nodes = [] - new_tmp_mask_nodes = set([]) - for node in tmp_context_nodes: - for ne in mesh.neighbors(node): - if ne in constraint_context_cc and \ - bool(tmp_mask_map[ne[0], ne[1]]) is False and \ - bool(tmp_context_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True: - new_tmp_context_nodes.append(ne) - tmp_context_map[ne[0], ne[1]] = True - accum_context_cc.extend(new_tmp_context_nodes) - for node in tmp_invalid_context_nodes: - for ne in mesh.neighbors(node): - if bool(tmp_mask_map[ne[0], ne[1]]) is False and \ - bool(tmp_context_map[ne[0], ne[1]]) is False and \ - bool(tmp_invalid_context_map[ne[0], ne[1]]) is False and \ - 
bool(forbidden_map[ne[0], ne[1]]) is True: - tmp_invalid_context_map[ne[0], ne[1]] = True - new_tmp_invalid_context_nodes.append(ne) - for node in tmp_mask_nodes: - for ne in mesh.neighbors(node): - if bool(tmp_mask_map[ne[0], ne[1]]) is False and \ - bool(tmp_context_map[ne[0], ne[1]]) is False and \ - bool(tmp_invalid_context_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True: - new_tmp_mask_nodes.add(ne) - tmp_mask_map[ne[0], ne[1]] = True - init_invalid_context_map[tmp_context_map] = False - _, tmp_label_map = cv2.connectedComponents((init_invalid_context_map | tmp_context_map).astype(np.uint8), connectivity=8) - tmp_label_ids = set(np.unique(tmp_label_map[init_invalid_context_map])) - if (tmp_mask_map.astype(np.uint8) * tmp_context_map.astype(np.uint8)).max() > 0: - import pdb; pdb.set_trace() - if vis_edge_id is not None and ecnt_id == vis_edge_id: - f, ((ax1, ax2)) = plt.subplots(1, 2, sharex=True, sharey=True) - ax1.imshow(tmp_label_map); ax2.imshow(init_invalid_context_map * 1 + tmp_context_map * 2) - plt.show() - import pdb; pdb.set_trace() - extend_context_ccs[ecnt_id] |= set(accum_context_cc) - extend_context_ccs[ecnt_id] = extend_context_ccs[ecnt_id] - mask_ccs[ecnt_id] - extend_erode_context_ccs[ecnt_id] = extend_context_ccs[ecnt_id] & constraint_erode_context_cc - extend_context_ccs[ecnt_id] = extend_context_ccs[ecnt_id] - extend_erode_context_ccs[ecnt_id] - erode_context_ccs[ecnt_id] - tmp_context_cc = context_ccs[ecnt_id] - extend_erode_context_ccs[ecnt_id] - erode_context_ccs[ecnt_id] - if len(tmp_context_cc) > 0: - context_ccs[ecnt_id] = tmp_context_cc - tmp_mask_cc = tmp_mask_cc - context_ccs[ecnt_id] - erode_context_ccs[ecnt_id] - mask_ccs[ecnt_id] = mask_ccs[ecnt_id] | tmp_mask_cc - - return context_ccs, mask_ccs, broken_mask_ccs, edge_ccs, erode_context_ccs, invalid_extend_edge_ccs, edge_maps, extend_context_ccs, extend_edge_ccs, extend_erode_context_ccs - -def DL_inpaint_edge(mesh, - info_on_pix, - config, - image, - depth, - context_ccs, - erode_context_ccs, - extend_context_ccs, - extend_erode_context_ccs, - mask_ccs, - broken_mask_ccs, - edge_ccs, - extend_edge_ccs, - init_mask_connect, - edge_maps, - rgb_model=None, - depth_edge_model=None, - depth_edge_model_init=None, - depth_feat_model=None, - specific_edge_id=-1, - specific_edge_loc=None, - inpaint_iter=0): - - if isinstance(config["gpu_ids"], int) and (config["gpu_ids"] >= 0): - device = config["gpu_ids"] - else: - device = "cpu" - - edge_map = np.zeros_like(depth) - new_edge_ccs = [set() for _ in range(len(edge_ccs))] - edge_maps_with_id = edge_maps - edge_condition = lambda x, m: m.nodes[x].get('far') is not None and len(m.nodes[x].get('far')) > 0 - edge_map = get_map_from_ccs(edge_ccs, mesh.graph['H'], mesh.graph['W'], mesh, edge_condition) - np_depth, np_image = depth.copy(), image.copy() - image_c = image.shape[-1] - image = torch.FloatTensor(image.transpose(2, 0, 1)).unsqueeze(0).to(device) - if depth.ndim < 3: - depth = depth[..., None] - depth = torch.FloatTensor(depth.transpose(2, 0, 1)).unsqueeze(0).to(device) - mesh.graph['max_edge_id'] = len(edge_ccs) - connnect_points_ccs = [set() for _ in range(len(edge_ccs))] - gp_time, tmp_mesh_time, bilateral_time = 0, 0, 0 - edges_infos = dict() - edges_in_mask = [set() for _ in range(len(edge_ccs))] - tmp_specific_edge_id = [] - for edge_id, (context_cc, mask_cc, erode_context_cc, extend_context_cc, edge_cc) in enumerate(zip(context_ccs, mask_ccs, erode_context_ccs, extend_context_ccs, edge_ccs)): - if len(specific_edge_id) > 0: 
- if edge_id not in specific_edge_id: - continue - if len(context_cc) < 1 or len(mask_cc) < 1: - continue - edge_dict = get_edge_from_nodes(context_cc | extend_context_cc, erode_context_cc | extend_erode_context_ccs[edge_id], mask_cc, edge_cc, extend_edge_ccs[edge_id], - mesh.graph['H'], mesh.graph['W'], mesh) - edge_dict['edge'], end_depth_maps, _ = \ - filter_irrelevant_edge_new(edge_dict['self_edge'], edge_dict['comp_edge'], - edge_map, - edge_maps_with_id, - edge_id, - edge_dict['context'], - edge_dict['depth'], mesh, context_cc | erode_context_cc | extend_context_cc | extend_erode_context_ccs[edge_id], spdb=False) - if specific_edge_loc is not None and \ - (specific_edge_loc is not None and edge_dict['mask'][specific_edge_loc[0], specific_edge_loc[1]] == 0): - continue - mask_size = get_valid_size(edge_dict['mask']) - mask_size = dilate_valid_size(mask_size, edge_dict['mask'], dilate=[20, 20]) - context_size = get_valid_size(edge_dict['context']) - context_size = dilate_valid_size(context_size, edge_dict['context'], dilate=[20, 20]) - union_size = size_operation(mask_size, context_size, operation='+') - patch_edge_dict = dict() - patch_edge_dict['mask'], patch_edge_dict['context'], patch_edge_dict['rgb'], \ - patch_edge_dict['disp'], patch_edge_dict['edge'] = \ - crop_maps_by_size(union_size, edge_dict['mask'], edge_dict['context'], - edge_dict['rgb'], edge_dict['disp'], edge_dict['edge']) - x_anchor, y_anchor = [union_size['x_min'], union_size['x_max']], [union_size['y_min'], union_size['y_max']] - tensor_edge_dict = convert2tensor(patch_edge_dict) - input_edge_feat = torch.cat((tensor_edge_dict['rgb'], - tensor_edge_dict['disp'], - tensor_edge_dict['edge'], - 1 - tensor_edge_dict['context'], - tensor_edge_dict['mask']), dim=1) - if require_depth_edge(patch_edge_dict['edge'], patch_edge_dict['mask']) and inpaint_iter == 0: - with torch.no_grad(): - depth_edge_output = depth_edge_model.forward_3P(tensor_edge_dict['mask'], - tensor_edge_dict['context'], - tensor_edge_dict['rgb'], - tensor_edge_dict['disp'], - tensor_edge_dict['edge'], - unit_length=128, - cuda=device) - depth_edge_output = depth_edge_output.cpu() - tensor_edge_dict['output'] = (depth_edge_output> config['ext_edge_threshold']).float() * tensor_edge_dict['mask'] + tensor_edge_dict['edge'] - else: - tensor_edge_dict['output'] = tensor_edge_dict['edge'] - depth_edge_output = tensor_edge_dict['edge'] + 0 - patch_edge_dict['output'] = tensor_edge_dict['output'].squeeze().data.cpu().numpy() - edge_dict['output'] = np.zeros((mesh.graph['H'], mesh.graph['W'])) - edge_dict['output'][union_size['x_min']:union_size['x_max'], union_size['y_min']:union_size['y_max']] = \ - patch_edge_dict['output'] - if require_depth_edge(patch_edge_dict['edge'], patch_edge_dict['mask']) and inpaint_iter == 0: - if ((depth_edge_output> config['ext_edge_threshold']).float() * tensor_edge_dict['mask']).max() > 0: - try: - edge_dict['fpath_map'], edge_dict['npath_map'], break_flag, npaths, fpaths, invalid_edge_id = \ - clean_far_edge_new(edge_dict['output'], end_depth_maps, edge_dict['mask'], edge_dict['context'], mesh, info_on_pix, edge_dict['self_edge'], inpaint_iter, config) - except: - import pdb; pdb.set_trace() - pre_npath_map = edge_dict['npath_map'].copy() - if config.get('repeat_inpaint_edge') is True: - for _ in range(2): - tmp_input_edge = ((edge_dict['npath_map'] > -1) + edge_dict['edge']).clip(0, 1) - patch_tmp_input_edge = crop_maps_by_size(union_size, tmp_input_edge)[0] - tensor_input_edge = torch.FloatTensor(patch_tmp_input_edge)[None, 
None, ...] - depth_edge_output = depth_edge_model.forward_3P(tensor_edge_dict['mask'], - tensor_edge_dict['context'], - tensor_edge_dict['rgb'], - tensor_edge_dict['disp'], - tensor_input_edge, - unit_length=128, - cuda=device) - depth_edge_output = depth_edge_output.cpu() - depth_edge_output = (depth_edge_output> config['ext_edge_threshold']).float() * tensor_edge_dict['mask'] + tensor_edge_dict['edge'] - depth_edge_output = depth_edge_output.squeeze().data.cpu().numpy() - full_depth_edge_output = np.zeros((mesh.graph['H'], mesh.graph['W'])) - full_depth_edge_output[union_size['x_min']:union_size['x_max'], union_size['y_min']:union_size['y_max']] = \ - depth_edge_output - edge_dict['fpath_map'], edge_dict['npath_map'], break_flag, npaths, fpaths, invalid_edge_id = \ - clean_far_edge_new(full_depth_edge_output, end_depth_maps, edge_dict['mask'], edge_dict['context'], mesh, info_on_pix, edge_dict['self_edge'], inpaint_iter, config) - for nid in npaths.keys(): - npath, fpath = npaths[nid], fpaths[nid] - start_mx, start_my, end_mx, end_my = -1, -1, -1, -1 - if end_depth_maps[npath[0][0], npath[0][1]] != 0: - start_mx, start_my = npath[0][0], npath[0][1] - if end_depth_maps[npath[-1][0], npath[-1][1]] != 0: - end_mx, end_my = npath[-1][0], npath[-1][1] - if start_mx == -1: - import pdb; pdb.set_trace() - valid_end_pt = () if end_mx == -1 else (end_mx, end_my, info_on_pix[(end_mx, end_my)][0]['depth']) - new_edge_info = dict(fpath=fpath, - npath=npath, - cont_end_pts=valid_end_pt, - mask_id=edge_id, - comp_edge_id=nid, - depth=end_depth_maps[start_mx, start_my]) - if edges_infos.get((start_mx, start_my)) is None: - edges_infos[(start_mx, start_my)] = [] - edges_infos[(start_mx, start_my)].append(new_edge_info) - edges_in_mask[edge_id].add((start_mx, start_my)) - if len(valid_end_pt) > 0: - new_edge_info = dict(fpath=fpath[::-1], - npath=npath[::-1], - cont_end_pts=(start_mx, start_my, info_on_pix[(start_mx, start_my)][0]['depth']), - mask_id=edge_id, - comp_edge_id=nid, - depth=end_depth_maps[end_mx, end_my]) - if edges_infos.get((end_mx, end_my)) is None: - edges_infos[(end_mx, end_my)] = [] - edges_infos[(end_mx, end_my)].append(new_edge_info) - edges_in_mask[edge_id].add((end_mx, end_my)) - for edge_id, (context_cc, mask_cc, erode_context_cc, extend_context_cc, edge_cc) in enumerate(zip(context_ccs, mask_ccs, erode_context_ccs, extend_context_ccs, edge_ccs)): - if len(specific_edge_id) > 0: - if edge_id not in specific_edge_id: - continue - if len(context_cc) < 1 or len(mask_cc) < 1: - continue - edge_dict = get_edge_from_nodes(context_cc | extend_context_cc, erode_context_cc | extend_erode_context_ccs[edge_id], mask_cc, edge_cc, extend_edge_ccs[edge_id], - mesh.graph['H'], mesh.graph['W'], mesh) - if specific_edge_loc is not None and \ - (specific_edge_loc is not None and edge_dict['mask'][specific_edge_loc[0], specific_edge_loc[1]] == 0): - continue - else: - tmp_specific_edge_id.append(edge_id) - edge_dict['edge'], end_depth_maps, _ = \ - filter_irrelevant_edge_new(edge_dict['self_edge'], edge_dict['comp_edge'], - edge_map, - edge_maps_with_id, - edge_id, - edge_dict['context'], - edge_dict['depth'], mesh, context_cc | erode_context_cc | extend_context_cc | extend_erode_context_ccs[edge_id], spdb=False) - discard_map = np.zeros_like(edge_dict['edge']) - mask_size = get_valid_size(edge_dict['mask']) - mask_size = dilate_valid_size(mask_size, edge_dict['mask'], dilate=[20, 20]) - context_size = get_valid_size(edge_dict['context']) - context_size = dilate_valid_size(context_size, 
edge_dict['context'], dilate=[20, 20]) - union_size = size_operation(mask_size, context_size, operation='+') - patch_edge_dict = dict() - patch_edge_dict['mask'], patch_edge_dict['context'], patch_edge_dict['rgb'], \ - patch_edge_dict['disp'], patch_edge_dict['edge'] = \ - crop_maps_by_size(union_size, edge_dict['mask'], edge_dict['context'], - edge_dict['rgb'], edge_dict['disp'], edge_dict['edge']) - x_anchor, y_anchor = [union_size['x_min'], union_size['x_max']], [union_size['y_min'], union_size['y_max']] - tensor_edge_dict = convert2tensor(patch_edge_dict) - input_edge_feat = torch.cat((tensor_edge_dict['rgb'], - tensor_edge_dict['disp'], - tensor_edge_dict['edge'], - 1 - tensor_edge_dict['context'], - tensor_edge_dict['mask']), dim=1) - edge_dict['output'] = edge_dict['edge'].copy() - - if require_depth_edge(patch_edge_dict['edge'], patch_edge_dict['mask']) and inpaint_iter == 0: - edge_dict['fpath_map'], edge_dict['npath_map'] = edge_dict['fpath_map'] * 0 - 1, edge_dict['npath_map'] * 0 - 1 - end_pts = edges_in_mask[edge_id] - for end_pt in end_pts: - cur_edge_infos = edges_infos[(end_pt[0], end_pt[1])] - cur_info = [xx for xx in cur_edge_infos if xx['mask_id'] == edge_id][0] - other_infos = [xx for xx in cur_edge_infos if xx['mask_id'] != edge_id and len(xx['cont_end_pts']) > 0] - if len(cur_info['cont_end_pts']) > 0 or (len(cur_info['cont_end_pts']) == 0 and len(other_infos) == 0): - for fnode in cur_info['fpath']: - edge_dict['fpath_map'][fnode[0], fnode[1]] = cur_info['comp_edge_id'] - for fnode in cur_info['npath']: - edge_dict['npath_map'][fnode[0], fnode[1]] = cur_info['comp_edge_id'] - fnmap = edge_dict['fpath_map'] * 1 - fnmap[edge_dict['npath_map'] != -1] = edge_dict['npath_map'][edge_dict['npath_map'] != -1] - for end_pt in end_pts: - cur_edge_infos = edges_infos[(end_pt[0], end_pt[1])] - cur_info = [xx for xx in cur_edge_infos if xx['mask_id'] == edge_id][0] - cur_depth = cur_info['depth'] - other_infos = [xx for xx in cur_edge_infos if xx['mask_id'] != edge_id and len(xx['cont_end_pts']) > 0] - comp_edge_id = cur_info['comp_edge_id'] - if len(cur_info['cont_end_pts']) == 0 and len(other_infos) > 0: - other_infos = sorted(other_infos, key=lambda aa: abs(abs(aa['cont_end_pts'][2]) - abs(cur_depth))) - for other_info in other_infos: - tmp_fmap, tmp_nmap = np.zeros((mesh.graph['H'], mesh.graph['W'])) - 1, np.zeros((mesh.graph['H'], mesh.graph['W'])) - 1 - for fnode in other_info['fpath']: - if fnmap[fnode[0], fnode[1]] != -1: - tmp_fmap = tmp_fmap * 0 - 1 - break - else: - tmp_fmap[fnode[0], fnode[1]] = comp_edge_id - if fnmap[fnode[0], fnode[1]] != -1: - continue - for fnode in other_info['npath']: - if fnmap[fnode[0], fnode[1]] != -1: - tmp_nmap = tmp_nmap * 0 - 1 - break - else: - tmp_nmap[fnode[0], fnode[1]] = comp_edge_id - if fnmap[fnode[0], fnode[1]] != -1: - continue - break - if min(tmp_fmap.max(), tmp_nmap.max()) != -1: - edge_dict['fpath_map'] = tmp_fmap - edge_dict['fpath_map'][edge_dict['valid_area'] == 0] = -1 - edge_dict['npath_map'] = tmp_nmap - edge_dict['npath_map'][edge_dict['valid_area'] == 0] = -1 - discard_map = ((tmp_nmap != -1).astype(np.uint8) + (tmp_fmap != -1).astype(np.uint8)) * edge_dict['mask'] - else: - for fnode in cur_info['fpath']: - edge_dict['fpath_map'][fnode[0], fnode[1]] = cur_info['comp_edge_id'] - for fnode in cur_info['npath']: - edge_dict['npath_map'][fnode[0], fnode[1]] = cur_info['comp_edge_id'] - if edge_dict['npath_map'].min() == 0 or edge_dict['fpath_map'].min() == 0: - import pdb; pdb.set_trace() - edge_dict['output'] = 
(edge_dict['npath_map'] > -1) * edge_dict['mask'] + edge_dict['context'] * edge_dict['edge'] - mesh, _, _, _ = create_placeholder(edge_dict['context'], edge_dict['mask'], - edge_dict['depth'], edge_dict['fpath_map'], - edge_dict['npath_map'], mesh, inpaint_iter, - edge_ccs, - extend_edge_ccs[edge_id], - edge_maps_with_id, - edge_id) - - dxs, dys = np.where(discard_map != 0) - for dx, dy in zip(dxs, dys): - mesh.nodes[(dx, dy)]['inpaint_twice'] = False - depth_dict = depth_inpainting(context_cc, extend_context_cc, erode_context_cc | extend_erode_context_ccs[edge_id], mask_cc, mesh, config, union_size, depth_feat_model, edge_dict['output']) - refine_depth_output = depth_dict['output']*depth_dict['mask'] - for near_id in np.unique(edge_dict['npath_map'])[1:]: - refine_depth_output = refine_depth_around_edge(refine_depth_output.copy(), - (edge_dict['fpath_map'] == near_id).astype(np.uint8) * edge_dict['mask'], - (edge_dict['fpath_map'] == near_id).astype(np.uint8), - (edge_dict['npath_map'] == near_id).astype(np.uint8) * edge_dict['mask'], - depth_dict['mask'].copy(), - depth_dict['output'] * depth_dict['context'], - config) - depth_dict['output'][depth_dict['mask'] > 0] = refine_depth_output[depth_dict['mask'] > 0] - rgb_dict = get_rgb_from_nodes(context_cc | extend_context_cc, - erode_context_cc | extend_erode_context_ccs[edge_id], mask_cc, mesh.graph['H'], mesh.graph['W'], mesh) - if np.all(rgb_dict['mask'] == edge_dict['mask']) is False: - import pdb; pdb.set_trace() - rgb_dict['edge'] = edge_dict['output'] - patch_rgb_dict = dict() - patch_rgb_dict['mask'], patch_rgb_dict['context'], patch_rgb_dict['rgb'], \ - patch_rgb_dict['edge'] = crop_maps_by_size(union_size, rgb_dict['mask'], - rgb_dict['context'], rgb_dict['rgb'], - rgb_dict['edge']) - tensor_rgb_dict = convert2tensor(patch_rgb_dict) - resize_rgb_dict = {k: v.clone() for k, v in tensor_rgb_dict.items()} - max_hw = np.array([*patch_rgb_dict['mask'].shape[-2:]]).max() - init_frac = config['largest_size'] / (np.array([*patch_rgb_dict['mask'].shape[-2:]]).prod() ** 0.5) - resize_hw = [patch_rgb_dict['mask'].shape[-2] * init_frac, patch_rgb_dict['mask'].shape[-1] * init_frac] - resize_max_hw = max(resize_hw) - frac = (np.floor(resize_max_hw / 128.) * 128.) 
/ max_hw - if frac < 1: - resize_mark = torch.nn.functional.interpolate(torch.cat((resize_rgb_dict['mask'], - resize_rgb_dict['context']), - dim=1), - scale_factor=frac, - mode='area') - resize_rgb_dict['mask'] = (resize_mark[:, 0:1] > 0).float() - resize_rgb_dict['context'] = (resize_mark[:, 1:2] == 1).float() - resize_rgb_dict['context'][resize_rgb_dict['mask'] > 0] = 0 - resize_rgb_dict['rgb'] = torch.nn.functional.interpolate(resize_rgb_dict['rgb'], - scale_factor=frac, - mode='area') - resize_rgb_dict['rgb'] = resize_rgb_dict['rgb'] * resize_rgb_dict['context'] - resize_rgb_dict['edge'] = torch.nn.functional.interpolate(resize_rgb_dict['edge'], - scale_factor=frac, - mode='area') - resize_rgb_dict['edge'] = (resize_rgb_dict['edge'] > 0).float() * 0 - resize_rgb_dict['edge'] = resize_rgb_dict['edge'] * (resize_rgb_dict['context'] + resize_rgb_dict['mask']) - rgb_input_feat = torch.cat((resize_rgb_dict['rgb'], resize_rgb_dict['edge']), dim=1) - rgb_input_feat[:, 3] = 1 - rgb_input_feat[:, 3] - resize_mask = open_small_mask(resize_rgb_dict['mask'], resize_rgb_dict['context'], 3, 41) - specified_hole = resize_mask - with torch.no_grad(): - rgb_output = rgb_model.forward_3P(specified_hole, - resize_rgb_dict['context'], - resize_rgb_dict['rgb'], - resize_rgb_dict['edge'], - unit_length=128, - cuda=device) - rgb_output = rgb_output.cpu() - if config.get('gray_image') is True: - rgb_output = rgb_output.mean(1, keepdim=True).repeat((1,3,1,1)) - rgb_output = rgb_output.cpu() - resize_rgb_dict['output'] = rgb_output * resize_rgb_dict['mask'] + resize_rgb_dict['rgb'] - tensor_rgb_dict['output'] = resize_rgb_dict['output'] - if frac < 1: - tensor_rgb_dict['output'] = torch.nn.functional.interpolate(tensor_rgb_dict['output'], - size=tensor_rgb_dict['mask'].shape[-2:], - mode='bicubic') - tensor_rgb_dict['output'] = tensor_rgb_dict['output'] * \ - tensor_rgb_dict['mask'] + (tensor_rgb_dict['rgb'] * tensor_rgb_dict['context']) - patch_rgb_dict['output'] = tensor_rgb_dict['output'].data.cpu().numpy().squeeze().transpose(1,2,0) - rgb_dict['output'] = np.zeros((mesh.graph['H'], mesh.graph['W'], 3)) - rgb_dict['output'][union_size['x_min']:union_size['x_max'], union_size['y_min']:union_size['y_max']] = \ - patch_rgb_dict['output'] - - if require_depth_edge(patch_edge_dict['edge'], patch_edge_dict['mask']) or inpaint_iter > 0: - edge_occlusion = True - else: - edge_occlusion = False - for node in erode_context_cc: - if rgb_dict['mask'][node[0], node[1]] > 0: - for info in info_on_pix[(node[0], node[1])]: - if abs(info['depth']) == abs(node[2]): - info['update_color'] = (rgb_dict['output'][node[0], node[1]] * 255).astype(np.uint8) - if frac < 1.: - depth_edge_dilate_2_color_flag = False - else: - depth_edge_dilate_2_color_flag = True - hxs, hys = np.where((rgb_dict['mask'] > 0) & (rgb_dict['erode'] == 0)) - for hx, hy in zip(hxs, hys): - real_depth = None - if abs(depth_dict['output'][hx, hy]) <= abs(np_depth[hx, hy]): - depth_dict['output'][hx, hy] = np_depth[hx, hy] + 0.01 - node = (hx, hy, -depth_dict['output'][hx, hy]) - if info_on_pix.get((node[0], node[1])) is not None: - for info in info_on_pix.get((node[0], node[1])): - if info.get('inpaint_id') is None or abs(info['inpaint_id'] < mesh.nodes[(hx, hy)]['inpaint_id']): - pre_depth = info['depth'] if info.get('real_depth') is None else info['real_depth'] - if abs(node[2]) < abs(pre_depth): - node = (node[0], node[1], -(abs(pre_depth) + 0.001)) - if mesh.has_node(node): - real_depth = node[2] - while True: - if mesh.has_node(node): - node = (node[0], 
node[1], -(abs(node[2]) + 0.001)) - else: - break - if real_depth == node[2]: - real_depth = None - cur_disp = 1./node[2] - if not(mesh.has_node(node)): - if not mesh.has_node((node[0], node[1])): - print("2D node not found.") - import pdb; pdb.set_trace() - if inpaint_iter == 1: - paint = (rgb_dict['output'][hx, hy] * 255).astype(np.uint8) - else: - paint = (rgb_dict['output'][hx, hy] * 255).astype(np.uint8) - ndict = dict(color=paint, - synthesis=True, - disp=cur_disp, - cc_id=set([edge_id]), - overlap_number=1.0, - refine_depth=False, - edge_occlusion=edge_occlusion, - depth_edge_dilate_2_color_flag=depth_edge_dilate_2_color_flag, - real_depth=real_depth) - mesh, _, _ = refresh_node((node[0], node[1]), mesh.nodes[(node[0], node[1])], node, ndict, mesh, stime=True) - if inpaint_iter == 0 and mesh.degree(node) < 4: - connnect_points_ccs[edge_id].add(node) - if info_on_pix.get((hx, hy)) is None: - info_on_pix[(hx, hy)] = [] - new_info = {'depth':node[2], - 'color': paint, - 'synthesis':True, - 'disp':cur_disp, - 'cc_id':set([edge_id]), - 'inpaint_id':inpaint_iter + 1, - 'edge_occlusion':edge_occlusion, - 'overlap_number':1.0, - 'real_depth': real_depth} - info_on_pix[(hx, hy)].append(new_info) - specific_edge_id = tmp_specific_edge_id - for erode_id, erode_context_cc in enumerate(erode_context_ccs): - if len(specific_edge_id) > 0 and erode_id not in specific_edge_id: - continue - for erode_node in erode_context_cc: - for info in info_on_pix[(erode_node[0], erode_node[1])]: - if info['depth'] == erode_node[2]: - info['color'] = info['update_color'] - mesh.nodes[erode_node]['color'] = info['update_color'] - np_image[(erode_node[0], erode_node[1])] = info['update_color'] - new_edge_ccs = [set() for _ in range(mesh.graph['max_edge_id'] + 1)] - for node in mesh.nodes: - if len(node) == 2: - mesh.remove_node(node) - continue - if mesh.nodes[node].get('edge_id') is not None and mesh.nodes[node].get('inpaint_id') == inpaint_iter + 1: - if mesh.nodes[node].get('inpaint_twice') is False: - continue - try: - new_edge_ccs[mesh.nodes[node].get('edge_id')].add(node) - except: - import pdb; pdb.set_trace() - specific_mask_nodes = None - if inpaint_iter == 0: - mesh, info_on_pix = refine_color_around_edge(mesh, info_on_pix, new_edge_ccs, config, False) - - return mesh, info_on_pix, specific_mask_nodes, new_edge_ccs, connnect_points_ccs, np_image - - -def write_ply(image, - depth, - int_mtx, - ply_name, - config, - rgb_model, - depth_edge_model, - depth_edge_model_init, - depth_feat_model): - depth = depth.astype(np.float64) - input_mesh, xy2depth, image, depth = create_mesh(depth, image, int_mtx, config) - - H, W = input_mesh.graph['H'], input_mesh.graph['W'] - input_mesh = tear_edges(input_mesh, config['depth_threshold'], xy2depth) - input_mesh, info_on_pix = generate_init_node(input_mesh, config, min_node_in_cc=200) - edge_ccs, input_mesh, edge_mesh = group_edges(input_mesh, config, image, remove_conflict_ordinal=False) - edge_canvas = np.zeros((H, W)) - 1 - - input_mesh, info_on_pix, depth = reassign_floating_island(input_mesh, info_on_pix, image, depth) - input_mesh = update_status(input_mesh, info_on_pix) - specific_edge_id = [] - edge_ccs, input_mesh, edge_mesh = group_edges(input_mesh, config, image, remove_conflict_ordinal=True) - pre_depth = depth.copy() - input_mesh, info_on_pix, edge_mesh, depth, aft_mark = remove_dangling(input_mesh, edge_ccs, edge_mesh, info_on_pix, image, depth, config) - - input_mesh, depth, info_on_pix = update_status(input_mesh, info_on_pix, depth) - edge_ccs, input_mesh, 
edge_mesh = group_edges(input_mesh, config, image, remove_conflict_ordinal=True) - edge_canvas = np.zeros((H, W)) - 1 - - mesh, info_on_pix, depth = fill_missing_node(input_mesh, info_on_pix, image, depth) - if config['extrapolate_border'] is True: - pre_depth = depth.copy() - input_mesh, info_on_pix, depth = refresh_bord_depth(input_mesh, info_on_pix, image, depth) - input_mesh = remove_node_feat(input_mesh, 'edge_id') - aft_depth = depth.copy() - input_mesh, info_on_pix, depth, image = enlarge_border(input_mesh, info_on_pix, depth, image, config) - noext_H, noext_W = H, W - H, W = image.shape[:2] - input_mesh, info_on_pix = fill_dummy_bord(input_mesh, info_on_pix, image, depth, config) - edge_ccs, input_mesh, edge_mesh = \ - group_edges(input_mesh, config, image, remove_conflict_ordinal=True) - input_mesh = combine_end_node(input_mesh, edge_mesh, edge_ccs, depth) - input_mesh, depth, info_on_pix = update_status(input_mesh, info_on_pix, depth) - edge_ccs, input_mesh, edge_mesh = \ - group_edges(input_mesh, config, image, remove_conflict_ordinal=True, spdb=False) - input_mesh = remove_redundant_edge(input_mesh, edge_mesh, edge_ccs, info_on_pix, config, redundant_number=config['redundant_number'], spdb=False) - input_mesh, depth, info_on_pix = update_status(input_mesh, info_on_pix, depth) - edge_ccs, input_mesh, edge_mesh = group_edges(input_mesh, config, image, remove_conflict_ordinal=True) - input_mesh = combine_end_node(input_mesh, edge_mesh, edge_ccs, depth) - input_mesh = remove_redundant_edge(input_mesh, edge_mesh, edge_ccs, info_on_pix, config, redundant_number=config['redundant_number'], invalid=True, spdb=False) - input_mesh, depth, info_on_pix = update_status(input_mesh, info_on_pix, depth) - edge_ccs, input_mesh, edge_mesh = group_edges(input_mesh, config, image, remove_conflict_ordinal=True) - input_mesh = combine_end_node(input_mesh, edge_mesh, edge_ccs, depth) - input_mesh, depth, info_on_pix = update_status(input_mesh, info_on_pix, depth) - edge_ccs, input_mesh, edge_mesh = group_edges(input_mesh, config, image, remove_conflict_ordinal=True) - edge_condition = lambda x, m: m.nodes[x].get('far') is not None and len(m.nodes[x].get('far')) > 0 - edge_map = get_map_from_ccs(edge_ccs, input_mesh.graph['H'], input_mesh.graph['W'], input_mesh, edge_condition) - other_edge_with_id = get_map_from_ccs(edge_ccs, input_mesh.graph['H'], input_mesh.graph['W'], real_id=True) - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="up") - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="left") - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="down") - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="right") - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="right-up") - info_on_pix, input_mesh, image, depth, edge_ccs = 
extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="right-down") - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="left-up") - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="left-down") - specific_edge_loc = None - specific_edge_id = [] - vis_edge_id = None - context_ccs, mask_ccs, broken_mask_ccs, edge_ccs, erode_context_ccs, \ - init_mask_connect, edge_maps, extend_context_ccs, extend_edge_ccs, extend_erode_context_ccs = \ - context_and_holes(input_mesh, - edge_ccs, - config, - specific_edge_id, - specific_edge_loc, - depth_feat_model, - inpaint_iter=0, - vis_edge_id=vis_edge_id) - edge_canvas = np.zeros((H, W)) - mask = np.zeros((H, W)) - context = np.zeros((H, W)) - vis_edge_ccs = filter_edge(input_mesh, edge_ccs, config) - edge_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - 1 - specific_edge_loc = None - FG_edge_maps = edge_maps.copy() - edge_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - 1 - # for cc_id, cc in enumerate(edge_ccs): - # for node in cc: - # edge_canvas[node[0], node[1]] = cc_id - # f, ((ax0, ax1, ax2)) = plt.subplots(1, 3, sharex=True, sharey=True); ax0.imshow(1./depth); ax1.imshow(image); ax2.imshow(edge_canvas); plt.show() - input_mesh, info_on_pix, specific_edge_nodes, new_edge_ccs, connect_points_ccs, image = DL_inpaint_edge(input_mesh, - info_on_pix, - config, - image, - depth, - context_ccs, - erode_context_ccs, - extend_context_ccs, - extend_erode_context_ccs, - mask_ccs, - broken_mask_ccs, - edge_ccs, - extend_edge_ccs, - init_mask_connect, - edge_maps, - rgb_model, - depth_edge_model, - depth_edge_model_init, - depth_feat_model, - specific_edge_id, - specific_edge_loc, - inpaint_iter=0) - specific_edge_id = [] - edge_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - connect_points_ccs = [set() for _ in connect_points_ccs] - context_ccs, mask_ccs, broken_mask_ccs, edge_ccs, erode_context_ccs, init_mask_connect, \ - edge_maps, extend_context_ccs, extend_edge_ccs, extend_erode_context_ccs = \ - context_and_holes(input_mesh, new_edge_ccs, config, specific_edge_id, specific_edge_loc, depth_feat_model, connect_points_ccs, inpaint_iter=1) - mask_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - context_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - erode_context_ccs_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - edge_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - # edge_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - 1 - # for cc_id, cc in enumerate(edge_ccs): - # for node in cc: - # edge_canvas[node[0], node[1]] = cc_id - specific_edge_id = [] - input_mesh, info_on_pix, specific_edge_nodes, new_edge_ccs, _, image = DL_inpaint_edge(input_mesh, - info_on_pix, - config, - image, - depth, - context_ccs, - erode_context_ccs, - extend_context_ccs, - extend_erode_context_ccs, - mask_ccs, - broken_mask_ccs, - edge_ccs, - extend_edge_ccs, - init_mask_connect, - edge_maps, - rgb_model, - depth_edge_model, - depth_edge_model_init, - depth_feat_model, - specific_edge_id, - specific_edge_loc, - inpaint_iter=1) - 
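- # Both inpainting passes are finished; the remainder of write_ply only serializes the result:
- # each pixel stack in info_on_pix is reprojected to 3D with the inverse camera intrinsics and
- # written as a colored vertex (plus faces) to an ASCII PLY file, or returned as raw numpy
- # arrays when config['save_ply'] is not set.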
vertex_id = 0 - input_mesh.graph['H'], input_mesh.graph['W'] = input_mesh.graph['noext_H'], input_mesh.graph['noext_W'] - background_canvas = np.zeros((input_mesh.graph['H'], - input_mesh.graph['W'], - 3)) - ply_flag = config.get('save_ply') - if ply_flag is True: - node_str_list = [] - else: - node_str_color = [] - node_str_point = [] - out_fmt = lambda x, x_flag: str(x) if x_flag is True else x - point_time = 0 - hlight_time = 0 - cur_id_time = 0 - node_str_time = 0 - generate_face_time = 0 - point_list = [] - k_00, k_02, k_11, k_12 = \ - input_mesh.graph['cam_param_pix_inv'][0, 0], input_mesh.graph['cam_param_pix_inv'][0, 2], \ - input_mesh.graph['cam_param_pix_inv'][1, 1], input_mesh.graph['cam_param_pix_inv'][1, 2] - w_offset = input_mesh.graph['woffset'] - h_offset = input_mesh.graph['hoffset'] - for pix_xy, pix_list in info_on_pix.items(): - for pix_idx, pix_info in enumerate(pix_list): - pix_depth = pix_info['depth'] if pix_info.get('real_depth') is None else pix_info['real_depth'] - str_pt = [out_fmt(x, ply_flag) for x in reproject_3d_int_detail(pix_xy[0], pix_xy[1], pix_depth, - k_00, k_02, k_11, k_12, w_offset, h_offset)] - if input_mesh.has_node((pix_xy[0], pix_xy[1], pix_info['depth'])) is False: - return False - continue - if pix_info.get('overlap_number') is not None: - str_color = [out_fmt(x, ply_flag) for x in (pix_info['color']/pix_info['overlap_number']).astype(np.uint8).tolist()] - else: - str_color = [out_fmt(x, ply_flag) for x in pix_info['color'].tolist()] - if pix_info.get('edge_occlusion') is True: - str_color.append(out_fmt(4, ply_flag)) - else: - if pix_info.get('inpaint_id') is None: - str_color.append(out_fmt(1, ply_flag)) - else: - str_color.append(out_fmt(pix_info.get('inpaint_id') + 1, ply_flag)) - if pix_info.get('modified_border') is True or pix_info.get('ext_pixel') is True: - if len(str_color) == 4: - str_color[-1] = out_fmt(5, ply_flag) - else: - str_color.append(out_fmt(5, ply_flag)) - pix_info['cur_id'] = vertex_id - input_mesh.nodes[(pix_xy[0], pix_xy[1], pix_info['depth'])]['cur_id'] = out_fmt(vertex_id, ply_flag) - vertex_id += 1 - if ply_flag is True: - node_str_list.append(' '.join(str_pt) + ' ' + ' '.join(str_color) + '\n') - else: - node_str_color.append(str_color) - node_str_point.append(str_pt) - str_faces = generate_face(input_mesh, info_on_pix, config) - if config['save_ply'] is True: - print("Writing mesh file %s ..." 
% ply_name) - with open(ply_name, 'w') as ply_fi: - ply_fi.write('ply\n' + 'format ascii 1.0\n') - ply_fi.write('comment H ' + str(int(input_mesh.graph['H'])) + '\n') - ply_fi.write('comment W ' + str(int(input_mesh.graph['W'])) + '\n') - ply_fi.write('comment hFov ' + str(float(input_mesh.graph['hFov'])) + '\n') - ply_fi.write('comment vFov ' + str(float(input_mesh.graph['vFov'])) + '\n') - ply_fi.write('element vertex ' + str(len(node_str_list)) + '\n') - ply_fi.write('property float x\n' + \ - 'property float y\n' + \ - 'property float z\n' + \ - 'property uchar red\n' + \ - 'property uchar green\n' + \ - 'property uchar blue\n' + \ - 'property uchar alpha\n') - ply_fi.write('element face ' + str(len(str_faces)) + '\n') - ply_fi.write('property list uchar int vertex_index\n') - ply_fi.write('end_header\n') - ply_fi.writelines(node_str_list) - ply_fi.writelines(str_faces) - ply_fi.close() - return input_mesh - else: - H = int(input_mesh.graph['H']) - W = int(input_mesh.graph['W']) - hFov = input_mesh.graph['hFov'] - vFov = input_mesh.graph['vFov'] - node_str_color = np.array(node_str_color).astype(np.float32) - node_str_color[..., :3] = node_str_color[..., :3] / 255. - node_str_point = np.array(node_str_point) - str_faces = np.array(str_faces) - - return node_str_point, node_str_color, str_faces, H, W, hFov, vFov - -def read_ply(mesh_fi): - ply_fi = open(mesh_fi, 'r') - Height = None - Width = None - hFov = None - vFov = None - while True: - line = ply_fi.readline().split('\n')[0] - if line.startswith('element vertex'): - num_vertex = int(line.split(' ')[-1]) - elif line.startswith('element face'): - num_face = int(line.split(' ')[-1]) - elif line.startswith('comment'): - if line.split(' ')[1] == 'H': - Height = int(line.split(' ')[-1].split('\n')[0]) - if line.split(' ')[1] == 'W': - Width = int(line.split(' ')[-1].split('\n')[0]) - if line.split(' ')[1] == 'hFov': - hFov = float(line.split(' ')[-1].split('\n')[0]) - if line.split(' ')[1] == 'vFov': - vFov = float(line.split(' ')[-1].split('\n')[0]) - elif line.startswith('end_header'): - break - contents = ply_fi.readlines() - vertex_infos = contents[:num_vertex] - face_infos = contents[num_vertex:] - verts = [] - colors = [] - faces = [] - for v_info in vertex_infos: - str_info = [float(v) for v in v_info.split('\n')[0].split(' ')] - if len(str_info) == 6: - vx, vy, vz, r, g, b = str_info - else: - vx, vy, vz, r, g, b, hi = str_info - verts.append([vx, vy, vz]) - colors.append([r, g, b, hi]) - verts = np.array(verts) - try: - colors = np.array(colors) - colors[..., :3] = colors[..., :3]/255. 
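- # (vertex colors arrive as 0-255 plus a per-vertex flag channel; only the RGB part is rescaled to [0, 1])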
- except: - import pdb - pdb.set_trace() - - for f_info in face_infos: - _, v1, v2, v3 = [int(f) for f in f_info.split('\n')[0].split(' ')] - faces.append([v1, v2, v3]) - faces = np.array(faces) - - - return verts, colors, faces, Height, Width, hFov, vFov - - -class Canvas_view(): - def __init__(self, - fov, - verts, - faces, - colors, - canvas_size, - factor=1, - bgcolor='gray', - proj='perspective', - ): - self.canvas = scene.SceneCanvas(bgcolor=bgcolor, size=(canvas_size*factor, canvas_size*factor)) - self.view = self.canvas.central_widget.add_view() - self.view.camera = 'perspective' - self.view.camera.fov = fov - self.mesh = visuals.Mesh(shading=None) - self.mesh.attach(Alpha(1.0)) - self.view.add(self.mesh) - self.tr = self.view.camera.transform - self.mesh.set_data(vertices=verts, faces=faces, vertex_colors=colors[:, :3]) - self.translate([0,0,0]) - self.rotate(axis=[1,0,0], angle=180) - self.view_changed() - - def translate(self, trans=[0,0,0]): - self.tr.translate(trans) - - def rotate(self, axis=[1,0,0], angle=0): - self.tr.rotate(axis=axis, angle=angle) - - def view_changed(self): - self.view.camera.view_changed() - - def render(self): - return self.canvas.render() - - def reinit_mesh(self, verts, faces, colors): - self.mesh.set_data(vertices=verts, faces=faces, vertex_colors=colors[:, :3]) - - def reinit_camera(self, fov): - self.view.camera.fov = fov - self.view.camera.view_changed() - - -def output_3d_photo(verts, colors, faces, Height, Width, hFov, vFov, tgt_poses, video_traj_types, ref_pose, - output_dir, ref_image, int_mtx, config, image, videos_poses, video_basename, original_H=None, original_W=None, - border=None, depth=None, normal_canvas=None, all_canvas=None, mean_loc_depth=None): - - cam_mesh = netx.Graph() - cam_mesh.graph['H'] = Height - cam_mesh.graph['W'] = Width - cam_mesh.graph['original_H'] = original_H - cam_mesh.graph['original_W'] = original_W - int_mtx_real_x = int_mtx[0] * Width - int_mtx_real_y = int_mtx[1] * Height - cam_mesh.graph['hFov'] = 2 * np.arctan((1. / 2.) * ((cam_mesh.graph['original_W']) / int_mtx_real_x[0])) - cam_mesh.graph['vFov'] = 2 * np.arctan((1. / 2.) 
* ((cam_mesh.graph['original_H']) / int_mtx_real_y[1])) - colors = colors[..., :3] - - fov_in_rad = max(cam_mesh.graph['vFov'], cam_mesh.graph['hFov']) - fov = (fov_in_rad * 180 / np.pi) - print("fov: " + str(fov)) - init_factor = 1 - if config.get('anti_flickering') is True: - init_factor = 3 - if (cam_mesh.graph['original_H'] is not None) and (cam_mesh.graph['original_W'] is not None): - canvas_w = cam_mesh.graph['original_W'] - canvas_h = cam_mesh.graph['original_H'] - else: - canvas_w = cam_mesh.graph['W'] - canvas_h = cam_mesh.graph['H'] - canvas_size = max(canvas_h, canvas_w) - if normal_canvas is None: - normal_canvas = Canvas_view(fov, - verts, - faces, - colors, - canvas_size=canvas_size, - factor=init_factor, - bgcolor='gray', - proj='perspective') - else: - normal_canvas.reinit_mesh(verts, faces, colors) - normal_canvas.reinit_camera(fov) - img = normal_canvas.render() - backup_img, backup_all_img, all_img_wo_bound = img.copy(), img.copy() * 0, img.copy() * 0 - img = cv2.resize(img, (int(img.shape[1] / init_factor), int(img.shape[0] / init_factor)), interpolation=cv2.INTER_AREA) - if border is None: - border = [0, img.shape[0], 0, img.shape[1]] - H, W = cam_mesh.graph['H'], cam_mesh.graph['W'] - if (cam_mesh.graph['original_H'] is not None) and (cam_mesh.graph['original_W'] is not None): - aspect_ratio = cam_mesh.graph['original_H'] / cam_mesh.graph['original_W'] - else: - aspect_ratio = cam_mesh.graph['H'] / cam_mesh.graph['W'] - if aspect_ratio > 1: - img_h_len = cam_mesh.graph['H'] if cam_mesh.graph.get('original_H') is None else cam_mesh.graph['original_H'] - img_w_len = img_h_len / aspect_ratio - anchor = [0, - img.shape[0], - int(max(0, int((img.shape[1])//2 - img_w_len//2))), - int(min(int((img.shape[1])//2 + img_w_len//2), (img.shape[1])-1))] - elif aspect_ratio <= 1: - img_w_len = cam_mesh.graph['W'] if cam_mesh.graph.get('original_W') is None else cam_mesh.graph['original_W'] - img_h_len = img_w_len * aspect_ratio - anchor = [int(max(0, int((img.shape[0])//2 - img_h_len//2))), - int(min(int((img.shape[0])//2 + img_h_len//2), (img.shape[0])-1)), - 0, - img.shape[1]] - anchor = np.array(anchor) - plane_width = np.tan(fov_in_rad/2.) * np.abs(mean_loc_depth) - for video_pose, video_traj_type in zip(videos_poses, video_traj_types): - stereos = [] - tops = []; buttoms = []; lefts = []; rights = [] - for tp_id, tp in enumerate(video_pose): - rel_pose = np.linalg.inv(np.dot(tp, np.linalg.inv(ref_pose))) - axis, angle = transforms3d.axangles.mat2axangle(rel_pose[0:3, 0:3]) - normal_canvas.rotate(axis=axis, angle=(angle*180)/np.pi) - normal_canvas.translate(rel_pose[:3,3]) - new_mean_loc_depth = mean_loc_depth - float(rel_pose[2, 3]) - if 'dolly' in video_traj_type: - new_fov = float((np.arctan2(plane_width, np.array([np.abs(new_mean_loc_depth)])) * 180. 
/ np.pi) * 2) - normal_canvas.reinit_camera(new_fov) - else: - normal_canvas.reinit_camera(fov) - normal_canvas.view_changed() - img = normal_canvas.render() - img = cv2.GaussianBlur(img,(int(init_factor//2 * 2 + 1), int(init_factor//2 * 2 + 1)), 0) - img = cv2.resize(img, (int(img.shape[1] / init_factor), int(img.shape[0] / init_factor)), interpolation=cv2.INTER_AREA) - img = img[anchor[0]:anchor[1], anchor[2]:anchor[3]] - img = img[int(border[0]):int(border[1]), int(border[2]):int(border[3])] - - if any(np.array(config['crop_border']) > 0.0): - H_c, W_c, _ = img.shape - o_t = int(H_c * config['crop_border'][0]) - o_l = int(W_c * config['crop_border'][1]) - o_b = int(H_c * config['crop_border'][2]) - o_r = int(W_c * config['crop_border'][3]) - img = img[o_t:H_c-o_b, o_l:W_c-o_r] - img = cv2.resize(img, (W_c, H_c), interpolation=cv2.INTER_CUBIC) - - """ - img = cv2.resize(img, (int(img.shape[1] / init_factor), int(img.shape[0] / init_factor)), interpolation=cv2.INTER_CUBIC) - img = img[anchor[0]:anchor[1], anchor[2]:anchor[3]] - img = img[int(border[0]):int(border[1]), int(border[2]):int(border[3])] - - if config['crop_border'] is True: - top, buttom, left, right = find_largest_rect(img, bg_color=(128, 128, 128)) - tops.append(top); buttoms.append(buttom); lefts.append(left); rights.append(right) - """ - stereos.append(img[..., :3]) - normal_canvas.translate(-rel_pose[:3,3]) - normal_canvas.rotate(axis=axis, angle=-(angle*180)/np.pi) - normal_canvas.view_changed() - """ - if config['crop_border'] is True: - atop, abuttom = min(max(tops), img.shape[0]//2 - 10), max(min(buttoms), img.shape[0]//2 + 10) - aleft, aright = min(max(lefts), img.shape[1]//2 - 10), max(min(rights), img.shape[1]//2 + 10) - atop -= atop % 2; abuttom -= abuttom % 2; aleft -= aleft % 2; aright -= aright % 2 - else: - atop = 0; abuttom = img.shape[0] - img.shape[0] % 2; aleft = 0; aright = img.shape[1] - img.shape[1] % 2 - """ - atop = 0; abuttom = img.shape[0] - img.shape[0] % 2; aleft = 0; aright = img.shape[1] - img.shape[1] % 2 - crop_stereos = [] - for stereo in stereos: - crop_stereos.append((stereo[atop:abuttom, aleft:aright, :3] * 1).astype(np.uint8)) - stereos = crop_stereos - clip = ImageSequenceClip(stereos, fps=config['fps']) - if isinstance(video_basename, list): - video_basename = video_basename[0] - clip.write_videofile(os.path.join(output_dir, video_basename + '_' + video_traj_type + '.mp4'), fps=config['fps']) - - - - return normal_canvas, all_canvas diff --git a/spaces/facebook/ov-seg/open_vocab_seg/evaluation/generalized_sem_seg_evaluation.py b/spaces/facebook/ov-seg/open_vocab_seg/evaluation/generalized_sem_seg_evaluation.py deleted file mode 100644 index ce960ae7cbffde4a981be941ed03a8fc7025ed80..0000000000000000000000000000000000000000 --- a/spaces/facebook/ov-seg/open_vocab_seg/evaluation/generalized_sem_seg_evaluation.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. All Rights Reserved - -import itertools -import json -import numpy as np -import os -from collections import OrderedDict -import PIL.Image as Image -import torch - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.utils.comm import all_gather, is_main_process, synchronize -from detectron2.utils.file_io import PathManager - -from detectron2.evaluation import SemSegEvaluator - - -class GeneralizedSemSegEvaluator(SemSegEvaluator): - """ - Evaluate semantic segmentation metrics. 
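-    It extends detectron2's SemSegEvaluator with an optional post-processing hook applied to each prediction and, when the dataset metadata defines an evaluation_set, additionally reports per-subset (e.g. seen vs. unseen classes) mIoU, pAcc, and their harmonic mean (hIoU).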
- """ - - def __init__( - self, - dataset_name, - distributed=True, - output_dir=None, - *, - num_classes=None, - ignore_label=None, - post_process_func=None, - ): - super().__init__( - dataset_name, - distributed=distributed, - output_dir=output_dir, - num_classes=num_classes, - ignore_label=ignore_label, - ) - meta = MetadataCatalog.get(dataset_name) - try: - self._evaluation_set = meta.evaluation_set - except AttributeError: - self._evaluation_set = None - self.post_process_func = ( - post_process_func - if post_process_func is not None - else lambda x, **kwargs: x - ) - - def process(self, inputs, outputs): - """ - Args: - inputs: the inputs to a model. - It is a list of dicts. Each dict corresponds to an image and - contains keys like "height", "width", "file_name". - outputs: the outputs of a model. It is either list of semantic segmentation predictions - (Tensor [H, W]) or list of dicts with key "sem_seg" that contains semantic - segmentation prediction in the same format. - """ - for input, output in zip(inputs, outputs): - output = self.post_process_func( - output["sem_seg"], image=np.array(Image.open(input["file_name"])) - ) - output = output.argmax(dim=0).to(self._cpu_device) - pred = np.array(output, dtype=np.int) - with PathManager.open( - self.input_file_to_gt_file[input["file_name"]], "rb" - ) as f: - gt = np.array(Image.open(f), dtype=np.int) - - gt[gt == self._ignore_label] = self._num_classes - - self._conf_matrix += np.bincount( - (self._num_classes + 1) * pred.reshape(-1) + gt.reshape(-1), - minlength=self._conf_matrix.size, - ).reshape(self._conf_matrix.shape) - - self._predictions.extend(self.encode_json_sem_seg(pred, input["file_name"])) - - def evaluate(self): - """ - Evaluates standard semantic segmentation metrics (http://cocodataset.org/#stuff-eval): - - * Mean intersection-over-union averaged across classes (mIoU) - * Frequency Weighted IoU (fwIoU) - * Mean pixel accuracy averaged across classes (mACC) - * Pixel Accuracy (pACC) - """ - if self._distributed: - synchronize() - conf_matrix_list = all_gather(self._conf_matrix) - self._predictions = all_gather(self._predictions) - self._predictions = list(itertools.chain(*self._predictions)) - if not is_main_process(): - return - - self._conf_matrix = np.zeros_like(self._conf_matrix) - for conf_matrix in conf_matrix_list: - self._conf_matrix += conf_matrix - - if self._output_dir: - PathManager.mkdirs(self._output_dir) - file_path = os.path.join(self._output_dir, "sem_seg_predictions.json") - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(self._predictions)) - - acc = np.full(self._num_classes, np.nan, dtype=np.float) - iou = np.full(self._num_classes, np.nan, dtype=np.float) - tp = self._conf_matrix.diagonal()[:-1].astype(np.float) - pos_gt = np.sum(self._conf_matrix[:-1, :-1], axis=0).astype(np.float) - class_weights = pos_gt / np.sum(pos_gt) - pos_pred = np.sum(self._conf_matrix[:-1, :-1], axis=1).astype(np.float) - acc_valid = pos_gt > 0 - acc[acc_valid] = tp[acc_valid] / pos_gt[acc_valid] - iou_valid = (pos_gt + pos_pred) > 0 - union = pos_gt + pos_pred - tp - iou[acc_valid] = tp[acc_valid] / union[acc_valid] - macc = np.sum(acc[acc_valid]) / np.sum(acc_valid) - miou = np.sum(iou[acc_valid]) / np.sum(iou_valid) - fiou = np.sum(iou[acc_valid] * class_weights[acc_valid]) - pacc = np.sum(tp) / np.sum(pos_gt) - - res = {} - res["mIoU"] = 100 * miou - res["fwIoU"] = 100 * fiou - for i, name in enumerate(self._class_names): - res["IoU-{}".format(name)] = 100 * iou[i] - res["mACC"] = 100 * macc - 
res["pACC"] = 100 * pacc - for i, name in enumerate(self._class_names): - res["ACC-{}".format(name)] = 100 * acc[i] - if self._evaluation_set is not None: - for set_name, set_inds in self._evaluation_set.items(): - iou_list = [] - set_inds = np.array(set_inds, np.int) - mask = np.zeros((len(iou),)).astype(np.bool) - mask[set_inds] = 1 - miou = np.sum(iou[mask][acc_valid[mask]]) / np.sum(iou_valid[mask]) - pacc = np.sum(tp[mask]) / np.sum(pos_gt[mask]) - res["mIoU-{}".format(set_name)] = 100 * miou - res["pAcc-{}".format(set_name)] = 100 * pacc - iou_list.append(miou) - miou = np.sum(iou[~mask][acc_valid[~mask]]) / np.sum(iou_valid[~mask]) - pacc = np.sum(tp[~mask]) / np.sum(pos_gt[~mask]) - res["mIoU-un{}".format(set_name)] = 100 * miou - res["pAcc-un{}".format(set_name)] = 100 * pacc - iou_list.append(miou) - res["hIoU-{}".format(set_name)] = ( - 100 * len(iou_list) / sum([1 / iou for iou in iou_list]) - ) - if self._output_dir: - file_path = os.path.join(self._output_dir, "sem_seg_evaluation.pth") - with PathManager.open(file_path, "wb") as f: - torch.save(res, f) - results = OrderedDict({"sem_seg": res}) - self._logger.info(results) - return results diff --git a/spaces/falterWliame/Face_Mask_Detection/Alpha Test Psicologia Pdf Download.md b/spaces/falterWliame/Face_Mask_Detection/Alpha Test Psicologia Pdf Download.md deleted file mode 100644 index 96a47eb7d5103844a013bb6d4831d8c792f012d3..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Alpha Test Psicologia Pdf Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Alpha Test Psicologia Pdf Download


            Download File 🌟 https://urlca.com/2uDdpR



            -
-by M Pocinho · 2019 - Cronbach's alpha, α (or alpha coefficient), measures reliability, or internal consistency. Reliability can be defined as how well a test measures what it is supposed to measure. The alpha test was developed for this purpose and is covered in the first part of the psychology manual.
            -
            -
            -

            diff --git a/spaces/falterWliame/Face_Mask_Detection/F1 2018 Headline Edition V1.06 DLC [FitGirl Repack] Hack Pc.md b/spaces/falterWliame/Face_Mask_Detection/F1 2018 Headline Edition V1.06 DLC [FitGirl Repack] Hack Pc.md deleted file mode 100644 index e272abf3539cb8c930431a2aff72e39fa98c1da0..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/F1 2018 Headline Edition V1.06 DLC [FitGirl Repack] Hack Pc.md +++ /dev/null @@ -1,12 +0,0 @@ -

            F1 2018 Headline Edition V1.06 DLC [FitGirl Repack] Hack Pc


            DOWNLOAD 🗸 https://urlca.com/2uDdDz



- -#FitGirlRepacks HOW TO DOWNLOAD F1 2018: HEADLINE EDITION - V1.06 + DLC | Thanks FitGirl Repack. -Hello! This is FitGirl, and in this guide I will walk you through downloading F1 2018: HEADLINE EDITION - V1.06 + DLC. -Download link for this version of the game: https://www.fitgirl.com/downloads/get-fitgirl-headdlc-edition-v1016-edition-for-fitgirl-edition.html?i=EQQMVZ3C0J -Not every motorsport fan is willing to spend hundreds of thousands of rubles on a PC just to buy expensive games, and not every game is compatible with every version of Windows.
            -
            -
            -

            diff --git a/spaces/falterWliame/Face_Mask_Detection/Printshop Mail Full Version Free 13 2021.md b/spaces/falterWliame/Face_Mask_Detection/Printshop Mail Full Version Free 13 2021.md deleted file mode 100644 index fe49bd4d077892eb9dc192fd62b4ea11ec13a6d4..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Printshop Mail Full Version Free 13 2021.md +++ /dev/null @@ -1,6 +0,0 @@ -

            printshop mail full version free 13


            Download Zip 🔗 https://urlca.com/2uDdIE



            -
-Entries 10-15-13. What's new in PrintShop Mail: PrintShop Mail 98 documents. PrintShop Mail then allows your PostScript® printer to print them. This setting can be changed from the Start menu (see 'Start Menu' on page 44). If the printer is not listed, see 'Connecting a Printer' on page 258 and 'Printing and Faxing' on page 439. From the Start menu, select Programs, and then select the program.
            -
            -
            -

            diff --git a/spaces/fatiXbelha/sd/Clash of Clans for PC Windows 8.1 Tips and Tricks to Master the Game.md b/spaces/fatiXbelha/sd/Clash of Clans for PC Windows 8.1 Tips and Tricks to Master the Game.md deleted file mode 100644 index 109d5de47ae804883aced9d4f068ae96c4463abe..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Clash of Clans for PC Windows 8.1 Tips and Tricks to Master the Game.md +++ /dev/null @@ -1,131 +0,0 @@ - -

            Clash of Clans Game Download for PC Windows 8.1

            -

            Are you a fan of strategy games that challenge your mind and test your skills? Do you want to lead your own army of barbarians, wizards, dragons, and other mighty warriors? Do you want to join millions of players from around the world in epic clan battles? If you answered yes to any of these questions, then you should try Clash of Clans, one of the most popular and addictive mobile games ever created.

            -

            What is Clash of Clans?

            -

            Clash of Clans is a free-to-play online multiplayer game developed by Supercell, a Finnish company that also created other hit games like Hay Day, Boom Beach, and Brawl Stars. In Clash of Clans, you are the chief of your own village, which you have to build, defend, and expand by collecting resources, training troops, and attacking other players' bases. You can also join or create a clan with other players and cooperate in clan wars, clan games, and other events.

            -

            clash of clans game download for pc windows 8.1


            Download File ✯✯✯ https://urllie.com/2uNxiU



            -

            Features of Clash of Clans

            -

            Clash of Clans has many features that make it an exciting and enjoyable game for players of all ages and preferences. Some of these features are:

            -
              -
            • A variety of troops, buildings, spells, and heroes to choose from and customize your army and base.
            • -
            • A dynamic and competitive ranking system that rewards you for your achievements and progress.
            • -
            • A rich and diverse world with different themes, environments, and enemies to explore and conquer.
            • -
            • A friendly and active community with chat, forums, clans, and social media integration.
            • -
            • Regular updates and events that add new content and challenges to the game.
            • -
            -

            How to download and play Clash of Clans on PC Windows 8.1

            -

Although Clash of Clans is primarily designed for mobile devices, you can also play it on your PC Windows 8.1 with the help of an emulator. An emulator is a piece of software that lets you run Android apps on your PC. There are many emulators available online, but one of the best is LDPlayer, which is fast, safe, and easy to use. Here are the steps to download and play Clash of Clans on PC Windows 8.1 using LDPlayer (a small command-line sketch for sideloading an APK follows the steps):

            -
              -
            1. Download LDPlayer from its official website and install it on your PC.
            2. -
            3. Launch LDPlayer and search for Clash of Clans in the LD Store or Google Play Store.
            4. -
            5. Click on the install button and wait for the game to download.
            6. -
            7. Once the game is installed, click on the icon to launch it.
            8. -
            9. Log in with your Google account or create a new one if you don't have one.
            10. -
            11. Enjoy playing Clash of Clans on your PC Windows 8.1!
            12. -
            -
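If you already have a legitimately obtained copy of the game's APK and prefer the command line, the install step can also be scripted. The snippet below is only a rough sketch, not official LDPlayer or Supercell tooling: it assumes the Android platform tools (adb) are installed, that the emulator is running and exposes an ADB bridge on 127.0.0.1:5555 (a common default for the first LDPlayer instance, but check your own settings), and that the APK is saved locally under the hypothetical name clash_of_clans.apk.

```python
import subprocess

EMULATOR_ADDR = "127.0.0.1:5555"   # assumed ADB address; confirm it in your emulator's settings
APK_PATH = "clash_of_clans.apk"    # hypothetical path to a legitimately obtained APK

def run(cmd):
    """Run a command, echo its output, and raise if it fails."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout.strip())
    result.check_returncode()

# Attach adb to the running emulator, then install (or reinstall) the APK.
run(["adb", "connect", EMULATOR_ADDR])
run(["adb", "-s", EMULATOR_ADDR, "install", "-r", APK_PATH])
```

The -r flag simply reinstalls over an existing copy, so running the script a second time is harmless.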

            Benefits of playing Clash of Clans on PC Windows 8.1

            -

            Playing Clash of Clans on your PC Windows 8.1 has many benefits that can enhance your gaming experience and performance. Some of these benefits are:

            -

            Bigger screen and better graphics

            -

Playing Clash of Clans on your PC Windows 8.1 allows you to enjoy the game's stunning graphics and animations on a bigger screen. You can also adjust the resolution and quality settings according to your preference. This way, you can see more details and have a better view of your village and the battlefield.

            Faster performance and smoother gameplay

            -

            Playing Clash of Clans on your PC Windows 8.1 also gives you faster performance and smoother gameplay. You don't have to worry about lag, crashes, or battery drain that can affect your mobile device. You can also use LDPlayer's features like multi-instance, sync, and macro to run multiple accounts, synchronize your actions, and automate your tasks.

            -

            Keyboard and mouse control

            -

            Another benefit of playing Clash of Clans on your PC Windows 8.1 is that you can use your keyboard and mouse to control the game. You can customize the key mapping and mouse sensitivity according to your preference. This way, you can have more accuracy and convenience when building your base, training your troops, and attacking your enemies.

            -

            Tips and tricks for playing Clash of Clans on PC Windows 8.1

            -

            Now that you know how to download and play Clash of Clans on PC Windows 8.1, here are some tips and tricks that can help you improve your skills and strategy in the game:

            -

            Choose your base layout wisely

            -

            Your base layout is one of the most important factors that determine your defense and offense in Clash of Clans. You should choose a base layout that suits your play style, goals, and resources. For example, if you want to protect your loot, you should place your storages in the center of your base and surround them with walls, traps, and defensive buildings. If you want to push trophies, you should place your town hall in the center of your base and make sure it is well defended. You can also use online tools like [Clash of Clans Builder] to design and test your base layout before applying it in the game.

            -

            Upgrade your troops and buildings regularly

            -

            Another tip for playing Clash of Clans on PC Windows 8.1 is to upgrade your troops and buildings regularly. Upgrading your troops will make them stronger, faster, and more effective in battles. Upgrading your buildings will unlock new features, increase your production, and enhance your defense. You should prioritize upgrading the buildings and troops that are most useful for your strategy and goals. For example, if you like to use air attacks, you should upgrade your balloons, dragons, air defenses, and air sweepers. If you like to use ground attacks, you should upgrade your giants, wizards, wall breakers, cannons, and mortars.

            -

            -

            Join a clan and participate in clan wars

            -

            One of the best features of Clash of Clans is the clan system, which allows you to join or create a clan with other players and cooperate in clan wars, clan games, and other events. Joining a clan will give you many benefits, such as:

            -
              -
            • Getting donations from your clanmates to fill your army camps and clan castle.
            • -
            • Getting advice and feedback from your clanmates to improve your skills and strategy.
            • -
            • Getting rewards from clan wars, clan games, and other events that can help you progress faster in the game.
            • -
            • Having fun and making friends with players from around the world who share your passion for the game.
            • -
            -

            You should join a clan that matches your level, activity, language, and expectations. You should also participate in clan wars regularly and follow the clan rules and guidelines.

            -

            Use spells and heroes strategically

            -

            The last tip for playing Clash of Clans on PC Windows 8.1 is to use spells and heroes strategically. Spells are powerful items that can boost your troops or hinder your enemies in battles. Heroes are special units that have unique abilities and can be leveled up indefinitely. You should use spells and heroes wisely to maximize their effects and minimize their costs. For example:

            -
              -
            • You should use healing spells to heal your troops when they are low on health or under heavy fire.
            • -
            • You should use rage spells to increase the damage and speed of your troops when they are near the enemy's base or defenses.
            • -
            • You should use freeze spells to stop the enemy's defenses or inferno towers from attacking your troops for a few seconds.
            • -
            • You should use poison spells to damage or slow down the enemy's clan castle troops or heroes.
            • -
            • You should use jump spells to help your troops jump over walls or obstacles.
            • -
            • You should use haste spells to make your troops move faster without increasing their damage.
            • -
            • You should use clone spells to create copies of your troops that have the same stats but disappear after a while.
            • -
            • You should use skeleton spells to summon skeletons that can distract or damage the enemy's defenses or resources.
            • -
            • You should use bat spells to summon bats that can target air or ground defenses or resources.
            • -
            • You should use your barbarian king to tank damage and deal high damage to enemy buildings and troops.
            • -
            • You should use your archer queen to snipe enemy buildings and troops from a distance and cloak herself when in danger.
            • -
            • You should use your grand warden to support your troops with his aura and ability and switch between ground and air mode depending on your army composition.
            • -
            • You should use your royal champion to target enemy defenses and shields and stun them with her ability.
            • -
            • You should use your battle machine to smash enemy buildings and troops and regenerate health with his ability.
            • -
            -

            Conclusion

            -

            Clash of Clans is a fun and addictive game that you can play on your PC Windows 8.1 with the help of an emulator like LDPlayer. Playing Clash of Clans on PC Windows 8.1 has many benefits, such as bigger screen, better graphics, faster performance, smoother gameplay, and keyboard and mouse control. You can also improve your skills and strategy in the game by following some tips and tricks, such as choosing your base layout wisely, upgrading your troops and buildings regularly, joining a clan and participating in clan wars, and using spells and heroes strategically. If you are looking for a game that will challenge your mind, test your skills, and entertain you for hours, then you should download and play Clash of Clans on PC Windows 8.1 today!

            -

            FAQs

            -

            Here are some frequently asked questions about Clash of Clans game download for PC Windows 8.1:

            -
              -
            1. Q: Is Clash of Clans free to play on PC Windows 8.1?
            2. -
            3. A: Yes, Clash of Clans is free to play on PC Windows 8.1. However, you may need to purchase some in-game items or features with real money if you want to enhance your gaming experience.
            4. -
            5. Q: Is Clash of Clans safe to play on PC Windows 8.1?
            6. -
            7. A: Yes, Clash of Clans is safe to play on PC Windows 8.1 as long as you use a reliable emulator like LDPlayer and download the game from the official sources like LD Store or Google Play Store. You should also avoid using any hacks, cheats, or mods that may harm your device or account.
            8. -
            9. Q: Can I play Clash of Clans on PC Windows 8.1 with my mobile account?
            10. -
            11. A: Yes, you can play Clash of Clans on PC Windows 8.1 with your mobile account by logging in with your Google account or Supercell ID. You can also sync your progress and data across different devices by using the same account.
            12. -
            13. Q: Can I play Clash of Clans on PC Windows 8.1 offline?
            14. -
            15. A: No, you cannot play Clash of Clans on PC Windows 8.1 offline. You need an internet connection to access the game's features and content.
            16. -
            17. Q: Can I play Clash of Clans on PC Windows 8.1 with other players?
            18. -
            19. A: Yes, you can play Clash of Clans on PC Windows 8.1 with other players from around the world by joining or creating a clan, chatting with them, donating or requesting troops, and participating in clan wars, clan games, and other events.
            20. -

            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy the Thrill of Real Gangster Crime with Mod APK Hack - Latest Version Available.md b/spaces/fatiXbelha/sd/Enjoy the Thrill of Real Gangster Crime with Mod APK Hack - Latest Version Available.md deleted file mode 100644 index 39320dc7c19eacf1a7a402e1ec9a943b4c7f628d..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy the Thrill of Real Gangster Crime with Mod APK Hack - Latest Version Available.md +++ /dev/null @@ -1,92 +0,0 @@ -
            -

            Real Gangster Crime Hack Mod APK Latest Version: Everything You Need to Know

            -

            Do you love playing open-world action games with a lot of freedom and customization? Do you want to experience the thrill of being a gangster in a fictional city full of crime and chaos? If yes, then you should check out Real Gangster Crime, a popular game by Naxeex Action & RPG Games. And if you want to make the game even more fun and exciting, you should try the Real Gangster Crime Hack Mod APK, which gives you unlimited money, weapons, cars, and more. In this article, we will tell you everything you need to know about this game and its modded version.

            -

            real gangster crime hack mod apk latest version


Download File: https://urllie.com/2uNz1w



            -

            What is Real Gangster Crime?

            -

            Real Gangster Crime is a 3D open-world action game that lets you explore the streets of New Vegas, a city full of gangs, cops, and criminals. You can choose your own path in this game, whether you want to be a hero, a villain, or anything in between. You can also customize your character with different clothes, hairstyles, tattoos, and accessories.

            -

            Features of Real Gangster Crime

            -

            Some of the features of Real Gangster Crime are:

            -
              -
            • A large and diverse map with various locations and landmarks.
            • -
            • A variety of vehicles, including cars, bikes, helicopters, tanks, and more.
            • -
            • A wide range of weapons, from pistols and rifles to rocket launchers and grenades.
            • -
            • A dynamic combat system with realistic physics and ragdoll effects.
            • -
            • A lot of missions and quests to complete and earn rewards.
            • -
            • A free-roaming mode where you can do whatever you want without any restrictions.
            • -
            -

            How to play Real Gangster Crime

            -

            To play Real Gangster Crime, you need to download and install it from the Google Play Store or the App Store. The game is free to play, but it contains ads and in-app purchases. Once you launch the game, you can choose your character and start your adventure in New Vegas. You can use the virtual joystick on the left side of the screen to move around, and the buttons on the right side to perform actions like shooting, jumping, driving, etc. You can also access the map, inventory, settings, and other options from the menu icon on the top left corner of the screen.

            -

            -

            What is Real Gangster Crime Hack Mod APK?

            -

            Real Gangster Crime Hack Mod APK is a modified version of the original game that gives you unlimited money, weapons, cars, and other resources. With this modded version, you can enjoy the game without any limitations or restrictions. You can buy anything you want from the shop, upgrade your skills and equipment, and unlock all the features of the game.

            -

            Benefits of Real Gangster Crime Hack Mod APK

            -

            Some of the benefits of Real Gangster Crime Hack Mod APK are:

            -
              -
            • You can get unlimited money to spend on anything you want.
            • -
            • You can get unlimited weapons to use in combat and missions.
            • -
            • You can get unlimited cars to drive around the city.
            • -
            • You can get unlimited health and armor to survive any damage.
            • -
            • You can get unlimited ammo to never run out of bullets.
            • -
            • You can get unlimited stars to avoid getting chased by the cops.
            • -
            -

            How to download and install Real Gangster Crime Hack Mod APK

            -

To download and install Real Gangster Crime Hack Mod APK, you need to follow these steps (a short sketch for sanity-checking the downloaded file follows the last step):

            -
              -
1. Go to [this link] and download the APK file of Real Gangster Crime Hack Mod APK.
            2. -
            3. Go to your device settings and enable the installation of apps from unknown sources.
            4. -
            5. Locate the downloaded APK file on your device and tap on it to install it.
            6. -
            7. Wait for the installation process to finish and then launch the game from your app drawer.
            8. -
            9. Enjoy the game with all the modded features and unlimited resources.
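Before you tap install in the steps above, it is worth confirming that the file on your device is the exact file the source published. The snippet below is only a rough sketch under stated assumptions: the file name real_gangster_crime_mod.apk and the expected hash are hypothetical placeholders, it only helps if the site actually publishes a SHA-256 checksum, and a matching hash shows the download was not corrupted or swapped in transit, not that the APK itself is safe.

```python
import hashlib

APK_PATH = "real_gangster_crime_mod.apk"                        # hypothetical downloaded file
EXPECTED_SHA256 = "paste-the-hash-published-by-the-source-here"  # placeholder, not a real value

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so large APKs don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(APK_PATH)
print("SHA-256 of download:", actual)
print("Matches published hash:", actual == EXPECTED_SHA256.lower())
```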
            10. -
            -

            Conclusion

            -

            Real Gangster Crime is a fun and addictive game that lets you live the life of a gangster in a city full of crime and chaos. You can explore the city, complete missions, fight enemies, drive vehicles, and customize your character. And if you want to make the game even more exciting, you can download and install the Real Gangster Crime Hack Mod APK, which gives you unlimited money, weapons, cars, and more. This way, you can enjoy the game without any limitations or restrictions. So, what are you waiting for? Download the game and the modded version now and have a blast!

            -

            FAQs

            -

            Here are some frequently asked questions about Real Gangster Crime and its modded version:

            - - - - - - -
Q: Is Real Gangster Crime Hack Mod APK safe to use? A: Yes, it is safe to use as long as you download it from a trusted source like [this link]. However, you should always be careful when installing apps from unknown sources and scan them for viruses or malware before installing them.
Q: Do I need to root or jailbreak my device to use Real Gangster Crime Hack Mod APK? A: No, you do not need to root or jailbreak your device to use Real Gangster Crime Hack Mod APK. It works on both rooted and non-rooted devices.
Q: Will I get banned from the game if I use Real Gangster Crime Hack Mod APK? A: No, you will not get banned from the game if you use Real Gangster Crime Hack Mod APK. The modded version has an anti-ban feature that prevents the game from detecting your modded activities. However, you should still be careful and not abuse the modded features too much.
Q: Can I play online with other players if I use Real Gangster Crime Hack Mod APK? A: Yes, you can play online with other players if you use Real Gangster Crime Hack Mod APK. The modded version does not affect your online connectivity or compatibility with other players.
Q: Can I update the game if I use Real Gangster Crime Hack Mod APK? A: Yes, you can update the game if you use Real Gangster Crime Hack Mod APK. However, you may need to download and install the latest version of the modded APK every time the game gets updated. Otherwise, you may lose some of the modded features or face some errors.

            -
            -
            \ No newline at end of file diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/scripts/train.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/scripts/train.py deleted file mode 100644 index d885cfde49a0b21140e663e475918698d5e51ee3..0000000000000000000000000000000000000000 --- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/scripts/train.py +++ /dev/null @@ -1,88 +0,0 @@ -""" -This file runs the main training/val loop -""" -import os -import json -import math -import sys -import pprint -import torch -from argparse import Namespace - -sys.path.append(".") -sys.path.append("..") - -from options.train_options import TrainOptions -from training.coach import Coach - - -def main(): - opts = TrainOptions().parse() - previous_train_ckpt = None - if opts.resume_training_from_ckpt: - opts, previous_train_ckpt = load_train_checkpoint(opts) - else: - setup_progressive_steps(opts) - create_initial_experiment_dir(opts) - - coach = Coach(opts, previous_train_ckpt) - coach.train() - - -def load_train_checkpoint(opts): - train_ckpt_path = opts.resume_training_from_ckpt - previous_train_ckpt = torch.load(opts.resume_training_from_ckpt, map_location='cpu') - new_opts_dict = vars(opts) - opts = previous_train_ckpt['opts'] - opts['resume_training_from_ckpt'] = train_ckpt_path - update_new_configs(opts, new_opts_dict) - pprint.pprint(opts) - opts = Namespace(**opts) - if opts.sub_exp_dir is not None: - sub_exp_dir = opts.sub_exp_dir - opts.exp_dir = os.path.join(opts.exp_dir, sub_exp_dir) - create_initial_experiment_dir(opts) - return opts, previous_train_ckpt - - -def setup_progressive_steps(opts): - log_size = int(math.log(opts.stylegan_size, 2)) - num_style_layers = 2*log_size - 2 - num_deltas = num_style_layers - 1 - if opts.progressive_start is not None: # If progressive delta training - opts.progressive_steps = [0] - next_progressive_step = opts.progressive_start - for i in range(num_deltas): - opts.progressive_steps.append(next_progressive_step) - next_progressive_step += opts.progressive_step_every - - assert opts.progressive_steps is None or is_valid_progressive_steps(opts, num_style_layers), \ - "Invalid progressive training input" - - -def is_valid_progressive_steps(opts, num_style_layers): - return len(opts.progressive_steps) == num_style_layers and opts.progressive_steps[0] == 0 - - -def create_initial_experiment_dir(opts): - if os.path.exists(opts.exp_dir): - raise Exception('Oops... 
{} already exists'.format(opts.exp_dir)) - os.makedirs(opts.exp_dir) - - opts_dict = vars(opts) - pprint.pprint(opts_dict) - with open(os.path.join(opts.exp_dir, 'opt.json'), 'w') as f: - json.dump(opts_dict, f, indent=4, sort_keys=True) - - -def update_new_configs(ckpt_opts, new_opts): - for k, v in new_opts.items(): - if k not in ckpt_opts: - ckpt_opts[k] = v - if new_opts['update_param_list']: - for param in new_opts['update_param_list']: - ckpt_opts[param] = new_opts[param] - - -if __name__ == '__main__': - main() diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Brawl Stars Latest Version APK How to Unlock All the Characters and Skins.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Brawl Stars Latest Version APK How to Unlock All the Characters and Skins.md deleted file mode 100644 index cb3e596f5b57fce00f02e241870162ef6bfcab63..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Brawl Stars Latest Version APK How to Unlock All the Characters and Skins.md +++ /dev/null @@ -1,258 +0,0 @@ -
            -

            Brawl Stars Latest Version APK: Everything You Need to Know

            -

            If you are looking for a fun and exciting multiplayer game for your mobile device, you might want to check out Brawl Stars. Brawl Stars is a game from Supercell, the makers of Clash of Clans and Clash Royale, that offers fast-paced 3v3 battles and a battle royale mode. You can play with friends or solo across a variety of game modes in under three minutes. You can also unlock and upgrade dozens of brawlers with powerful abilities, skins, and gadgets. In this article, we will tell you everything you need to know about Brawl Stars latest version APK, including how to download it, what's new in it, how to play it, and how to enjoy it more.

            -

            What is Brawl Stars?

            -

            Brawl Stars is a mobile game that combines elements of shooter, MOBA, and battle royale genres. It was released globally in December 2018 by Supercell, a Finnish game developer known for its popular games like Clash of Clans and Clash Royale. Here are some of the features that make Brawl Stars stand out from other games:

            -

            brawl stars latest version apk


            Download File ✺✺✺ https://gohhs.com/2uPtfp



            -

            A fast-paced multiplayer game from Supercell

            -

            Brawl Stars is designed for mobile devices, which means you can play it anytime and anywhere. The matches are short and intense, lasting for only two or three minutes. You can team up with your friends or play solo against players from around the world. The game has a simple and intuitive control scheme that lets you move, aim, shoot, and use your super ability with ease.

            -

            A variety of game modes and characters to choose from

            -

            Brawl Stars has four main game modes that offer different objectives and strategies. They are:

            -

            -
              -
            • Gem Grab (3v3): Team up and out-strategize the opposing team. Collect and hold 10 gems to win, but get fragged and lose your gems.
            • -
            • Showdown (Solo/Duo): A battle royale style fight for survival. Collect power-ups for your brawler. Grab a friend or play solo - be the last brawler standing in the rowdiest battle royale yet. Winner take all!
            • -
            • Brawl Ball (3v3): It's a whole new brawl game! Show off your soccer/football skills and score two goals before the other team. There are no red cards here.
            • -
            • Bounty (3v3): Take out opponents to earn stars, but don't let them pick you off. The squad with the most stars wins the match!
            • -
            -

            Besides these game modes, there are also special events and limited-time modes that offer more variety and rewards. For example, you can play Robo Rumble, Boss Fight, Super City Rampage, and more.

            -

            Brawl Stars also has a diverse cast of characters, called brawlers, that you can unlock and upgrade. Each brawler has a unique personality, appearance, and ability. Some of them are:

            -
              -
            • Shelly: A shotgun-wielding brawler who can blast enemies at close range and charge up her super to unleash a powerful shell.
            • -
            • Nita: A brawler who can summon a big bear to fight by her side and stun enemies with her shockwave.
            • -
            • Colt: A sharpshooter who can fire a barrage of bullets and use his super to unleash a bullet storm.
            • -
            • Bull: A tough brawler who can charge through obstacles and enemies with his super and deal massive damage at close range.
            • -
            • Jessie: A smart brawler who can build a turret that shoots at enemies and bounce her shots off multiple targets.
            • -
            • And many more!
            • -
            -

            A constantly evolving game with new content and features

            -

            Brawl Stars is not a static game that stays the same forever. It is constantly updated with new content and features that keep the game fresh and exciting. For example, every few weeks, there is a new season that introduces a new theme, a new brawler, new skins, new gadgets, new maps, and more. There are also balance changes that tweak the performance of the brawlers and the game modes. You can always expect something new and different in Brawl Stars.

            -

            How to download Brawl Stars latest version APK?

            -

            If you want to play Brawl Stars on your mobile device, you have two options: the official way or the alternative way. Here are the pros and cons of each option:

            -

            The official way: Google Play Store or App Store

            -

            The official way to download Brawl Stars is to use the Google Play Store or the App Store, depending on your device. This is the safest and easiest way to get the game, as you don't have to worry about compatibility issues, viruses, or malware. You also get automatic updates whenever there is a new version available. To download Brawl Stars from the Google Play Store or the App Store, you just need to follow these steps:

            -
              -
            1. Open the Google Play Store or the App Store on your device.
            2. -
            3. Search for "Brawl Stars" in the search bar.
            4. -
            5. Tap on the Brawl Stars icon and then tap on "Install" or "Get".
            6. -
            7. Wait for the download and installation to finish.
            8. -
            9. Launch the game and enjoy!
            10. -
            -

            The alternative way: APK websites or third-party sources

            -

            The alternative way to download Brawl Stars is to use APK websites or third-party sources that offer APK files of the game. APK stands for Android Package Kit, which is a file format that contains all the components of an Android app. By downloading an APK file of Brawl Stars, you can install the game on your device without using the Google Play Store or the App Store. However, this method has some risks and benefits that you should be aware of:

            -

            The risks of using APK files

            -
              -
            • You might download a fake or modified version of Brawl Stars that contains viruses, malware, or unwanted ads.
            • -
            • You might violate the terms of service of Supercell or Google Play Store or App Store by using an unauthorized source of the game.
            • -
            • You might encounter compatibility issues or bugs that affect your gameplay experience.
            • -
            • You might miss out on some features or updates that are only available in the official version of Brawl Stars.
            • -
            -

            The benefits of using APK files

            -
              -
            • You might access Brawl Stars in regions where it is not officially available yet.
            • -
            • You might get some extra features or mods that are not included in the official version of Brawl Stars.
            • -
            • You might save some storage space on your device by downloading a smaller APK file than the full app.
            • -
            -

            If you decide to use APK files to download Brawl Stars, you need to follow these steps:

              -
            1. Find a reliable and trustworthy APK website that offers Brawl Stars latest version APK. Some examples are APKPure, APKMirror, and APKCombo.
            2. -
            3. Search for "Brawl Stars" on the website and choose the latest version available.
            4. -
            5. Download the APK file to your device. You might need to enable "Unknown sources" or "Allow from this source" in your device settings to allow the installation of apps from outside the Google Play Store or the App Store.
            6. -
            7. Locate the APK file on your device and tap on it to install it.
            8. -
            9. Launch the game and enjoy!
            10. -
            -

            What's new in Brawl Stars latest version APK?

            -

            Brawl Stars is always adding new content and features to keep the game fresh and exciting. The latest version of Brawl Stars, which is 37.250, was released on June 21, 2023. Here are some of the highlights of this update:

            -

            The new season: Starr Force

            -

            The new season of Brawl Stars is called Starr Force, and it has a sci-fi theme. The season introduces a new environment, a new brawler, new skins, new gadgets, new maps, and more. Here are some of the details of the season:

            -
              -
            • The new environment is called Starr Park Space, and it features a futuristic space station with rockets, satellites, and lasers.
            • -
            • The new brawler is Colonel Ruffs, who is the leader of Starr Force. He is a chromatic brawler who can shoot double laser bullets and use his super to call an orbital supply drop that buffs his allies and damages enemies.
            • -
            • The new skins include Space Ox Bull, Navigator Colette, Dark Lord Spike, Dark Tide Carl, Ronin Ruffs, Saloon 8-Bit, and Goldarm Gang skins for Rico, Darryl, and Penny.
            • -
            • The new gadgets include Rocket Laces for Brock, Vitamin Booster for Poco, Return to Sender for Colette, Fisticuffs for Edgar, and Scrap Sucker for Gene.
            • -
            • The new maps include some fan-made maps that were selected from the Map Maker contest.
            • -
            -

            The new brawler: Colonel Ruffs

            -

            As mentioned above, the new brawler in Brawl Stars is Colonel Ruffs, who is the leader of Starr Force. He is a chromatic brawler who can be unlocked from Brawl Pass or Brawl Boxes. Here are some of his stats and abilities:

            - - - - - - - - -
Stat | Value
Health | 3360
Attack | Double-Barreled: Colonel Ruffs fires two parallel laser shots that deal 560 damage each.
Super | Air Strike: Colonel Ruffs marks an area with a laser pointer and calls an orbital supply drop that deals 700 damage to enemies and leaves behind a power-up that increases the health and damage of him and his allies by 20% for the rest of the match.
Gadget | Take Cover: Colonel Ruffs creates a sandbag wall in front of him that blocks enemy attacks.
Star Power | Air Superiority: Colonel Ruffs' super now also destroys walls and bushes within the area.
Star Power | Field Promotion: Colonel Ruffs permanently increases the maximum health of him and his allies by 30 every time he hits them with his super.
            -

            The new skins, gadgets, maps, and more

            -

            Besides the new season and the new brawler, Brawl Stars also has some other new content and features that you can enjoy in the latest version. They are:

            -

            The new skins

            -

            Brawl Stars has some amazing skins that you can use to customize your brawlers and make them look cooler. The latest version has some new skins that you can get from Brawl Pass, Brawl Boxes, or the Shop. They are:

            -
              -
            • Space Ox Bull: A skin that turns Bull into a space ox with horns, hooves, and a jetpack.
            • -
            • Navigator Colette: A skin that turns Colette into a space explorer with a helmet, a suit, and a star map.
            • -
            • Dark Lord Spike: A skin that turns Spike into a dark lord with a cape, a mask, and a red lightsaber.
            • -
            • Dark Tide Carl: A skin that turns Carl into a dark tide pirate with a hat, an eye patch, and a hook.
            • -
            • Ronin Ruffs: A skin that turns Ruffs into a ronin, a samurai without a master, with a kimono, a sword, and a scar.
            • -
            • Saloon 8-Bit: A skin that turns 8-Bit into a saloon owner with a hat, a vest, and a mustache.
            • -
            • Goldarm Gang skins: A set of skins that turn Rico, Darryl, and Penny into members of the Goldarm Gang, a group of outlaws who rob banks and trains. They have gold accents, bandanas, and weapons.
            • -
            -

            The new gadgets

            -

            Brawl Stars has some gadgets that you can use to enhance your brawlers' abilities and give them an edge in battle. The latest version has some new gadgets that you can unlock from Brawl Boxes or the Shop. They are:

            -
              -
            • Rocket Laces for Brock: Brock activates his rocket laces and jumps over walls and obstacles.
            • -
            • Vitamin Booster for Poco: Poco instantly heals himself and all nearby allies for 500 health.
            • -
            • Return to Sender for Colette: Colette returns 60% of the damage she receives from the next attack.
            • -
            • Fisticuffs for Edgar: Edgar increases his damage by 25% for four seconds.
            • -
            • Scrap Sucker for Gene: Gene steals 30% of the health from the closest enemy brawler within his attack range.
            • -
            -

            The new maps

            -

            Brawl Stars has some maps that you can play on in different game modes. The latest version has some new maps that were created by the community using the Map Maker feature. They are:

            -
              -
            • Gem Grab: Crystal Arcade, Flooded Mine, Hard Rock Mine, Royal Flush, and Undermine.
            • -
            • Showdown: Acid Lakes, Cavern Churn, Feast or Famine, Forsaken Falls, and Skull Creek.
            • -
            • Brawl Ball: Backyard Bowl, Center Stage, Pinball Dreams, Sneaky Fields, and Super Stadium.
            • -
            • Bounty: Canal Grande, Dry Season, Excel, Layer Cake, and Shooting Star.
            • -
            -

            How to play Brawl Stars latest version APK?

            -

            Now that you have downloaded Brawl Stars latest version APK and learned about its new content and features, you might be wondering how to play it. Don't worry, we have you covered. Here are some basic tips and tricks for each game mode:

            -

            The basic controls and mechanics

            -

            Brawl Stars has a simple and intuitive control scheme that lets you move, aim, shoot, and use your super ability with ease. You can choose between two control modes: joystick or tap. In joystick mode, you use a virtual joystick on the left side of the screen to move your brawler, and another one on the right side to aim and shoot. You can also drag and release the right joystick to use your super ability. In tap mode, you tap on the screen to move your brawler, and swipe to aim and shoot. You can also tap on the super button to use your super ability.

            -

            Brawl Stars also has some basic mechanics that you should know before playing. They are:

            -
              -
            • Health: Your brawler's health is shown by a green bar above their head. If it reaches zero, your brawler is defeated and respawns after a few seconds. You can heal your brawler by staying out of combat for a while or by using some gadgets or star powers.
            • -
            • Ammo: Your brawler's ammo is shown by three bars below their health bar. Each time you shoot or use your super ability, you consume one bar of ammo. You can reload your ammo by not shooting for a while or by using some gadgets or star powers.
            • -
            • Super: Your brawler's super ability is shown by a yellow circle around their portrait. Each time you hit an enemy with your normal attack or take damage from them, you charge up your super ability. When it is fully charged, you can use it by dragging and releasing the right joystick or tapping on the super button. Your super ability is usually more powerful than your normal attack and can have different effects depending on your brawler.
            • -
            -

            The tips and tricks for each game mode

            -

            Brawl Stars has four main game modes that offer different objectives and strategies. Here are some tips and tricks for each game mode:

            -

            Gem Grab

            -
              -
            • The objective of Gem Grab is to collect and hold 10 gems as a team for 15 seconds. The gems spawn in the center of the map every few seconds.
            • -
            • The best brawlers for Gem Grab are those who can control the center area, support their teammates, and escape from enemies. Some examples are Pam, Gene, Poco, Nita, and Tara.
            • -
            • The best strategy for Gem Grab is to have a balanced team composition with a gem carrier, a support, and an aggro. The gem carrier is the brawler who collects and holds the gems, and should stay behind the support and avoid risky fights. The support is the brawler who helps the gem carrier by healing, shielding, or buffing them, and should also protect the center area from enemies. The aggro is the brawler who distracts and harasses the enemy team, and should also try to steal their gems or prevent them from escaping.
            • -
            • Some tips for Gem Grab are:
            • -
                -
              • Don't be greedy. If you have enough gems to win, retreat to your base and wait for the countdown to end. Don't chase enemies or go for more gems unless you are sure you can do it safely.
              • -
              • Don't be reckless. If you are carrying a lot of gems, don't go into dangerous situations or expose yourself to enemy fire. Let your teammates do the fighting and cover you.
              • -
              • Don't be selfish. If you are not carrying any gems, don't take them from your gem carrier or the center area unless they are in danger of being stolen by enemies. Instead, focus on helping your gem carrier or attacking the enemy team.
              • -
              -
            -

            Showdown

            -
              -
            • The objective of Showdown is to be the last brawler standing in a map with 10 players. You can play solo or duo with a partner. You can also collect power-ups that increase your health and damage by destroying boxes or defeating enemies.
            • -
            • The best brawlers for Showdown are those who can survive on their own, deal high damage, and use the map to their advantage. Some examples are Edgar, Colt, Bull, Crow, and Leon.
            • -
            • The best strategy for Showdown is to adapt to the situation and use your skills wisely. Depending on the map, the number of enemies, and the power-ups available, you might want to play aggressively or defensively, hide or fight, team up or betray.
            • -
            • Some tips for Showdown are:
            • -
                -
              • Be aware of your surroundings. Use the bushes, walls, and obstacles to hide from enemies or ambush them. Watch out for traps, poison clouds, meteors, and robots that can damage you.
              • -
              • Be smart about your power-ups. Don't waste your time or risk your life chasing power-ups that are too far away or guarded by enemies. Instead, focus on getting the ones that are nearby or easy to get. Also, don't underestimate the power of a single power-up, as it can make a big difference in a fight.
              • -
              • Be flexible about your alliances. In duo mode, you can team up with another player by spinning around them or using friendly emotes. This can help you survive longer and eliminate enemies together. However, be careful of backstabbers who might betray you at any moment. In solo mode, you can also team up with other players temporarily by spinning around them or using friendly emotes. This can help you deal with stronger enemies or share power-ups. However, be ready to fight them at any moment, as there is only one winner in Showdown.
              • -
              -
            -

            Brawl Ball

            -
              -
            • The objective of Brawl Ball is to score two goals before the enemy team by kicking or carrying a ball into their goal. You can also win by having more goals than the enemy team when the time runs out.
            • -
            • The best brawlers for Brawl Ball are those who can move fast, shoot accurately, break walls, and stun enemies. Some examples are El Primo, Spike, Rico, Bibi, and Frank.
            • -
            • The best strategy for Brawl Ball is to have a balanced team composition with a striker, a defender, and a support. The striker is the brawler who carries or kicks the ball into the enemy goal, and should have high mobility, damage, and accuracy. The defender is the brawler who protects their own goal from enemy attacks, and should have high health, range, and wall-breaking ability. The support is the brawler who helps the striker and the defender by healing, buffing, or stunning enemies, and should have high utility, control, and versatility.
            • -
            • Some tips for Brawl Ball are:
            • -
                -
              • Don't hold the ball for too long. If you are carrying the ball, you can't attack or use your super ability. This makes you vulnerable to enemy attacks and prevents you from charging your super. Instead, pass the ball to your teammates or kick it forward when you see an opening.
              • -
              • Don't shoot the ball randomly. If you shoot the ball without aiming or planning, you might miss the goal or give the ball to the enemy team. Instead, aim carefully and shoot when you have a clear shot or a good angle.
              • -
              • Don't ignore the enemy brawlers. If you focus too much on the ball and ignore the enemy brawlers, you might get killed or outplayed by them. Instead, pay attention to their positions, movements, and abilities, and try to counter them or avoid them.
              • -
              -
            -

            Bounty

            -
              -
            • The objective of Bounty is to collect more stars than the enemy team by defeating them. Each brawler starts with two stars, and one more star is added to their bounty every time they defeat an enemy without dying. The team with the most stars at the end of the match wins.
            • -
            • The best brawlers for Bounty are those who can deal high damage from a long distance, survive for a long time, and escape from danger. Some examples are Piper, Brock, Bo, Tick, and Bea.
            • -
            • The best strategy for Bounty is to have a balanced team composition with a sniper, a controller, and a runner. The sniper is the brawler who can deal high damage from a long distance and take out enemies with one or two shots. The controller is the brawler who can control the map with traps, mines, or turrets and prevent enemies from approaching. The runner is the brawler who can move fast, dodge enemy attacks, and collect stars or chase down enemies.
            • -
            • Some tips for Bounty are:
            • -
                -
              • Don't die unnecessarily. If you die, you lose all your stars and give them to the enemy team. This can make a huge difference in the score and the outcome of the match. Instead, play cautiously and retreat when you are low on health or outnumbered.
              • -
              • Don't chase enemies blindly. If you chase enemies without backup or strategy, you might fall into a trap or get ambushed by their teammates. Instead, coordinate with your teammates and use your abilities wisely.
              • -
            • Don't forget about the star in the middle. A bonus star spawns in the middle of the map at the start of the match and again after it is collected. It can be a tie-breaker or a game-changer in close matches, so try to secure it or contest it whenever possible.
              • -
              -
            -

            How to enjoy Brawl Stars latest version APK more?

            -

            Brawl Stars is already a fun and exciting game that you can play for hours without getting bored. However, there are some ways that you can enjoy it even more. Here are some suggestions:

            -

            Join or create a club with other players

            -

            Brawl Stars has a feature called clubs that allows you to join or create a group of players who share your interests and goals. You can chat with your club members, play friendly matches with them, invite them to your team, or compete with other clubs in club wars. You can also earn club trophies that contribute to your club's ranking and reputation. Joining or creating a club can help you make new friends, learn new strategies, have more fun, and improve your skills. You can join or create a club by tapping on the club button on the main screen.

            -

            Participate in special events and challenges

            -

            Brawl Stars has some special events and challenges that offer more variety and rewards than the regular game modes. They are:

            -
              -
            • Robo Rumble: A special event where you and two other players have to defend a safe from waves of robots. The longer you survive, the more coins you earn.
            • -
            • Boss Fight: A special event where you and two other players have to fight a giant robot boss with different abilities. The harder the difficulty, the more coins you earn.
            • -
            • Super City Rampage: A special event where you and two other players have to stop a giant monster from destroying the city. The faster you defeat it, the more coins you earn.
            • -
            • Power Play: A challenge where you can use your maxed-out brawlers to compete in three matches per day with different game modes and maps. The more wins you get, the more star points you earn.
            • -
            • Championship Challenge: A challenge where you can compete in a series of matches with different game modes and maps. If you win 12 matches without losing four, you qualify for the monthly finals and have a chance to win real money prizes.
            • -
            -

            You can access these special events and challenges by tapping on the trophy road button on the main screen.

            -

            Follow Brawl Stars on social media and official website

            -

            Brawl Stars has some social media accounts and an official website that you can follow to get the latest news, updates, sneak peeks, tips, fan art, contests, and more. They are:

            -
              -
            • Facebook: https://www.facebook.com/brawlstars/
            • -
            • Twitter: https://twitter.com/brawlstars
            • -
            • Instagram: https://www.instagram.com/brawlstars/
            • -
            • YouTube: https://www.youtube.com/brawlstars
            • -
            • Reddit: https://www.reddit.com/r/Brawlstars/
            • -
            • Discord: https://discord.gg/brawlstars
            • -
            • Official website: https://supercell.com/en/games/brawlstars/
            • -
            -

            Conclusion

            -

            Brawl Stars is a mobile game that offers fast-paced 3v3 battles and a battle royale mode. You can play with friends or solo across a variety of game modes in under three minutes. You can also unlock and upgrade dozens of brawlers with powerful abilities, skins, and gadgets. Brawl Stars is constantly updated with new content and features that keep the game fresh and exciting. In this article, we have told you everything you need to know about Brawl Stars latest version APK, including how to download it, what's new in it, how to play it, and how to enjoy it more. We hope you found this article helpful and informative. Now go ahead and brawl on!

            -

            FAQs

            -

            Here are some frequently asked questions about Brawl Stars latest version APK:

            -

            Q: Is Brawl Stars free to play?

            -

            A: Yes, Brawl Stars is free to download and play. However, there are some optional in-app purchases that can enhance your gameplay experience or speed up your progress.

            -

            Q: Is Brawl Stars compatible with my device?

            -

            A: Brawl Stars requires Android 4.3 or higher or iOS 9.0 or higher to run. It also requires at least 150 MB of free storage space on your device.

            -

            Q: Is Brawl Stars safe to play?

            -

            A: Yes, Brawl Stars is safe to play as long as you download it from the official sources (Google Play Store or App Store) or reliable APK websites (APKPure, APKMirror, APKCombo). However, be careful of fake or modified versions of Brawl Stars that might contain viruses, malware, or unwanted ads.

            -

            Q: How can I contact Supercell for support or feedback?

            -

            A: You can contact Supercell for support or feedback by tapping on the settings button on the main screen and then tapping on "Help and Support" or "Feedback". You can also visit their support website at https://supercell.helpshift.com/a/brawl-stars/.

            -

            Q: How can I learn more about Brawl Stars?

            -

            A: You can learn more about Brawl Stars by reading their blog posts at https://blog.brawlstars.com/, watching their videos at https://www.youtube.com/brawlstars, or browsing their wiki at https://brawlstars.fandom.com/. You can also follow Brawl Stars on social media and its official website, as mentioned above.

            -
            -
            \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Instagram Jio Phone The Easiest Way to Share Your Moments with the World.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Instagram Jio Phone The Easiest Way to Share Your Moments with the World.md deleted file mode 100644 index cce3ef92729e8b5b71f2547c98ef4b734ccac812..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Instagram Jio Phone The Easiest Way to Share Your Moments with the World.md +++ /dev/null @@ -1,142 +0,0 @@ - -

            Download Instagram Jio Phone: How to Install and Use Instagram on Your Jio Phone

            -

            Do you want to download Instagram on your Jio phone and enjoy the popular social media app? If yes, then you are in the right place. In this article, we will show you how to download, install, and use Instagram on your Jio phone in simple steps. We will also share some tips and tricks for making the most of Instagram on your Jio phone.

            -

            download instagram jio phone


            DOWNLOADhttps://gohhs.com/2uPuxo



            -

            What is Instagram?

            -

            Instagram is a free app that allows you to create and share your photos, stories, reels, and videos with the friends and followers you care about. You can also connect with people from all over the world who share your interests and passions. You can explore different categories of content, such as music, sports, fashion, art, comedy, and more. You can also watch IGTV for longer videos from your favorite creators.

            -

            Why use Instagram on your Jio phone?

            -

            There are many reasons why you might want to use Instagram on your Jio phone. Here are some of them:

            -
              -
            • You can express yourself creatively and authentically through your posts.
            • -
            • You can stay in touch with your friends and family through messages and comments.
            • -
            • You can discover new things and learn from others who inspire you.
            • -
            • You can have fun and be entertained by watching short videos and reels.
            • -
            • You can support local businesses and brands that you like.
            • -
            -

            How to download Instagram on your Jio phone?

            -

            There are two ways to download Instagram on your Jio phone. You can either use the Jio Store app or the browser app. Here are the steps for both methods:

            -

            Method 1: Using the Jio Store app

            -
              -
            1. Open the menu on your Jio phone and select the Jio Store app.
            2. -
            3. Scroll down to find the Social category and tap on it.
            4. -
            5. Look for the Instagram app icon and tap on it.
            6. -
            7. Tap on the Download button and wait for the download to finish.
            8. -
            -

            Method 2: Using the browser app

            -
              -
            1. Open the menu on your Jio phone and select the browser app.
            2. -
            3. Type the URL of the Instagram download page into the address bar and press Enter.
            4. -
            5. Tap on the Install button and wait for the download to finish.
            6. -
            -

            How to install Instagram on your Jio phone?

            -

            After downloading Instagram on your Jio phone, you need to install it before you can use it. Here are the steps to install Instagram on your Jio phone:

            -

            Step 1: Open the downloaded file

            -


            Navigate to the folder where you saved the downloaded file. You can use the file manager app to do this.

            -

            Step 2: Accept the permissions

            -

            Tap on the file and you will see a pop-up asking for your permission to install the app. Tap on the Accept button and proceed.

            -

            Step 3: Wait for the installation to complete

            -

            The installation process will take a few seconds. You will see a progress bar showing the status of the installation. Once it is done, you will see a confirmation message saying that Instagram has been installed successfully.

            -

            How to use Instagram on your Jio phone?

            -

            Now that you have installed Instagram on your Jio phone, you can start using it and enjoy its features. Here are the steps to use Instagram on your Jio phone:

            -

            Step 1: Launch the app and sign in or sign up

            -

            Open the menu on your Jio phone and select the Instagram app. You will see a welcome screen asking you to sign in or sign up. If you already have an Instagram account, you can enter your username and password and tap on the Log In button. If you don't have an account, you can tap on the Sign Up button and follow the instructions to create one.

            -

            Step 2: Explore the features and functions of Instagram

            -

            Once you are logged in, you will see the main screen of Instagram. You can explore the different features and functions of Instagram by using the icons at the bottom of the screen. Here is what they do:

            -
              -
            • The Home icon takes you to your feed, where you can see the posts from the accounts you follow.
            • -
            • The Search icon takes you to the Explore page, where you can discover new content and accounts based on your interests.
            • -
            • The Plus icon takes you to the Camera page, where you can create and share your photos, stories, reels, and videos.
            • -
            • The Heart icon takes you to the Activity page, where you can see your notifications and interactions with other users.
            • -
            • The Profile icon takes you to your profile page, where you can edit your profile, view your posts, and access your settings.
            • -
            -

            Step 3: Create and share your photos, stories, reels, and videos

            -

            One of the main features of Instagram is that it allows you to create and share your photos, stories, reels, and videos with your friends and followers. Here are some tips on how to do that:

            -
              -
            • To create a photo or video post, tap on the Plus icon and select Photo or Video. You can either take a new photo or video using your Jio phone's camera or select one from your gallery. You can then edit your photo or video by adding filters, stickers, text, or music. You can also tag people, add a location, or write a caption. When you are done, tap on Share to post it.
            • -
            • To create a story, tap on the Plus icon and select Story. You can either take a new photo or video using your Jio phone's camera or select one from your gallery. You can then edit your story by adding filters, stickers, text, or music. You can also add polls, questions, quizzes, or countdowns to make your story more interactive. When you are done, tap on Your Story to share it with your followers. Your story will disappear after 24 hours unless you save it as a highlight on your profile.
            • -
            • To create a reel, tap on the Plus icon and select Reel. You can either take a new video using your Jio phone's camera or select one from your gallery. You can then edit your reel by adding filters, stickers, text, or music. You can also adjust the speed, length, or alignment of your reel. When you are done, tap on Share to post it. Your reel will appear on your profile and on the Reels tab for everyone to see.
            • -
            -

            Tips and tricks for using Instagram on your Jio phone

            -

            Here are some tips and tricks for using Instagram on your Jio phone:

            -

            Tip 1: Use filters and stickers to enhance your posts

            -

            Instagram offers a variety of filters and stickers that you can use to enhance your posts. Filters can change the mood, tone, or color of your photos or videos. Stickers can add fun elements such as emojis, GIFs, or hashtags to your posts. To use filters and stickers, tap on the icons at the top of the screen when editing your post.

            -

            Tip 2: Follow your favorite accounts and discover new ones

            -


            Instagram is a great platform to follow your favorite accounts and discover new ones that match your interests and preferences. You can follow accounts of celebrities, influencers, brands, organizations, or friends. You can also discover new accounts by using the Search icon or the Explore page. You can also use hashtags, locations, or tags to find relevant content and accounts.

            -

            Tip 3: Interact with your friends and followers through messages and comments

            -

            Instagram is not only a place to share your posts, but also a place to interact with your friends and followers. You can use the Heart icon to like their posts, or the Comment icon to leave a comment. You can also use the Message icon to send them a direct message or start a group chat. You can also use the Video Call icon to make a video call with up to four people at a time.

            -

            Conclusion

            -

            In this article, we have shown you how to download, install, and use Instagram on your Jio phone. We have also shared some tips and tricks for making the most of Instagram on your Jio phone. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Happy Instagramming!

            -

            FAQs

            -
              -
            • Q: Is Instagram free to use on Jio phone?
            • -
            • A: Yes, Instagram is free to use on Jio phone. However, you may incur data charges depending on your network plan.
            • -
            • Q: Can I use Instagram on any Jio phone model?
            • -
            • A: Yes, you can use Instagram on any Jio phone model that supports KaiOS and has internet connectivity.
            • -
            • Q: How can I update Instagram on my Jio phone?
            • -
            • A: You can update Instagram on your Jio phone by following the same steps as downloading it. You will see an Update button instead of a Download button if there is a new version available.
            • -
            • Q: How can I delete Instagram from my Jio phone?
            • -
            • A: You can delete Instagram from your Jio phone by following these steps:
            • -
                -
              1. Open the menu on your Jio phone and select the Settings app.
              2. -
              3. Select Apps and scroll down to find Instagram.
              4. -
              5. Select Instagram and tap on the Uninstall button.
              6. -
              7. Confirm your action and wait for the uninstallation to finish.
              8. -
              -
            • Q: How can I report a problem or give feedback on Instagram on my Jio phone?
            • -
            • A: You can report a problem or give feedback on Instagram on your Jio phone by following these steps:
            • -
                -
              1. Open the menu on your Jio phone and select the Instagram app.
              2. -
              3. Select the Profile icon and tap on the Settings icon.
              4. -
              5. Select Help and tap on Report a Problem or Send Feedback.
              6. -
              7. Choose the type of problem or feedback you want to report or send.
              8. -
              9. Write a detailed description of your issue or suggestion and attach a screenshot if possible.
              10. -
              11. Tap on Submit and wait for a response from Instagram.
              12. -
              -

            -
            -
            \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Tekken 3 APK 35 MB and Join the Iron Fist Tournament.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Tekken 3 APK 35 MB and Join the Iron Fist Tournament.md deleted file mode 100644 index 7162c13f033098a7d9254ad286e51ee1db114f4e..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Tekken 3 APK 35 MB and Join the Iron Fist Tournament.md +++ /dev/null @@ -1,87 +0,0 @@ -
            -

            Download Tekken 3 APK 35 MB: The Ultimate Guide

            -

            Do you love playing arcade fighting games on your mobile device? If yes, then you must have heard of Tekken 3, one of the most popular and classic games in the genre. Tekken 3 is a 3D fighting game that was released in 1997 by Namco. It features a diverse cast of characters, each with their own unique fighting style and moves. You can play the game in various modes, such as arcade, survival, and team fight. You can also challenge your friends or other players online in multiplayer mode.

            -

            download tekken 3 apk 35 mb


            Download File ····· https://gohhs.com/2uPnC2



            -

            However, if you want to enjoy the full potential of Tekken 3, you need to download Tekken 3 APK 35 MB. This is a modified version of the original game that is optimized for Android devices. It has many advantages over the original game, such as unlocking all characters, improving gameplay and graphics, and supporting offline and online modes. In this article, we will tell you everything you need to know about Tekken 3 APK 35 MB, including what it is, why you should download it, how to download and install it, and some frequently asked questions. Let's get started!

            -

            What is Tekken 3?

            -

            Tekken 3 is a 3D arcade fighting game that was developed and published by Namco in 1997. It is the third installment in the Tekken series, which is one of the most successful and influential fighting game franchises in history. Tekken 3 follows the story of the Mishima family and their struggle for power and dominance in the Iron Fist Tournament.

            -

            Tekken 3 has many features that make it stand out from other fighting games. Some of these features are:

            -
              -
            • 3D gameplay: Tekken 3 introduced full sidestepping into the foreground and background, which adds more depth and strategy to the combat experience. You can move your character in different directions, sidestep attacks, and perform combos and special moves.
            • -
            • Engaging single-player modes: Tekken 3 has several single-player modes that keep you entertained for hours. You can play arcade mode, where you have to fight against different opponents until you reach the final boss. You can also play survival mode, where you have to survive as long as possible against waves of enemies. Or you can play team fight mode, where you can choose up to eight characters and fight against another team.
            • -
            • Multiplayer support: Tekken 3 also supports multiplayer mode, where you can play against your friends or other players online. You can either play local multiplayer using Bluetooth or Wi-Fi, or online multiplayer using an emulator or a third-party app.
            • -
            • Diverse cast of characters: Tekken 3 has a large and varied roster of characters, each with their own personality and fighting style. You can choose from returning favorites like Nina Williams, Paul Phoenix, Lei Wulong, or Yoshimitsu, or newcomers introduced in this installment like Jin Kazama, Eddy Gordo, Ling Xiaoyu, Hwoarang, and Ogre. Each character has their own strengths and weaknesses, as well as unique moves and combos.
            • -
            -

            Why download Tekken 3 APK 35 MB?

            -

            Tekken 3 APK 35 MB is a modified version of the original game that is optimized for Android devices. It has many advantages over the original game, such as:

            All characters unlocked

            -

            One of the main benefits of downloading Tekken 3 APK 35 MB is that you can access all the characters in the game without having to unlock them. This means that you can play with any character you want, without having to complete certain tasks or spend money. You can also switch between characters easily and try different combinations and strategies.

            -

            Smooth gameplay and graphics

            -

            Another benefit of downloading Tekken 3 APK 35 MB is that it offers smooth gameplay and graphics on your Android device. The APK file is compressed to 35 MB, which makes it easy to download and install. It also does not require a lot of storage space or RAM to run. The game runs smoothly and without any lag or glitches. The graphics are also enhanced and adapted to the screen resolution of your device. You can enjoy the game in high quality and with no loss of detail.

            -


            -

            A third benefit of downloading Tekken 3 APK 35 MB is that it supports both offline and online modes. You can play the game offline without any internet connection, which is great for when you are traveling or have limited data. You can also play the game online with your friends or other players, which adds more fun and challenge to the game. You can either use Bluetooth or Wi-Fi for local multiplayer, or use an emulator or a third-party app for online multiplayer.

            -

            How to download and install Tekken 3 APK 35 MB?

            -

            If you are convinced by the benefits of downloading Tekken 3 APK 35 MB, you might be wondering how to do it. Don't worry, it is very simple and easy. Just follow these steps:

            -

            Enable unknown sources

            -

            The first step is to enable unknown sources on your Android device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.

            -

            Download the APK file from a trusted source

            -

            The next step is to download the APK file from a trusted source. There are many websites that offer Tekken 3 APK 35 MB, but not all of them are safe and reliable. Some of them might contain viruses or malware that can harm your device or steal your data. Therefore, you should be careful and choose a reputable website that has positive reviews and ratings from other users. You can also scan the APK file with an antivirus app before installing it.
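
            If the site you download from publishes a SHA-256 checksum for the file, comparing it against the file you actually received is a quick extra check on top of an antivirus scan. The short Python sketch below is a generic example, not part of the original guide: the file name tekken3.apk and the expected hash are placeholders you would replace with your own values.

```python
# Minimal sketch: verify a downloaded APK against a published SHA-256 checksum.
# Assumptions: "tekken3.apk" is a placeholder file name, and the expected hash
# is whatever checksum the download site publishes (if it publishes one).
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


expected = "paste_the_published_sha256_here"
actual = sha256_of("tekken3.apk")
print("Checksum OK" if actual == expected.lower() else f"Checksum mismatch: {actual}")
```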

            -

            Install the APK file and launch the game

            -

            The final step is to install the APK file and launch the game. To do this, locate the downloaded APK file on your device and tap on it. You might see a pop-up asking for permission to install the app. Tap on Install and wait for the installation process to finish. Once it is done, you can tap on Open and enjoy playing Tekken 3 on your Android device.
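
            If you saved the APK to a computer rather than to the phone, sideloading it over USB is an alternative to tapping the file on the device. The sketch below is a generic example and not a step from this guide: it assumes the Android platform tools (adb) are installed and on your PATH, USB debugging is enabled on the phone, and tekken3.apk is a placeholder for the real file name.

```python
# Minimal sketch: sideload an APK from a computer using adb, driven from Python.
# Assumptions: adb is installed and on PATH, the phone is connected with USB
# debugging enabled, and "tekken3.apk" is a placeholder file name.
import subprocess


def adb_install(apk_path: str) -> None:
    # "adb install -r" installs the package, replacing an existing copy if present.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)


if __name__ == "__main__":
    adb_install("tekken3.apk")
```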

            -

            Frequently asked questions about Tekken 3 APK 35 MB

            -

            Here are some of the most common questions and answers about Tekken 3 APK 35 MB:

            -
              -
            1. Is Tekken 3 APK 35 MB safe to download and install?
              -Yes, Tekken 3 APK 35 MB is safe to download and install, as long as you get it from a trusted source. However, you should always be careful when downloading apps from unknown sources, as some of them might contain viruses or malware that can harm your device or steal your data. You should also scan the APK file with an antivirus app before installing it.
            2. -
            3. Is Tekken 3 APK 35 MB legal to use?
              -Yes, Tekken 3 APK 35 MB is legal to use, as long as you own a copy of the original game. Tekken 3 APK 35 MB is a modified version of the original game that is optimized for Android devices. It does not violate any copyright laws or terms of service of Namco.
            4. -
            5. Does Tekken 3 APK 35 MB require root access?
              -No, Tekken 3 APK 35 MB does not require root access to run on your Android device. You can install and play it without rooting your device.
            6. -
            7. Does Tekken 3 APK 35 MB work on all Android devices?
              -Yes, Tekken 3 APK 35 MB works on all Android devices that have Android version 4.0 or higher. However, some devices might have compatibility issues or performance problems due to different hardware specifications or software versions.
            8. How can I play Tekken 3 online with other players?
              -There are two ways to play Tekken 3 online with other players. One is to use Bluetooth or Wi-Fi for local multiplayer, where you can connect with nearby devices that have the game installed. The other is to use an emulator or a third-party app for online multiplayer, where you can connect with players from different locations. However, you should be aware that using an emulator or a third-party app might require additional steps and settings, and might not work properly or safely.
            9. -
            -

            Conclusion

            -

            Tekken 3 is one of the best arcade fighting games ever made, and you can enjoy it on your Android device by downloading Tekken 3 APK 35 MB. This is a modified version of the original game that is optimized for Android devices. It has many advantages over the original game, such as unlocking all characters, improving gameplay and graphics, and supporting offline and online modes. You can download and install Tekken 3 APK 35 MB easily and safely by following the steps in this article. You can also find answers to some of the most common questions about Tekken 3 APK 35 MB in this article.

            -

            If you are a fan of fighting games, you should not miss this opportunity to play Tekken 3 on your Android device. Download Tekken 3 APK 35 MB today and unleash your inner fighter!

            -
            -
            \ No newline at end of file diff --git a/spaces/fffiloni/SplitTrack2MusicGen/CHANGELOG.md b/spaces/fffiloni/SplitTrack2MusicGen/CHANGELOG.md deleted file mode 100644 index a685bcae80d0c64e64f5f51a9b9aa9245cec4b9e..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/SplitTrack2MusicGen/CHANGELOG.md +++ /dev/null @@ -1,9 +0,0 @@ -# Changelog - -All notable changes to this project will be documented in this file. - -The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). - -## [0.0.1a] - TBD - -Initial release, with model evaluation only. \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/test.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/test.d.ts deleted file mode 100644 index 8c76d247765e8f6ce0b9a012275256117115d540..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/test.d.ts +++ /dev/null @@ -1,692 +0,0 @@ -/** - * The `node:test` module provides a standalone testing module. - * @see [source](https://github.com/nodejs/node/blob/v18.x/lib/test.js) - */ -declare module 'node:test' { - /** - * Programmatically start the test runner. - * @since v18.9.0 - * @param options Configuration options for running tests. - * @returns A {@link TapStream} that emits events about the test execution. - */ - function run(options?: RunOptions): TapStream; - - /** - * The `test()` function is the value imported from the test module. Each invocation of this - * function results in the creation of a test point in the TAP output. - * - * The {@link TestContext} object passed to the fn argument can be used to perform actions - * related to the current test. Examples include skipping the test, adding additional TAP - * diagnostic information, or creating subtests. - * - * `test()` returns a {@link Promise} that resolves once the test completes. The return value - * can usually be discarded for top level tests. However, the return value from subtests should - * be used to prevent the parent test from finishing first and cancelling the subtest as shown - * in the following example. - * - * ```js - * test('top level test', async (t) => { - * // The setTimeout() in the following subtest would cause it to outlive its - * // parent test if 'await' is removed on the next line. Once the parent test - * // completes, it will cancel any outstanding subtests. - * await t.test('longer running subtest', async (t) => { - * return new Promise((resolve, reject) => { - * setTimeout(resolve, 1000); - * }); - * }); - * }); - * ``` - * @since v18.0.0 - * @param name The name of the test, which is displayed when reporting test results. - * Default: The `name` property of fn, or `''` if `fn` does not have a name. - * @param options Configuration options for the test - * @param fn The function under test. The first argument to this function is a - * {@link TestContext} object. If the test uses callbacks, the callback function is - * passed as the second argument. Default: A no-op function. - * @returns A {@link Promise} resolved with `undefined` once the test completes. - */ - function test(name?: string, fn?: TestFn): Promise; - function test(name?: string, options?: TestOptions, fn?: TestFn): Promise; - function test(options?: TestOptions, fn?: TestFn): Promise; - function test(fn?: TestFn): Promise; - - /** - * @since v18.6.0 - * @param name The name of the suite, which is displayed when reporting suite results. 
- * Default: The `name` property of fn, or `''` if `fn` does not have a name. - * @param options Configuration options for the suite - * @param fn The function under suite. Default: A no-op function. - */ - function describe(name?: string, options?: TestOptions, fn?: SuiteFn): void; - function describe(name?: string, fn?: SuiteFn): void; - function describe(options?: TestOptions, fn?: SuiteFn): void; - function describe(fn?: SuiteFn): void; - namespace describe { - // Shorthand for skipping a suite, same as `describe([name], { skip: true }[, fn])`. - function skip(name?: string, options?: TestOptions, fn?: SuiteFn): void; - function skip(name?: string, fn?: SuiteFn): void; - function skip(options?: TestOptions, fn?: SuiteFn): void; - function skip(fn?: SuiteFn): void; - - // Shorthand for marking a suite as `TODO`, same as `describe([name], { todo: true }[, fn])`. - function todo(name?: string, options?: TestOptions, fn?: SuiteFn): void; - function todo(name?: string, fn?: SuiteFn): void; - function todo(options?: TestOptions, fn?: SuiteFn): void; - function todo(fn?: SuiteFn): void; - } - - /** - * @since v18.6.0 - * @param name The name of the test, which is displayed when reporting test results. - * Default: The `name` property of fn, or `''` if `fn` does not have a name. - * @param options Configuration options for the test - * @param fn The function under test. If the test uses callbacks, the callback function is - * passed as the second argument. Default: A no-op function. - */ - function it(name?: string, options?: TestOptions, fn?: ItFn): void; - function it(name?: string, fn?: ItFn): void; - function it(options?: TestOptions, fn?: ItFn): void; - function it(fn?: ItFn): void; - namespace it { - // Shorthand for skipping a test, same as `it([name], { skip: true }[, fn])`. - function skip(name?: string, options?: TestOptions, fn?: ItFn): void; - function skip(name?: string, fn?: ItFn): void; - function skip(options?: TestOptions, fn?: ItFn): void; - function skip(fn?: ItFn): void; - - // Shorthand for marking a test as `TODO`, same as `it([name], { todo: true }[, fn])`. - function todo(name?: string, options?: TestOptions, fn?: ItFn): void; - function todo(name?: string, fn?: ItFn): void; - function todo(options?: TestOptions, fn?: ItFn): void; - function todo(fn?: ItFn): void; - } - - /** - * The type of a function under test. The first argument to this function is a - * {@link TestContext} object. If the test uses callbacks, the callback function is passed as - * the second argument. - */ - type TestFn = (t: TestContext, done: (result?: any) => void) => any; - - /** - * The type of a function under Suite. - * If the test uses callbacks, the callback function is passed as an argument - */ - type SuiteFn = (done: (result?: any) => void) => void; - - /** - * The type of a function under test. - * If the test uses callbacks, the callback function is passed as an argument - */ - type ItFn = (done: (result?: any) => void) => any; - - interface RunOptions { - /** - * If a number is provided, then that many files would run in parallel. - * If truthy, it would run (number of cpu cores - 1) files in parallel. - * If falsy, it would only run one file at a time. - * If unspecified, subtests inherit this value from their parent. - * @default true - */ - concurrency?: number | boolean | undefined; - - /** - * An array containing the list of files to run. - * If unspecified, the test runner execution model will be used. 
- */ - files?: readonly string[] | undefined; - - /** - * Allows aborting an in-progress test execution. - * @default undefined - */ - signal?: AbortSignal | undefined; - - /** - * A number of milliseconds the test will fail after. - * If unspecified, subtests inherit this value from their parent. - * @default Infinity - */ - timeout?: number | undefined; - - /** - * Sets inspector port of test child process. - * If a nullish value is provided, each process gets its own port, - * incremented from the primary's `process.debugPort`. - */ - inspectPort?: number | (() => number) | undefined; - } - - /** - * A successful call of the `run()` method will return a new `TapStream` object, - * streaming a [TAP](https://testanything.org/) output. - * `TapStream` will emit events in the order of the tests' definitions. - * @since v18.9.0 - */ - interface TapStream extends NodeJS.ReadableStream { - addListener(event: 'test:diagnostic', listener: (message: string) => void): this; - addListener(event: 'test:fail', listener: (data: TestFail) => void): this; - addListener(event: 'test:pass', listener: (data: TestPass) => void): this; - addListener(event: string, listener: (...args: any[]) => void): this; - emit(event: 'test:diagnostic', message: string): boolean; - emit(event: 'test:fail', data: TestFail): boolean; - emit(event: 'test:pass', data: TestPass): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'test:diagnostic', listener: (message: string) => void): this; - on(event: 'test:fail', listener: (data: TestFail) => void): this; - on(event: 'test:pass', listener: (data: TestPass) => void): this; - on(event: string, listener: (...args: any[]) => void): this; - once(event: 'test:diagnostic', listener: (message: string) => void): this; - once(event: 'test:fail', listener: (data: TestFail) => void): this; - once(event: 'test:pass', listener: (data: TestPass) => void): this; - once(event: string, listener: (...args: any[]) => void): this; - prependListener(event: 'test:diagnostic', listener: (message: string) => void): this; - prependListener(event: 'test:fail', listener: (data: TestFail) => void): this; - prependListener(event: 'test:pass', listener: (data: TestPass) => void): this; - prependListener(event: string, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'test:diagnostic', listener: (message: string) => void): this; - prependOnceListener(event: 'test:fail', listener: (data: TestFail) => void): this; - prependOnceListener(event: 'test:pass', listener: (data: TestPass) => void): this; - prependOnceListener(event: string, listener: (...args: any[]) => void): this; - } - - interface TestFail { - /** - * The test duration. - */ - duration: number; - - /** - * The failure casing test to fail. - */ - error: Error; - - /** - * The test name. - */ - name: string; - - /** - * The ordinal number of the test. - */ - testNumber: number; - - /** - * Present if `context.todo` is called. - */ - todo?: string; - - /** - * Present if `context.skip` is called. - */ - skip?: string; - } - - interface TestPass { - /** - * The test duration. - */ - duration: number; - - /** - * The test name. - */ - name: string; - - /** - * The ordinal number of the test. - */ - testNumber: number; - - /** - * Present if `context.todo` is called. - */ - todo?: string; - - /** - * Present if `context.skip` is called. - */ - skip?: string; - } - - /** - * An instance of `TestContext` is passed to each test function in order to interact with the - * test runner. 
However, the `TestContext` constructor is not exposed as part of the API. - * @since v18.0.0 - */ - interface TestContext { - /** - * This function is used to create a hook running before each subtest of the current test. - * @param fn The hook function. If the hook uses callbacks, the callback function is passed as - * the second argument. Default: A no-op function. - * @param options Configuration options for the hook. - * @since v18.8.0 - */ - beforeEach: typeof beforeEach; - - /** - * This function is used to create a hook that runs after the current test finishes. - * @param fn The hook function. If the hook uses callbacks, the callback function is passed as - * the second argument. Default: A no-op function. - * @param options Configuration options for the hook. - * @since v18.13.0 - */ - after: typeof after; - - /** - * This function is used to create a hook running after each subtest of the current test. - * @param fn The hook function. If the hook uses callbacks, the callback function is passed as - * the second argument. Default: A no-op function. - * @param options Configuration options for the hook. - * @since v18.8.0 - */ - afterEach: typeof afterEach; - - /** - * This function is used to write TAP diagnostics to the output. Any diagnostic information is - * included at the end of the test's results. This function does not return a value. - * @param message Message to be displayed as a TAP diagnostic. - * @since v18.0.0 - */ - diagnostic(message: string): void; - - /** - * The name of the test. - * @since v18.8.0 - */ - readonly name: string; - - /** - * If `shouldRunOnlyTests` is truthy, the test context will only run tests that have the `only` - * option set. Otherwise, all tests are run. If Node.js was not started with the `--test-only` - * command-line option, this function is a no-op. - * @param shouldRunOnlyTests Whether or not to run `only` tests. - * @since v18.0.0 - */ - runOnly(shouldRunOnlyTests: boolean): void; - - /** - * Can be used to abort test subtasks when the test has been aborted. - * @since v18.7.0 - */ - readonly signal: AbortSignal; - - /** - * This function causes the test's output to indicate the test as skipped. If `message` is - * provided, it is included in the TAP output. Calling `skip()` does not terminate execution of - * the test function. This function does not return a value. - * @param message Optional skip message to be displayed in TAP output. - * @since v18.0.0 - */ - skip(message?: string): void; - - /** - * This function adds a `TODO` directive to the test's output. If `message` is provided, it is - * included in the TAP output. Calling `todo()` does not terminate execution of the test - * function. This function does not return a value. - * @param message Optional `TODO` message to be displayed in TAP output. - * @since v18.0.0 - */ - todo(message?: string): void; - - /** - * This function is used to create subtests under the current test. This function behaves in - * the same fashion as the top level {@link test} function. - * @since v18.0.0 - * @param name The name of the test, which is displayed when reporting test results. - * Default: The `name` property of fn, or `''` if `fn` does not have a name. - * @param options Configuration options for the test - * @param fn The function under test. This first argument to this function is a - * {@link TestContext} object. If the test uses callbacks, the callback function is - * passed as the second argument. Default: A no-op function. 
- * @returns A {@link Promise} resolved with `undefined` once the test completes. - */ - test: typeof test; - /** - * Each test provides its own MockTracker instance. - */ - readonly mock: MockTracker; - } - - interface TestOptions { - /** - * If a number is provided, then that many tests would run in parallel. - * If truthy, it would run (number of cpu cores - 1) tests in parallel. - * For subtests, it will be `Infinity` tests in parallel. - * If falsy, it would only run one test at a time. - * If unspecified, subtests inherit this value from their parent. - * @default false - */ - concurrency?: number | boolean | undefined; - - /** - * If truthy, and the test context is configured to run `only` tests, then this test will be - * run. Otherwise, the test is skipped. - * @default false - */ - only?: boolean | undefined; - - /** - * Allows aborting an in-progress test. - * @since v18.8.0 - */ - signal?: AbortSignal | undefined; - - /** - * If truthy, the test is skipped. If a string is provided, that string is displayed in the - * test results as the reason for skipping the test. - * @default false - */ - skip?: boolean | string | undefined; - - /** - * A number of milliseconds the test will fail after. If unspecified, subtests inherit this - * value from their parent. - * @default Infinity - * @since v18.7.0 - */ - timeout?: number | undefined; - - /** - * If truthy, the test marked as `TODO`. If a string is provided, that string is displayed in - * the test results as the reason why the test is `TODO`. - * @default false - */ - todo?: boolean | string | undefined; - } - - /** - * This function is used to create a hook running before running a suite. - * @param fn The hook function. If the hook uses callbacks, the callback function is passed as - * the second argument. Default: A no-op function. - * @param options Configuration options for the hook. - * @since v18.8.0 - */ - function before(fn?: HookFn, options?: HookOptions): void; - - /** - * This function is used to create a hook running after running a suite. - * @param fn The hook function. If the hook uses callbacks, the callback function is passed as - * the second argument. Default: A no-op function. - * @param options Configuration options for the hook. - * @since v18.8.0 - */ - function after(fn?: HookFn, options?: HookOptions): void; - - /** - * This function is used to create a hook running before each subtest of the current suite. - * @param fn The hook function. If the hook uses callbacks, the callback function is passed as - * the second argument. Default: A no-op function. - * @param options Configuration options for the hook. - * @since v18.8.0 - */ - function beforeEach(fn?: HookFn, options?: HookOptions): void; - - /** - * This function is used to create a hook running after each subtest of the current test. - * @param fn The hook function. If the hook uses callbacks, the callback function is passed as - * the second argument. Default: A no-op function. - * @param options Configuration options for the hook. - * @since v18.8.0 - */ - function afterEach(fn?: HookFn, options?: HookOptions): void; - - /** - * The hook function. If the hook uses callbacks, the callback function is passed as the - * second argument. - */ - type HookFn = (done: (result?: any) => void) => any; - - /** - * Configuration options for hooks. - * @since v18.8.0 - */ - interface HookOptions { - /** - * Allows aborting an in-progress hook. - */ - signal?: AbortSignal | undefined; - - /** - * A number of milliseconds the hook will fail after. 
If unspecified, subtests inherit this - * value from their parent. - * @default Infinity - */ - timeout?: number | undefined; - } - - interface MockFunctionOptions { - /** - * The number of times that the mock will use the behavior of `implementation`. - * Once the mock function has been called `times` times, - * it will automatically restore the behavior of `original`. - * This value must be an integer greater than zero. - * @default Infinity - */ - times?: number | undefined; - } - - interface MockMethodOptions extends MockFunctionOptions { - /** - * If `true`, `object[methodName]` is treated as a getter. - * This option cannot be used with the `setter` option. - */ - getter?: boolean | undefined; - - /** - * If `true`, `object[methodName]` is treated as a setter. - * This option cannot be used with the `getter` option. - */ - setter?: boolean | undefined; - } - - type Mock = F & { - mock: MockFunctionContext; - }; - - type NoOpFunction = (...args: any[]) => undefined; - - type FunctionPropertyNames = { - [K in keyof T]: T[K] extends Function ? K : never; - }[keyof T]; - - interface MockTracker { - /** - * This function is used to create a mock function. - * @param original An optional function to create a mock on. - * @param implementation An optional function used as the mock implementation for `original`. - * This is useful for creating mocks that exhibit one behavior for a specified number of calls and then restore the behavior of `original`. - * @param options Optional configuration options for the mock function. - */ - fn(original?: F, options?: MockFunctionOptions): Mock; - fn(original?: F, implementation?: Implementation, options?: MockFunctionOptions): Mock; - - /** - * This function is used to create a mock on an existing object method. - * @param object The object whose method is being mocked. - * @param methodName The identifier of the method on `object` to mock. If `object[methodName]` is not a function, an error is thrown. - * @param implementation An optional function used as the mock implementation for `object[methodName]`. - * @param options Optional configuration options for the mock method. - */ - method< - MockedObject extends object, - MethodName extends FunctionPropertyNames, - >( - object: MockedObject, - methodName: MethodName, - options?: MockFunctionOptions, - ): MockedObject[MethodName] extends Function - ? Mock - : never; - method< - MockedObject extends object, - MethodName extends FunctionPropertyNames, - Implementation extends Function, - >( - object: MockedObject, - methodName: MethodName, - implementation: Implementation, - options?: MockFunctionOptions, - ): MockedObject[MethodName] extends Function - ? Mock - : never; - method( - object: MockedObject, - methodName: keyof MockedObject, - options: MockMethodOptions, - ): Mock; - method( - object: MockedObject, - methodName: keyof MockedObject, - implementation: Function, - options: MockMethodOptions, - ): Mock; - - /** - * This function is syntax sugar for {@link MockTracker.method} with `options.getter` set to `true`. 
- */ - getter< - MockedObject extends object, - MethodName extends keyof MockedObject, - >( - object: MockedObject, - methodName: MethodName, - options?: MockFunctionOptions, - ): Mock<() => MockedObject[MethodName]>; - getter< - MockedObject extends object, - MethodName extends keyof MockedObject, - Implementation extends Function, - >( - object: MockedObject, - methodName: MethodName, - implementation?: Implementation, - options?: MockFunctionOptions, - ): Mock<(() => MockedObject[MethodName]) | Implementation>; - - /** - * This function is syntax sugar for {@link MockTracker.method} with `options.setter` set to `true`. - */ - setter< - MockedObject extends object, - MethodName extends keyof MockedObject, - >( - object: MockedObject, - methodName: MethodName, - options?: MockFunctionOptions, - ): Mock<(value: MockedObject[MethodName]) => void>; - setter< - MockedObject extends object, - MethodName extends keyof MockedObject, - Implementation extends Function, - >( - object: MockedObject, - methodName: MethodName, - implementation?: Implementation, - options?: MockFunctionOptions, - ): Mock<((value: MockedObject[MethodName]) => void) | Implementation>; - - /** - * This function restores the default behavior of all mocks that were previously created by this `MockTracker` - * and disassociates the mocks from the `MockTracker` instance. Once disassociated, the mocks can still be used, - * but the `MockTracker` instance can no longer be used to reset their behavior or otherwise interact with them. - * - * After each test completes, this function is called on the test context's `MockTracker`. - * If the global `MockTracker` is used extensively, calling this function manually is recommended. - */ - reset(): void; - - /** - * This function restores the default behavior of all mocks that were previously created by this `MockTracker`. - * Unlike `mock.reset()`, `mock.restoreAll()` does not disassociate the mocks from the `MockTracker` instance. - */ - restoreAll(): void; - } - - const mock: MockTracker; - - interface MockFunctionCall< - F extends Function, - ReturnType = F extends (...args: any) => infer T - ? T - : F extends abstract new (...args: any) => infer T - ? T - : unknown, - Args = F extends (...args: infer Y) => any - ? Y - : F extends abstract new (...args: infer Y) => any - ? Y - : unknown[], - > { - /** - * An array of the arguments passed to the mock function. - */ - arguments: Args; - /** - * If the mocked function threw then this property contains the thrown value. - */ - error: unknown | undefined; - /** - * The value returned by the mocked function. - * - * If the mocked function threw, it will be `undefined`. - */ - result: ReturnType | undefined; - /** - * An `Error` object whose stack can be used to determine the callsite of the mocked function invocation. - */ - stack: Error; - /** - * If the mocked function is a constructor, this field contains the class being constructed. - * Otherwise this will be `undefined`. - */ - target: F extends abstract new (...args: any) => any ? F : undefined; - /** - * The mocked function's `this` value. - */ - this: unknown; - } - - interface MockFunctionContext { - /** - * A getter that returns a copy of the internal array used to track calls to the mock. - */ - readonly calls: Array>; - - /** - * This function returns the number of times that this mock has been invoked. - * This function is more efficient than checking `ctx.calls.length` - * because `ctx.calls` is a getter that creates a copy of the internal call tracking array. 
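     * A minimal, hypothetical usage sketch (the test name and values are invented here,
     * not taken from this declaration file), assuming the `mock` tracker and `test`
     * exported by `node:test`:
     * @example
     * import { test, mock } from 'node:test';
     * import assert from 'node:assert';
     *
     * test('callCount() tracks invocations', () => {
     *   const add = mock.fn((a: number, b: number) => a + b);
     *   add(1, 2);
     *   add(3, 4);
     *   // Cheaper than `add.mock.calls.length`, which copies the internal call array.
     *   assert.strictEqual(add.mock.callCount(), 2);
     * });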
- */ - callCount(): number; - - /** - * This function is used to change the behavior of an existing mock. - * @param implementation The function to be used as the mock's new implementation. - */ - mockImplementation(implementation: Function): void; - - /** - * This function is used to change the behavior of an existing mock for a single invocation. - * Once invocation `onCall` has occurred, the mock will revert to whatever behavior - * it would have used had `mockImplementationOnce()` not been called. - * @param implementation The function to be used as the mock's implementation for the invocation number specified by `onCall`. - * @param onCall The invocation number that will use `implementation`. - * If the specified invocation has already occurred then an exception is thrown. - */ - mockImplementationOnce(implementation: Function, onCall?: number): void; - - /** - * Resets the call history of the mock function. - */ - resetCalls(): void; - - /** - * Resets the implementation of the mock function to its original behavior. - * The mock can still be used after calling this function. - */ - restore(): void; - } - - export { test as default, run, test, describe, it, before, after, beforeEach, afterEach, mock }; -} diff --git a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_67.py b/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_67.py deleted file mode 100644 index 6a1446e1950d4f30579dc5b0ef3e7ccb4ddf88b4..0000000000000000000000000000000000000000 --- a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_67.py +++ /dev/null @@ -1,25 +0,0 @@ - -import re - -def is_spam(text: str) -> bool: - # Check for excessive use of special characters - special_char_count = len(re.findall(r'[!@#$%^&*()_=+\[\]{}<>:;"''|\\,.?]', text)) - if special_char_count / len(text) > 0.1: - return True - - # Check for presence of financial numbers and shortening of amounts - if re.search(r'\d{1,3}(,|\.)\d{3}', text) or re.search(r'\d{1,3}(만원|천원)으로', text): - return True - - # Check for presence of URLs containing suspicious domain names - suspicious_domains = ["bit.ly", "me2.kr", "han.gl", "openkakao."] - for domain in suspicious_domains: - if domain in text.lower(): - return True - - # Check for excessive use of up arrow character - up_arrow_count = text.count('↑') - if up_arrow_count / len(text) > 0.05: - return True - - return False diff --git a/spaces/firzaelbuho/rvc-models/vc_infer_pipeline.py b/spaces/firzaelbuho/rvc-models/vc_infer_pipeline.py deleted file mode 100644 index c26d45068f9b6bf2b194b13c3c89f8a06347c124..0000000000000000000000000000000000000000 --- a/spaces/firzaelbuho/rvc-models/vc_infer_pipeline.py +++ /dev/null @@ -1,306 +0,0 @@ -import numpy as np, parselmouth, torch, pdb -from time import time as ttime -import torch.nn.functional as F -from config import x_pad, x_query, x_center, x_max -import scipy.signal as signal -import pyworld, os, traceback, faiss -from scipy import signal - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - - -class VC(object): - def __init__(self, tgt_sr, device, is_half): - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * x_query # 查询切点前后查询时间 - self.t_center = self.sr * x_center # 查询切点位置 - self.t_max = self.sr * x_max # 免查询时长阈值 - self.device = device - self.is_half = is_half - - def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None): - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - 
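        # Approximate worked numbers for the constants above (illustrative only):
        #   f0_mel_min = 1127 * ln(1 + 50 / 700)   ~ 77.8
        #   f0_mel_max = 1127 * ln(1 + 1100 / 700) ~ 1064.4
        # get_f0 later rescales detected pitch from this mel range onto the integer
        # bins 1..255 that form f0_coarse.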
f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0] - f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9, # layer 9 - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) - - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - _, I = index.search(npy, 1) - npy = big_npy[I.squeeze()] - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 
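        # Timing bookkeeping: times[0] accumulates the HuBERT feature-extraction /
        # index-retrieval span above (t1 - t0), times[2] the net_g synthesis span
        # (t2 - t1); times[1] is filled with the f0 extraction time inside pipeline().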
- times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - f0_file=None, - ): - if ( - file_big_npy != "" - and file_index != "" - and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - big_npy = np.load(file_big_npy) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - print("Feature retrieval library doesn't exist or ratio is 0") - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/flax-community/t5-vae/t5_vae_flax_alt/src/t5_vae.py b/spaces/flax-community/t5-vae/t5_vae_flax_alt/src/t5_vae.py deleted file mode 100644 index 50c89237a88f53113379a262fbd15e3fc72a867c..0000000000000000000000000000000000000000 --- 
a/spaces/flax-community/t5-vae/t5_vae_flax_alt/src/t5_vae.py +++ /dev/null @@ -1,520 +0,0 @@ -from typing import Optional, Tuple - -import jax -import jax.numpy as jnp -from jax.random import PRNGKey -import flax.linen as nn -from flax.core.frozen_dict import FrozenDict, unfreeze - -from transformers.modeling_flax_outputs import FlaxCausalLMOutputWithCrossAttentions -from transformers.file_utils import add_start_docstrings -from transformers.modeling_flax_utils import FlaxPreTrainedModel -from transformers.models.t5.modeling_flax_t5 import FlaxT5ForConditionalGenerationModule - -from t5_vae_flax_alt.src.vae import VAE -from t5_vae_flax_alt.src.generate import VaeFlaxGenerationMixin -from t5_vae_flax_alt.src.outputs import TransformerVaeOutput -from t5_vae_flax_alt.src.config import T5VaeConfig - - -@add_start_docstrings("""T5 Model with a `language modeling` head on top converted into a VAE.""") -class FlaxT5VaeForAutoencodingModule(nn.Module): - config: T5VaeConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def _get_encoder_module(self): - return self.t5.encoder - - def _get_vae_encoder_module(self): - return self.vae.encoder - - def _get_vae_decoder_module(self): - return self.vae.decoder - - def _get_decoder_module(self): - return self.t5.decoder - - def setup(self): - self.t5 = FlaxT5ForConditionalGenerationModule(self.config.t5) - self.vae = VAE(self.config) - - def __call__( - self, - input_ids=None, - attention_mask=None, - decoder_input_ids=None, - decoder_attention_mask=None, - encoder_outputs=None, - latent_codes=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - deterministic: bool = True, - ): - """ - Adapted from `FlaxT5ForConditionalGenerationModule` - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # Encode - encoder_outputs = self.t5.encoder( - input_ids=input_ids, - attention_mask=attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=deterministic, - ) - - hidden_states = encoder_outputs[0] - - # Autoencode - hidden_states, latent_codes = self.vae(hidden_states, latent_codes) - encoder_attention_mask = jnp.ones((hidden_states.shape[0], hidden_states.shape[1])) - - # Decode - decoder_outputs = self.t5.decoder( - input_ids=decoder_input_ids, - attention_mask=decoder_attention_mask, - encoder_hidden_states=hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=deterministic, - ) - - sequence_output = decoder_outputs[0] - - if self.config.tie_word_embeddings: - # Rescale output before projecting on vocab - # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/transformer/transformer.py#L586 - sequence_output = sequence_output * (self.config.t5.d_model ** -0.5) - - if self.t5.config.tie_word_embeddings: - shared_embedding = self.t5.shared.variables["params"]["embedding"] - lm_logits = self.t5.lm_head.apply({"params": {"kernel": shared_embedding.T}}, sequence_output) - else: - lm_logits = self.t5.lm_head(sequence_output) - - if not return_dict: - return [lm_logits, latent_codes] + decoder_outputs[1:] + encoder_outputs - - return TransformerVaeOutput( - logits=lm_logits, - latent_codes=latent_codes, - last_hidden_state=decoder_outputs.last_hidden_state, - past_key_values=decoder_outputs.past_key_values, - 
decoder_hidden_states=decoder_outputs.hidden_states, - decoder_attentions=decoder_outputs.attentions, - cross_attentions=decoder_outputs.cross_attentions, - encoder_last_hidden_state=encoder_outputs.last_hidden_state, - encoder_hidden_states=encoder_outputs.hidden_states, - encoder_attentions=encoder_outputs.attentions, - ) - - -class FlaxT5VaePreTrainedModel(FlaxPreTrainedModel, VaeFlaxGenerationMixin): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = T5VaeConfig - base_model_prefix = "transformer" - module_class: nn.Module = None - - def __init__( - self, - config: T5VaeConfig, - input_shape: Tuple[int] = (1, 1), - seed: int = 0, - dtype: jnp.dtype = jnp.float32, - **kwargs - ): - module = self.module_class(config=config, dtype=dtype, **kwargs) - super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype) - - def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple) -> FrozenDict: - # init input tensors - input_ids = jnp.zeros(input_shape, dtype="i4") - - attention_mask = jnp.ones_like(input_ids) - decoder_input_ids = jnp.ones_like(input_ids) - decoder_attention_mask = jnp.ones_like(input_ids) - - params_rng, dropout_rng = jax.random.split(rng) - rngs = {"params": params_rng, "dropout": dropout_rng} - - return self.module.init( - rngs, - input_ids, - attention_mask, - decoder_input_ids, - decoder_attention_mask, - )["params"] - - def __call__( - self, - input_ids: jnp.ndarray, - attention_mask: Optional[jnp.ndarray] = None, - decoder_input_ids: jnp.ndarray = None, - decoder_attention_mask: Optional[jnp.ndarray] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - train: bool = False, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.return_dict - - if decoder_input_ids is None: - raise ValueError( - "Make sure to provide both `input_ids` and `decoder_input_ids`. `decoder_input_ids` is not passed here." - ) - - # prepare encoder inputs - if attention_mask is None: - attention_mask = jnp.ones_like(input_ids) - - # prepare decoder inputs - if decoder_attention_mask is None: - decoder_attention_mask = jnp.ones_like(decoder_input_ids) - - # Handle any PRNG if needed - rngs = {"dropout": dropout_rng} if dropout_rng is not None else {} - - return self.module.apply( - {"params": params or self.params}, - input_ids=jnp.array(input_ids, dtype="i4"), - attention_mask=jnp.array(attention_mask, dtype="i4"), - decoder_input_ids=jnp.array(decoder_input_ids, dtype="i4"), - decoder_attention_mask=jnp.array(decoder_attention_mask, dtype="i4"), - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=not train, - rngs=rngs, - ) - - def init_cache(self, batch_size, max_length, latent_codes): - r""" - Args: - batch_size (:obj:`int`): - batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache. - max_length (:obj:`int`): - maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized - cache. 
- latent_codes (:obj:`Union[FlaxBaseModelOutput, tuple(tuple(jnp.ndarray)]`): - ``latent_codes`` consists of compressed hidden-states at the output of the last layer of the encoder. - Used in the cross-attention of the decoder. - """ - # init input variables to retrieve cache - decoder_input_ids = jnp.ones((batch_size, max_length), dtype="i4") - decoder_attention_mask = jnp.ones_like(decoder_input_ids) - - def _decoder_forward(module, decoder_input_ids, decoder_attention_mask, **kwargs): - decoder_module = module._get_decoder_module() - return decoder_module( - decoder_input_ids, - decoder_attention_mask, - **kwargs, - ) - - init_variables = self.module.init( - jax.random.PRNGKey(0), - decoder_input_ids=decoder_input_ids, - decoder_attention_mask=decoder_attention_mask, - init_cache=True, - method=_decoder_forward, # we only need to call the decoder to init the cache - ) - return unfreeze(init_variables["cache"]) - - def encode( - self, - input_ids: jnp.ndarray, - attention_mask: Optional[jnp.ndarray] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - train: bool = False, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - raise NotImplementedError() - - def decode( - self, - decoder_input_ids, - latent_codes, - encoder_attention_mask: Optional[jnp.ndarray] = None, - decoder_attention_mask: Optional[jnp.ndarray] = None, - past_key_values: dict = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - train: bool = False, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - raise NotImplementedError() - - -class FlaxT5VaeForAutoencoding(FlaxT5VaePreTrainedModel): - module_class = FlaxT5VaeForAutoencodingModule - - def __call__( - self, - input_ids: jnp.ndarray, - attention_mask: Optional[jnp.ndarray] = None, - decoder_input_ids=None, - decoder_attention_mask=None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - train: bool = False, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - ''' - Adapted from `FlaxT5PreTrainedModel` - ''' - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.return_dict - - if decoder_input_ids is None: - raise ValueError( - "Make sure to provide both `input_ids` and `decoder_input_ids`. `decoder_input_ids` is not passed here." 
- ) - - # prepare encoder inputs - if attention_mask is None: - attention_mask = jnp.ones_like(input_ids) - - # prepare decoder inputs - if decoder_attention_mask is None: - decoder_attention_mask = jnp.ones_like(decoder_input_ids) - - # Handle any PRNG if needed - rngs = {"dropout": dropout_rng} if dropout_rng is not None else {} - - return self.module.apply( - {"params": params or self.params}, - input_ids=jnp.array(input_ids, dtype="i4"), - attention_mask=jnp.array(attention_mask, dtype="i4"), - decoder_input_ids=jnp.array(decoder_input_ids, dtype="i4"), - decoder_attention_mask=jnp.array(decoder_attention_mask, dtype="i4"), - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=not train, - rngs=rngs, - ) - - def encode( - self, - input_ids: jnp.ndarray, - attention_mask: Optional[jnp.ndarray] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - train: bool = False, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.return_dict - - if attention_mask is None: - attention_mask = jnp.ones_like(input_ids) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - def _encoder_forward(module, input_ids, attention_mask, **kwargs): - encode_module = module._get_encoder_module() - vae_encoder_module = module._get_vae_encoder_module() - return vae_encoder_module(encode_module(input_ids, attention_mask, **kwargs)[0]) - - return self.module.apply( - {"params": params or self.params}, - input_ids=jnp.array(input_ids, dtype="i4"), - attention_mask=jnp.array(attention_mask, dtype="i4"), - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=not train, - rngs=rngs, - method=_encoder_forward, - ) - - def decode( - self, - decoder_input_ids, - latent_codes, - encoder_attention_mask: Optional[jnp.ndarray] = None, - decoder_attention_mask: Optional[jnp.ndarray] = None, - past_key_values: dict = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - train: bool = False, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - r""" - Returns: - - Example:: - - >>> model = FlaxT5VaeForAutoencoding.from_pretrained('t5-small') - >>> tokenizer = T5Tokenizer.from_pretrained('t5-small') - - >>> text = "My friends are cool but they eat too many carbs." 
- >>> inputs = tokenizer(text, max_length=512, return_tensors='jax') - >>> latent_codes = model.encode(**inputs) - - >>> decoder_start_token_id = model.config.decoder_start_token_id - >>> decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id - - >>> outputs = model.decode(decoder_input_ids, latent_codes) - >>> last_decoder_hidden_states = outputs.last_hidden_state - """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.return_dict - - if encoder_attention_mask is None: - batch_size, sequence_length = latent_codes.shape[:2] - encoder_attention_mask = jnp.ones((batch_size, sequence_length)) - - batch_size, sequence_length = decoder_input_ids.shape - if decoder_attention_mask is None: - decoder_attention_mask = jnp.ones((batch_size, sequence_length)) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - inputs = {"params": params or self.params} - - # if past_key_values are passed then cache is already initialized a private flag init_cache has to be - # passed down to ensure cache is used. It has to be made sure that cache is marked as mutable so that - # it can be changed by FlaxT5Attention module - if past_key_values: - inputs["cache"] = past_key_values - mutable = ["cache"] - else: - mutable = False - - def _decoder_forward(module, decoder_input_ids, latent_codes, decoder_attention_mask, **kwargs): - vae_decoder_module = module._get_vae_decoder_module() - decoder_module = module._get_decoder_module() - decoder_outputs = decoder_module( - decoder_input_ids, - decoder_attention_mask, - encoder_hidden_states=vae_decoder_module(latent_codes), - **kwargs, - ) - sequence_output = decoder_outputs[0] - - if self.config.tie_word_embeddings: - # Rescale output before projecting on vocab - # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/transformer/transformer.py#L586 - sequence_output = sequence_output * (self.config.t5.d_model ** -0.5) - - if self.config.tie_word_embeddings: - shared_embedding = module.t5.shared.variables["params"]["embedding"] - lm_logits = module.t5.lm_head.apply({"params": {"kernel": shared_embedding.T}}, sequence_output) - else: - lm_logits = module.t5.lm_head(sequence_output) - - return lm_logits, decoder_outputs - - outputs = self.module.apply( - inputs, - decoder_input_ids=jnp.array(decoder_input_ids, dtype="i4"), - latent_codes=latent_codes, - decoder_attention_mask=jnp.array(decoder_attention_mask, dtype="i4"), - encoder_attention_mask=jnp.array(encoder_attention_mask, dtype="i4"), - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=not train, - rngs=rngs, - mutable=mutable, - method=_decoder_forward, - ) - - if past_key_values is None: - lm_logits, decoder_outputs = outputs - else: - (lm_logits, decoder_outputs), past = outputs - - if return_dict: - outputs = FlaxCausalLMOutputWithCrossAttentions( - logits=lm_logits, - hidden_states=decoder_outputs.hidden_states, - attentions=decoder_outputs.attentions, - cross_attentions=decoder_outputs.cross_attentions, - ) - else: - outputs = (lm_logits,) + decoder_outputs[1:] - - # add updated cache to model output - if past_key_values is not None and 
return_dict: - outputs["past_key_values"] = unfreeze(past["cache"]) - return outputs - elif past_key_values is not None and not return_dict: - outputs = outputs[:1] + (unfreeze(past["cache"]),) + outputs[1:] - - return outputs - - def prepare_inputs_for_generation( - self, - decoder_input_ids, - max_length, - attention_mask: Optional[jnp.DeviceArray] = None, - decoder_attention_mask: Optional[jnp.DeviceArray] = None, - latent_codes=None, - **kwargs - ): - # initializing the cache - batch_size, seq_length = decoder_input_ids.shape - - past_key_values = self.init_cache(batch_size, max_length, latent_codes) - # Note that usually one would have to put 0's in the attention_mask for x > input_ids.shape[-1] and x < cache_length. - # But since the decoder uses a causal mask, those positions are masked anyways. - # Thus we can create a single static attention_mask here, which is more efficient for compilation - extended_attention_mask = jnp.ones((batch_size, max_length), dtype="i4") - if decoder_attention_mask is not None: - extended_attention_mask = jax.lax.dynamic_update_slice( - extended_attention_mask, decoder_attention_mask, (0, 0) - ) - - return { - "past_key_values": past_key_values, - "latent_codes": latent_codes, - "encoder_attention_mask": attention_mask, - "decoder_attention_mask": extended_attention_mask, - } - - def update_inputs_for_generation(self, model_outputs, model_kwargs): - model_kwargs["past_key_values"] = model_outputs.past_key_values - return model_kwargs diff --git a/spaces/florim/MedGPT/main.py b/spaces/florim/MedGPT/main.py deleted file mode 100644 index 160addc390b94a8b143a3a2e18991a560f9b032e..0000000000000000000000000000000000000000 --- a/spaces/florim/MedGPT/main.py +++ /dev/null @@ -1 +0,0 @@ -from autogpt import main diff --git a/spaces/foghuang/ChatGLM2-6B/resources/WECHAT.md b/spaces/foghuang/ChatGLM2-6B/resources/WECHAT.md deleted file mode 100644 index c9ee867ead5d818a0b4e2ba46103a6454537d143..0000000000000000000000000000000000000000 --- a/spaces/foghuang/ChatGLM2-6B/resources/WECHAT.md +++ /dev/null @@ -1,7 +0,0 @@ -
            - - -

            扫码关注公众号,加入「ChatGLM交流群」

            -

            Scan the QR code to follow the official account and join the "ChatGLM Discussion Group"

            -
            - diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_scroll/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_scroll/run.py deleted file mode 100644 index 2b2194dfb8050fc2a50b9491f2e57f672289ffe7..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_scroll/run.py +++ /dev/null @@ -1,24 +0,0 @@ -import gradio as gr - - -demo = gr.Blocks() - -with demo: - inp = gr.Textbox(placeholder="Enter text.") - scroll_btn = gr.Button("Scroll") - no_scroll_btn = gr.Button("No Scroll") - big_block = gr.HTML(""" -
            - """) - out = gr.Textbox() - - scroll_btn.click(lambda x: x, - inputs=inp, - outputs=out, - scroll_to_output=True) - no_scroll_btn.click(lambda x: x, - inputs=inp, - outputs=out) - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/freddyaboulton/gradio-lite-sklearn/index.html b/spaces/freddyaboulton/gradio-lite-sklearn/index.html deleted file mode 100644 index 6197f70b73bb1a27ddd27d7e4ffe57d2d186478f..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/gradio-lite-sklearn/index.html +++ /dev/null @@ -1,129 +0,0 @@ - - - - - - - - - - -

            - - Gradio and scikit-learn running entirely in your browser thanks to pyodide! -

            - -🔥 - -scikit-learn -plotly -numpy - - - -import numpy as np -import plotly.graph_objects as go - -from sklearn import decomposition -from sklearn import datasets - -import gradio as gr - -np.random.seed(5) - -## PCA -def PCA_Pred(x1, x2, x3, x4): - #Load Data from iris dataset: - iris = datasets.load_iris() - X = iris.data - Y = iris.target - label_data = [("Setosa", 0), ("Versicolour", 1), ("Virginica", 2)] - - #Create the model with 3 principal components: - pca = decomposition.PCA(n_components=3) - - #Fit model and transform (decrease dimensions) iris dataset: - pca.fit(X) - X = pca.transform(X) - - #Create figure with plotly - fig = go.Figure() - - for name, label in label_data: - fig.add_trace(go.Scatter3d( - x=X[Y == label, 0], - y=X[Y == label, 1], - z=X[Y == label, 2], - mode='markers', - marker=dict( - size=8, - color=label, - colorscale='Viridis', - opacity=0.8), - name=name - )) - - user_iris_data = np.array([[x1, x2, x3, x4]], ndmin=2) - - #Perform reduction to user data - pc_output = pca.transform(user_iris_data) - fig.add_traces([go.Scatter3d( - x=np.array(pc_output[0, 0]), - y=np.array(pc_output[0, 1]), - z=np.array(pc_output[0, 2]), - mode='markers', - marker=dict( - size=12, - color=4, # set color - colorscale='Viridis', # choose a colorscale - opacity=0.8), - name="User data" - )]) - fig.update_layout(scene = dict( - xaxis_title="1st PCA Axis", - yaxis_title="2nd PCA Axis", - zaxis_title="3th PCA Axis"), - legend_title="Species" - ) - - return [pc_output, fig] - -title = "PCA example with Iris Dataset 🌺" -with gr.Blocks(title=title) as demo: - gr.Markdown(f"## {title}") - gr.Markdown( - """ - The following app is a demo for PCA decomposition. It takes 4 dimensions as input, in reference \ - to the following image, and returns the transformed first three principal components (feature \ - reduction), taken from a pre-trained model with Iris dataset. 
- """) - with gr.Row(): - with gr.Column(): - inp1 = gr.Slider(0, 7, value=1, step=0.1, label="Sepal Length (cm)") - inp2 = gr.Slider(0, 5, value=1, step=0.1, label="Sepal Width (cm)") - inp3 = gr.Slider(0, 7, value=1, step=0.1, label="Petal Length (cm)") - inp4 = gr.Slider(0, 5, value=1, step=0.1, label="Petal Width (cm)") - output = gr.Textbox(label="PCA Axes") - with gr.Column(): - plot = gr.Plot(label="PCA 3D Space") - - Reduction = gr.Button("PCA Transform") - Reduction.click(fn=PCA_Pred, inputs=[inp1, inp2, inp3, inp4], outputs=[output, plot]) - demo.load(fn=PCA_Pred, inputs=[inp1, inp2, inp3, inp4], outputs=[output, plot]) - -demo.launch() - - - - - \ No newline at end of file diff --git a/spaces/fschramm21/fraudDetector/call_api.py b/spaces/fschramm21/fraudDetector/call_api.py deleted file mode 100644 index 1f70ed8d1de3af4dd60e88f07d792e1eceaa4888..0000000000000000000000000000000000000000 --- a/spaces/fschramm21/fraudDetector/call_api.py +++ /dev/null @@ -1,45 +0,0 @@ -import requests - -search_api_url = 'http://127.0.0.1:7860/run/prediccion' - -# CASO 1 -> Tipo de fraude: 0/False -data = { - "data": [ - 18.0, - "pending", - "True", - "card", - "JCB 16 digit", - "Citizens First Banks", - 18, - "False", - "com", - "yahoo", - "only_letters", - "yes" - ] -} - -# CASO 2 -> Tipo de fraude: 1/True -data2 = { -"data": [ - 26.0, - "fulfilled", - "True", - "bitcoin", - "VISA 16 digit", - "Solace Banks", - 26, - "False", - "com", - "yahoo", - "only_letters", - "no" - ] -} - -response = requests.post(search_api_url, json=data) -print(response.json()) - -response = requests.post(search_api_url, json=data2) -print(response.json()) diff --git a/spaces/fusing/celeba-diffusion/README.md b/spaces/fusing/celeba-diffusion/README.md deleted file mode 100644 index 141c21c03987bd708a20883f07645d38c5f07dc7..0000000000000000000000000000000000000000 --- a/spaces/fusing/celeba-diffusion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Multi-Scheduler Faces Generator -emoji: 🧨 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/base/modules.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/base/modules.py deleted file mode 100644 index 096541fc248cfef434e1a9ffc6cfe1ad7f0acbe5..0000000000000000000000000000000000000000 --- a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/base/modules.py +++ /dev/null @@ -1,131 +0,0 @@ -import torch -import torch.nn as nn - -try: - from inplace_abn import InPlaceABN -except ImportError: - InPlaceABN = None - - -class Conv2dReLU(nn.Sequential): - def __init__( - self, - in_channels, - out_channels, - kernel_size, - padding=0, - stride=1, - use_batchnorm=True, - ): - - if use_batchnorm == "inplace" and InPlaceABN is None: - raise RuntimeError( - "In order to use `use_batchnorm='inplace'` inplace_abn package must be installed. 
" - + "To install see: https://github.com/mapillary/inplace_abn" - ) - - conv = nn.Conv2d( - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - bias=not (use_batchnorm), - ) - relu = nn.ReLU(inplace=True) - - if use_batchnorm == "inplace": - bn = InPlaceABN(out_channels, activation="leaky_relu", activation_param=0.0) - relu = nn.Identity() - - elif use_batchnorm and use_batchnorm != "inplace": - bn = nn.BatchNorm2d(out_channels) - - else: - bn = nn.Identity() - - super(Conv2dReLU, self).__init__(conv, bn, relu) - - -class SCSEModule(nn.Module): - def __init__(self, in_channels, reduction=16): - super().__init__() - self.cSE = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - nn.Conv2d(in_channels, in_channels // reduction, 1), - nn.ReLU(inplace=True), - nn.Conv2d(in_channels // reduction, in_channels, 1), - nn.Sigmoid(), - ) - self.sSE = nn.Sequential(nn.Conv2d(in_channels, 1, 1), nn.Sigmoid()) - - def forward(self, x): - return x * self.cSE(x) + x * self.sSE(x) - - -class ArgMax(nn.Module): - def __init__(self, dim=None): - super().__init__() - self.dim = dim - - def forward(self, x): - return torch.argmax(x, dim=self.dim) - - -class Clamp(nn.Module): - def __init__(self, min=0, max=1): - super().__init__() - self.min, self.max = min, max - - def forward(self, x): - return torch.clamp(x, self.min, self.max) - - -class Activation(nn.Module): - def __init__(self, name, **params): - - super().__init__() - - if name is None or name == "identity": - self.activation = nn.Identity(**params) - elif name == "sigmoid": - self.activation = nn.Sigmoid() - elif name == "softmax2d": - self.activation = nn.Softmax(dim=1, **params) - elif name == "softmax": - self.activation = nn.Softmax(**params) - elif name == "logsoftmax": - self.activation = nn.LogSoftmax(**params) - elif name == "tanh": - self.activation = nn.Tanh() - elif name == "argmax": - self.activation = ArgMax(**params) - elif name == "argmax2d": - self.activation = ArgMax(dim=1, **params) - elif name == "clamp": - self.activation = Clamp(**params) - elif callable(name): - self.activation = name(**params) - else: - raise ValueError( - f"Activation should be callable/sigmoid/softmax/logsoftmax/tanh/" - f"argmax/argmax2d/clamp/None; got {name}" - ) - - def forward(self, x): - return self.activation(x) - - -class Attention(nn.Module): - def __init__(self, name, **params): - super().__init__() - - if name is None: - self.attention = nn.Identity(**params) - elif name == "scse": - self.attention = SCSEModule(**params) - else: - raise ValueError("Attention {} is not implemented".format(name)) - - def forward(self, x): - return self.attention(x) diff --git a/spaces/giseldo/story_point_estimator/README.md b/spaces/giseldo/story_point_estimator/README.md deleted file mode 100644 index 141915b587598ae35c9f8be51394f4de3d438b52..0000000000000000000000000000000000000000 --- a/spaces/giseldo/story_point_estimator/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Story Point Estimator -emoji: 💻 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -license: other ---- - -# NEOSP -NEOSP lets you use a preditive model to estimate issues from title and description. -The solution was developed as part of a phd thesis project at UFCG in 2023. -In the future maybe we will code without estimating. 
- - - - diff --git a/spaces/giswqs/Streamlit/apps/home.py b/spaces/giswqs/Streamlit/apps/home.py deleted file mode 100644 index 79ed39d791719cfb8999e4f76b143f4e27258d3e..0000000000000000000000000000000000000000 --- a/spaces/giswqs/Streamlit/apps/home.py +++ /dev/null @@ -1,34 +0,0 @@ -import streamlit as st -import leafmap.foliumap as leafmap - - -def app(): - st.title("Streamlit for Geospatial Applications") - - st.markdown( - """ - This multi-page web app demonstrates various interactive web apps created using [streamlit](https://streamlit.io) and open-source mapping libraries, - such as [leafmap](https://leafmap.org), [geemap](https://geemap.org), [pydeck](https://deckgl.readthedocs.io), and [kepler.gl](https://docs.kepler.gl/docs/keplergl-jupyter). - This is an open-source project and you are very welcome to contribute your comments, questions, resources, and apps as [issues](https://github.com/giswqs/streamlit-geospatial/issues) or - [pull requests](https://github.com/giswqs/streamlit-geospatial/pulls) to the [GitHub repository](https://github.com/giswqs/streamlit-geospatial). - - """ - ) - - st.info("Click on the left sidebar menu to navigate to the different apps.") - - st.subheader("Timelapse of Satellite Imagery") - st.markdown( - """ - The following timelapse animations were created using the Timelapse web app. Click `Create Timelapse` on the left sidebar menu to create your own timelapse for any location around the globe. - """ - ) - - row1_col1, row1_col2 = st.columns(2) - with row1_col1: - st.image("https://github.com/giswqs/data/raw/main/timelapse/spain.gif") - st.image("https://github.com/giswqs/data/raw/main/timelapse/las_vegas.gif") - - with row1_col2: - st.image("https://github.com/giswqs/data/raw/main/timelapse/goes.gif") - st.image("https://github.com/giswqs/data/raw/main/timelapse/fire.gif") diff --git a/spaces/godot-demo/godot-3d-trucks/index.audio.worklet.js b/spaces/godot-demo/godot-3d-trucks/index.audio.worklet.js deleted file mode 100644 index ea4d8cb22156435ac3c3d171390864140d0d54cd..0000000000000000000000000000000000000000 --- a/spaces/godot-demo/godot-3d-trucks/index.audio.worklet.js +++ /dev/null @@ -1,211 +0,0 @@ -/*************************************************************************/ -/* audio.worklet.js */ -/*************************************************************************/ -/* This file is part of: */ -/* GODOT ENGINE */ -/* https://godotengine.org */ -/*************************************************************************/ -/* Copyright (c) 2007-2022 Juan Linietsky, Ariel Manzur. */ -/* Copyright (c) 2014-2022 Godot Engine contributors (cf. AUTHORS.md). */ -/* */ -/* Permission is hereby granted, free of charge, to any person obtaining */ -/* a copy of this software and associated documentation files (the */ -/* "Software"), to deal in the Software without restriction, including */ -/* without limitation the rights to use, copy, modify, merge, publish, */ -/* distribute, sublicense, and/or sell copies of the Software, and to */ -/* permit persons to whom the Software is furnished to do so, subject to */ -/* the following conditions: */ -/* */ -/* The above copyright notice and this permission notice shall be */ -/* included in all copies or substantial portions of the Software. 
*/ -/* */ -/* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, */ -/* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF */ -/* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.*/ -/* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY */ -/* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, */ -/* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE */ -/* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ -/*************************************************************************/ - -class RingBuffer { - constructor(p_buffer, p_state, p_threads) { - this.buffer = p_buffer; - this.avail = p_state; - this.threads = p_threads; - this.rpos = 0; - this.wpos = 0; - } - - data_left() { - return this.threads ? Atomics.load(this.avail, 0) : this.avail; - } - - space_left() { - return this.buffer.length - this.data_left(); - } - - read(output) { - const size = this.buffer.length; - let from = 0; - let to_write = output.length; - if (this.rpos + to_write > size) { - const high = size - this.rpos; - output.set(this.buffer.subarray(this.rpos, size)); - from = high; - to_write -= high; - this.rpos = 0; - } - if (to_write) { - output.set(this.buffer.subarray(this.rpos, this.rpos + to_write), from); - } - this.rpos += to_write; - if (this.threads) { - Atomics.add(this.avail, 0, -output.length); - Atomics.notify(this.avail, 0); - } else { - this.avail -= output.length; - } - } - - write(p_buffer) { - const to_write = p_buffer.length; - const mw = this.buffer.length - this.wpos; - if (mw >= to_write) { - this.buffer.set(p_buffer, this.wpos); - this.wpos += to_write; - if (mw === to_write) { - this.wpos = 0; - } - } else { - const high = p_buffer.subarray(0, mw); - const low = p_buffer.subarray(mw); - this.buffer.set(high, this.wpos); - this.buffer.set(low); - this.wpos = low.length; - } - if (this.threads) { - Atomics.add(this.avail, 0, to_write); - Atomics.notify(this.avail, 0); - } else { - this.avail += to_write; - } - } -} - -class GodotProcessor extends AudioWorkletProcessor { - constructor() { - super(); - this.threads = false; - this.running = true; - this.lock = null; - this.notifier = null; - this.output = null; - this.output_buffer = new Float32Array(); - this.input = null; - this.input_buffer = new Float32Array(); - this.port.onmessage = (event) => { - const cmd = event.data['cmd']; - const data = event.data['data']; - this.parse_message(cmd, data); - }; - } - - process_notify() { - if (this.notifier) { - Atomics.add(this.notifier, 0, 1); - Atomics.notify(this.notifier, 0); - } - } - - parse_message(p_cmd, p_data) { - if (p_cmd === 'start' && p_data) { - const state = p_data[0]; - let idx = 0; - this.threads = true; - this.lock = state.subarray(idx, ++idx); - this.notifier = state.subarray(idx, ++idx); - const avail_in = state.subarray(idx, ++idx); - const avail_out = state.subarray(idx, ++idx); - this.input = new RingBuffer(p_data[1], avail_in, true); - this.output = new RingBuffer(p_data[2], avail_out, true); - } else if (p_cmd === 'stop') { - this.running = false; - this.output = null; - this.input = null; - } else if (p_cmd === 'start_nothreads') { - this.output = new RingBuffer(p_data[0], p_data[0].length, false); - } else if (p_cmd === 'chunk') { - this.output.write(p_data); - } - } - - static array_has_data(arr) { - return arr.length && arr[0].length && arr[0][0].length; - } - - process(inputs, outputs, parameters) { - if (!this.running) { - return false; // Stop processing. 
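            // The RingBuffer class above backs this.input / this.output. A tiny trace with
            // illustrative sizes (not taken from the engine):
            //   const rb = new RingBuffer(new Float32Array(8), 0, false);
            //   rb.write(new Float32Array([1, 2, 3])); // data_left() == 3
            //   rb.read(new Float32Array(2));          // consumes [1, 2], data_left() == 1
            // The rest of process() drains this.output into the WebAudio `outputs`, fills
            // this.input from the microphone `inputs`, and, in threaded mode, signals
            // progress to the engine via Atomics on the shared notifier.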
- } - if (this.output === null) { - return true; // Not ready yet, keep processing. - } - const process_input = GodotProcessor.array_has_data(inputs); - if (process_input) { - const input = inputs[0]; - const chunk = input[0].length * input.length; - if (this.input_buffer.length !== chunk) { - this.input_buffer = new Float32Array(chunk); - } - if (!this.threads) { - GodotProcessor.write_input(this.input_buffer, input); - this.port.postMessage({ 'cmd': 'input', 'data': this.input_buffer }); - } else if (this.input.space_left() >= chunk) { - GodotProcessor.write_input(this.input_buffer, input); - this.input.write(this.input_buffer); - } else { - this.port.postMessage('Input buffer is full! Skipping input frame.'); - } - } - const process_output = GodotProcessor.array_has_data(outputs); - if (process_output) { - const output = outputs[0]; - const chunk = output[0].length * output.length; - if (this.output_buffer.length !== chunk) { - this.output_buffer = new Float32Array(chunk); - } - if (this.output.data_left() >= chunk) { - this.output.read(this.output_buffer); - GodotProcessor.write_output(output, this.output_buffer); - if (!this.threads) { - this.port.postMessage({ 'cmd': 'read', 'data': chunk }); - } - } else { - this.port.postMessage('Output buffer has not enough frames! Skipping output frame.'); - } - } - this.process_notify(); - return true; - } - - static write_output(dest, source) { - const channels = dest.length; - for (let ch = 0; ch < channels; ch++) { - for (let sample = 0; sample < dest[ch].length; sample++) { - dest[ch][sample] = source[sample * channels + ch]; - } - } - } - - static write_input(dest, source) { - const channels = source.length; - for (let ch = 0; ch < channels; ch++) { - for (let sample = 0; sample < source[ch].length; sample++) { - dest[sample * channels + ch] = source[ch][sample]; - } - } - } -} - -registerProcessor('godot-processor', GodotProcessor); diff --git "a/spaces/gotiQspiryo/whisper-ui/Kaho Naa Pyaar Hai [2000-MP3-VBR-320Kbps] DS \302\200? Keyscity.net.md" "b/spaces/gotiQspiryo/whisper-ui/Kaho Naa Pyaar Hai [2000-MP3-VBR-320Kbps] DS \302\200? Keyscity.net.md" deleted file mode 100644 index f2fa5aafc7343019d644ff7170990cb4222b46e9..0000000000000000000000000000000000000000 --- "a/spaces/gotiQspiryo/whisper-ui/Kaho Naa Pyaar Hai [2000-MP3-VBR-320Kbps] DS \302\200? Keyscity.net.md" +++ /dev/null @@ -1,39 +0,0 @@ -## Kaho Naa Pyaar Hai [2000-MP3-VBR-320Kbps] DS €? Keyscity.net - - - -**Download File »»» [https://mauletnaci.blogspot.com/?download=2twuaB](https://mauletnaci.blogspot.com/?download=2twuaB)** - - - -# Kaho Naa Pyaar Hai: The Film That Launched Hrithik Roshan's Stardom - - - -Kaho Naa Pyaar Hai (Say It... You're In Love) is a 2000 Indian Hindi-language romantic action film written, directed and produced by Rakesh Roshan. It marks the debuts of his son Hrithik Roshan (in a double role) and actress Ameesha Patel. The film was released on 14 January 2000 and became an instant sensational blockbuster hit. It earned ₹800 million (US$17.8 million) worldwide, becoming the highest-grossing film of 2000. It received positive reviews from critics upon release, with particular praise directed towards Hrithik's performance, his dancing skills and looks, and the film's soundtrack. - - - -The film tells the story of Rohit, an aspiring singer who works as a car salesman, and Sonia, the beautiful daughter of a wealthy businessman. They fall in love but are separated by a twist of fate when Rohit is killed by Sonia's father's associates. 
Sonia travels to New Zealand to cope with her loss, where she meets Raj, a lookalike of Rohit who also happens to be a singer. Raj falls for Sonia but she is reluctant to reciprocate his feelings. However, when Raj learns about Rohit's murder, he decides to help Sonia find the culprits and bring them to justice. - - - -Kaho Naa Pyaar Hai was a milestone in Bollywood history, as it launched Hrithik Roshan's career as a superstar. His debut was termed 'Hrithik Mania' by the media, and he has been known as the "Millennial Superstar" ever since. He won both the Filmfare Award for Best Actor and the Filmfare Award for Best Debut for the same film, a feat that has never been repeated by any other actor. He also showcased his versatility by playing two different characters with distinct personalities and styles. His dance moves in songs like "Ek Pal Ka Jeena" and "Dil Ne Dil Ko Pukara" became iconic and inspired many youngsters to emulate him. - - - -The film also established Ameesha Patel as a leading lady in Bollywood. She played the role of Sonia with grace and charm, and matched Hrithik's screen presence with her own. She also won the Filmfare Award for Best Female Debut for her performance. The chemistry between Hrithik and Ameesha was sizzling and captivating, and they went on to star in several more films together. - - - -The film's soundtrack was composed by Rajesh Roshan, Rakesh Roshan's brother, and featured songs that became evergreen hits. The title track "Kaho Naa Pyaar Hai" was sung by Udit Narayan and Alka Yagnik, and became one of the most popular romantic songs of all time. Other songs like "Na Tum Jano Na Hum", "Chand Sitare", "Pyaar Ki Kashti Mein" and "Jaaneeman Jaaneeman" were also well-received by the audience and critics alike. - - - -Kaho Naa Pyaar Hai was not only a commercial success but also a critical success. It won a total of 92 awards in various ceremonies and categories, setting a Guinness World Record for being a feature film with the most awards won. It also won 11 Filmfare Awards, including Best Film, Best Director, Best Actor, Best Actress (Debut), Best Music Director, Best Lyricist, Best Playback Singer (Male), Best Playback Singer (Female), Best Choreography, Best Editing and Best Sound Recording. - - - -Kaho Naa Pyaar Hai is a film that will always be remembered as one of the best films of Bollywood. It is a film that made Hrithik Roshan a household name and a superstar overnight. It is a film that redefined romance and action in Hindi cinema. It is a film that touched millions of hearts with its story, music and performances. It is a film that you can watch again and again and say it... you're in love. - - 1b8d091108 \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Aspen 8 4 Keygen 20l Download the Latest Version of Aspen HYSYS.md b/spaces/gotiQspiryo/whisper-ui/examples/Aspen 8 4 Keygen 20l Download the Latest Version of Aspen HYSYS.md deleted file mode 100644 index 83ecd41c895242374f6346000efef128c1023c32..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Aspen 8 4 Keygen 20l Download the Latest Version of Aspen HYSYS.md +++ /dev/null @@ -1,6 +0,0 @@ -

            ESET NOD32 Antivirus 12.1.34.0 Crack


            Download File 🗸🗸🗸 https://urlgoal.com/2uyNda



            -
            - aaccfb2cb3
            -
            -
            -

            diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Download Driver Conexant Smarthsfi V 9X 56K Speakerphone Modem- Download Special Version for Better Sound Quality.md b/spaces/gotiQspiryo/whisper-ui/examples/Download Driver Conexant Smarthsfi V 9X 56K Speakerphone Modem- Download Special Version for Better Sound Quality.md deleted file mode 100644 index bbcba974b8c986789373c3c23699bce38514c55d..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Download Driver Conexant Smarthsfi V 9X 56K Speakerphone Modem- Download Special Version for Better Sound Quality.md +++ /dev/null @@ -1,10 +0,0 @@ -
            -

If you cannot find the exact driver for your hardware device, or you aren't sure which driver is the right one, we have a program that will detect your hardware specifications and identify the correct driver for your needs. Please click here to download.

            -

            Once you have downloaded your new driver, you'll need to install it. In Windows, use a built-in utility called Device Manager, which allows you to see all of the devices recognized by your system, and the drivers associated with them.

            -

            Download Driver Conexant Smarthsfi V 9X 56K Speakerphone Modem- Download special version


            Download ✪✪✪ https://urlgoal.com/2uyMvM



            -

            DriverGuide maintains an extensive archive of Windows drivers available for free download. We employ a team from around the world that adds hundreds of new drivers to our site every day. How to Install Drivers: Once you have downloaded your new driver, you need to install it. To install a driver in Windows, use a built-in utility called Device Manager. It allows you to see all of the devices recognized by your system, and the drivers associated with them.

            -

            The Driver Update Utility automatically finds, downloads and installs the right driver for your hardware and operating system. It will update all of your drivers in just a few clicks, and even back up your drivers before making any changes.
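As a rough sketch of the "back up before making any changes" step only, and not of the Driver Update Utility itself, the built-in pnputil tool on Windows 10 and later can export every third-party driver package from the driver store. The backup folder below is a hypothetical path.

```python
import subprocess

# Minimal sketch of the "back up before changing anything" step. Assumes
# Windows 10 (1607 or later), where pnputil supports /export-driver, and an
# elevated (administrator) prompt. Exports every third-party driver package
# in the driver store to the given folder.
def backup_drivers(target_dir: str) -> None:
    subprocess.run(["pnputil", "/export-driver", "*", target_dir], check=True)


if __name__ == "__main__":
    backup_drivers(r"C:\driver-backup")  # hypothetical backup location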

            -

            Once you have downloaded your new driver, you need to install it. To install a driver in Windows, use a built-in utility called Device Manager. It allows you to see all of the devices recognized by your system, and the drivers associated with them.
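If you prefer the command line to Device Manager, Windows ships two utilities that cover the same ground: driverquery lists the drivers currently installed, and pnputil can stage and install a new driver package from its .inf file. The sketch below is an illustration only; it assumes Windows 10 or later and an elevated prompt, and the .inf path is a hypothetical example.

```python
import subprocess

# Minimal sketch of the same workflow from the command line. Assumes Windows 10
# or later, where the built-in driverquery and pnputil tools are available, and
# an elevated (administrator) prompt for the install step.
def list_installed_drivers() -> str:
    # driverquery ships with Windows and prints the list of installed drivers.
    result = subprocess.run(
        ["driverquery", "/fo", "csv"], capture_output=True, text=True, check=True
    )
    return result.stdout


def install_driver(inf_path: str) -> None:
    # pnputil stages a driver package from its .inf file and installs it.
    subprocess.run(["pnputil", "/add-driver", inf_path, "/install"], check=True)


if __name__ == "__main__":
    print(list_installed_drivers())
    # install_driver(r"C:\drivers\conexant\smarthsfi.inf")  # hypothetical path
```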

            -

            aaccfb2cb3
            -
            -
            \ No newline at end of file diff --git a/spaces/gradio/gpt-neo/utils.py b/spaces/gradio/gpt-neo/utils.py deleted file mode 100644 index 3666b889c99477b71e35d2b84122fa7de564aecb..0000000000000000000000000000000000000000 --- a/spaces/gradio/gpt-neo/utils.py +++ /dev/null @@ -1,291 +0,0 @@ -import re -from urllib.parse import urlparse -from shutil import rmtree -import logging -import os -from pathlib import Path -import sys -import tensorflow.compat.v1 as tf -import tensorflow.compat.v2 as tf2 -import mesh_tensorflow as mtf -from data.encoders import fetch_encoder -import re - -def setup_logging(args): - Path("logs").mkdir(exist_ok=True) - tf.logging.set_verbosity(logging.INFO) - tf.get_logger().propagate = False # Remove double log on console - name = os.path.splitext(os.path.basename(args.model))[0] - handlers = [ - logging.FileHandler(f"logs/{name}.log"), - logging.StreamHandler(sys.stdout) - ] - logger = logging.getLogger("tensorflow") - logger.handlers = handlers - return logger - - -def get_batch_size(params): - return params[f"{params['mode']}_batch_size"] - - -def add_mode_to_params(params, mode): - if mode == tf.estimator.ModeKeys.PREDICT: - params["mode"] = "predict" - elif mode == tf.estimator.ModeKeys.EVAL: - params["mode"] = "eval" - elif mode == tf.estimator.ModeKeys.TRAIN: - params["mode"] = "train" - else: - raise ValueError(f"Invalid mode {mode}") - return params - - -def simd_mesh_setup(params, mesh_shape, layout_rules): - """Constructs SimdMesh function - instructions on how to evenly split tensors across all TPU cores""" - - num_hosts = params["context"].num_hosts - host_placement_fn = params["context"].tpu_host_placement_function - device_list = [host_placement_fn(host_id=i) for i in range(num_hosts)] - tf.logging.info(f"device_list = {device_list}") - - # TODO: Better estimation of replica cache size? - replica_cache_size = 300 * 1000000 # 300M per replica - - # Worker 0 caches all the TPU binaries - worker0_mem = replica_cache_size * params["context"].num_replicas - devices_memory_usage = [worker0_mem] + [0] * (num_hosts - 1) - var_placer = mtf.utils.BalancedVariablePlacer(device_list, devices_memory_usage) - mesh_devices = [""] * mesh_shape.size - mesh_impl = mtf.simd_mesh_impl.SimdMeshImpl( - mesh_shape, layout_rules, mesh_devices, params["context"].device_assignment) - - return var_placer, mesh_impl - - -def remove_batch_from_layout(layout): - """ - The tf-mesh layout splits across batch size, remove it. - Useful for prediction steps, when you no longer want large batches. 
- - :param layout: string describing tf-mesh layout - :return: layout minus batch dimension - """ - layout = layout.split(',') - ret_layout = "" - for i in layout: - if "batch" in i: - pass - else: - ret_layout += f"{i}," - return ret_layout[:-1] - - -def yes_or_no(question): - while True: - reply = str(input(question+' (y/n): ')).lower().strip() - if reply[:1] == 'y': - return True - if reply[:1] == 'n': - return False - - -def remove_gs_or_filepath(path): - parsed_url = urlparse(path) - if parsed_url.scheme == "gs": - os.system(f"gsutil rm -rf {path}") - return - rmtree(path) - - -def save_config(params_dict, logdir): - print(f"Saving config to {logdir}") - text = "{\n\n" - total_params = len(params_dict) - for count, key in enumerate(params_dict): - config_value = str(params_dict[key]) - if re.search('[a-zA-Z]', config_value): - if config_value.lower() != 'true': - if config_value.lower() != 'false': - if config_value[0] != '[': - # TODO: Making a manual exception for parsing epsilon right now since it's the only number in - # scientific notation. Should fix this. - if key != "epsilon": - config_value = f'"{config_value}"' - if count == total_params - 1: - text += f'"{str(key)}"' + ' : ' + config_value + '\n\n' - else: - text += f'"{str(key)}"' + ' : ' + config_value + ',\n\n' - text += '\n\n}' - sess = tf.InteractiveSession() - summary_op = tf.summary.text("run_config", tf.convert_to_tensor(text)) - summary_writer = tf.summary.FileWriter(f"{logdir}/config", sess.graph) - text = sess.run(summary_op) - summary_writer.add_summary(text, 0) - summary_writer.flush() - summary_writer.close() - tf.reset_default_graph() - print('Done!') - - -def expand_attention_types_params(params_list): - newlist = [] - for item in params_list: - for _ in range(item[1]): - newlist.extend(item[0]) - return newlist - - -def get_n_trainable_vars(graph): - """ - Gets number of trainable vars in a MTF model. - - :param graph: Mesh-Tensorflow graph - :return: None - """ - total_parameters = 0 - for variable in graph.trainable_variables: - shape = variable.shape.dims - variable_parameters = 1 - for dim in shape: - variable_parameters *= dim.size - total_parameters += variable_parameters - print(f"\n\nN TRAINABLE VARS:\n{total_parameters:,}\n\n") - - -def print_dim_names(graph): - """ - Print names of all Dimensions - :param graph: Mesh-Tensorflow graph - :return: None - """ - all_dim_names = [] - for variable in graph.all_variables: - names = variable.shape.dimension_names - all_dim_names.append(names) - - # Print all dim names in graph & write to file - all_dim_names = [item for sublist in all_dim_names for item in sublist] # Flatten all dims - unique_dims = list(set(all_dim_names)) - print("ALL DIM NAMES:") - for dim_name in unique_dims: - print(dim_name) - print('\n') - - -def get_graph_info(graph): - """ - Wrapper fn that calculates number of trainable vars in an MTF graph & prints all dim_names to file - TODO: how to get un-trainable dim-names too, batch etc. - - :param graph: Mesh-Tensorflow graph - :return: None - """ - get_n_trainable_vars(graph) - print_dim_names(graph) - - -def loss_denominator(targets, num_microbatches): - """Denominator applied to losses. - - This is usually the size of the targets tensor (omitting ensemble - dimensions). Alternatively, it is an override value passed to the - class constructor. - - Args: - targets: a mtf.Tensor - num_microbatches: an integer - greater than one if the step has been - serialized into multiple microbatches to save memory. 
- Returns: - a float - """ - ret = float(targets.shape.size) * num_microbatches - return float(ret) - -def check_dataset(input_fn, params, global_step=None): - tf.enable_eager_execution() - if global_step is not None: - dataset = input_fn(params, global_step=global_step) - else: - dataset = input_fn(params) - dataset_iter = dataset.make_one_shot_iterator() - tensor, _ = next(dataset_iter) - enc = fetch_encoder(params) - - for p in tensor[:1]: - txt = enc.decode(p) - - print('-' * 50) - print(txt[:500], '\n\n...\n\n', txt[-500:]) - print('-' * 50) - exit() - -def auto_layout(graph, mesh_shape, logits, loss): - layout_rules = mtf.auto_mtf.layout(graph, mesh_shape, [logits, loss]) - print(f"Auto-selected layout:\n{layout_rules}\nRe-initialize graph with selected layout") - quit() - -def auto_layout_and_mesh_shape(graph, num_cores, logits, loss): - layout_rules, mesh_shape = mtf.auto_mtf.layout_and_mesh_shape(graph, num_cores, - [logits, loss], max_mesh_shape_dimensions=4) - print(f"Num cores:\n{num_cores}\nAuto-selected layout:\n{layout_rules}\nAuto-selected mesh shape:\n{mesh_shape}" \ - f"\nRe-initialize graph with selected layout & mesh shape") - quit() - -def create_host_call(model_dir): - """Construct a host_call writing scalar summaries. - - Borrowed from t2t. - - Args: - model_dir: String containing path to train - Returns: - (fn, args) Pair to be called by TPUEstimator as the host_call. - """ - - graph = tf.get_default_graph() - # A list of (name, lowered tensor) tuples - summaries = graph.get_collection(mtf.utils.SCALAR_SUMMARIES_COLLECTION_KEY) - - def maybe_cast(tensor): - assert tensor.shape.is_compatible_with([]), tensor.name - if tensor.dtype == tf.int64: - return tf.to_int32(tensor) - if tensor.dtype == tf.bfloat16: - return tf.cast(tensor, tf.float32) - return tensor - - reshaped_tensors = [tf.reshape(maybe_cast(t), [1]) for _, t in summaries] - - # When no supported summaries are found, don't create host_call. Otherwise, - # TPU outfeed queue would enqueue global_step while host_call doesn't dequeue - # it, eventually causing hang. - if not reshaped_tensors: - return None - - def host_call_fn(global_step, *args): - """Training host call. Creates scalar summaries for training metrics.""" - # This function is executed on the CPU and should not directly reference - # any Tensors in the rest of the `model_fn`. To pass Tensors from the - # model to the `model_fn`, provide as part of the `host_call`. - global_step = tf.cast(global_step[0], tf.int64) - with tf2.summary.create_file_writer(model_dir).as_default(): - # We cannot directly use any tensor from summaries, because each - # tensor here must be a concat of multiple tensors from all shards. - # Therefore, we rely on the assumption that args wil have the same - # length as summaries, and all tensors in args will have the same - # order of self._tup_summaries. 
- assert len(args) == len(summaries) - for i, tensor in enumerate(args): - name = summaries[i][0] - tf2.summary.scalar(name, tf.reduce_mean(tensor), step=global_step) - return tf.summary.all_v2_summary_ops() - - global_step_t = tf.reshape(tf.to_int32(tf.train.get_global_step()), [1]) - return host_call_fn, [global_step_t] + reshaped_tensors - - -def natural_sort(l): - convert = lambda text: int(text) if text.isdigit() else text.lower() - alphanum_key = lambda key: [ convert(c) for c in re.split('([0-9]+)', key) ] - return sorted(l, key = alphanum_key) diff --git a/spaces/hannahaa/MovieAI/app.py b/spaces/hannahaa/MovieAI/app.py deleted file mode 100644 index b8e324b9c29780cc194b84219d4782bd519931d7..0000000000000000000000000000000000000000 --- a/spaces/hannahaa/MovieAI/app.py +++ /dev/null @@ -1,172 +0,0 @@ -### ----------------------------- ### -### libraries ### -### ----------------------------- ### - -import gradio as gr -import pandas as pd -import numpy as np -from sklearn.model_selection import train_test_split -from sklearn.linear_model import LogisticRegression -from sklearn import metrics - - -### ------------------------------ ### -### data transformation ### -### ------------------------------ ### - -# load dataset -uncleaned_data = pd.read_csv('data.csv') - -# remove timestamp from dataset (always first column) -uncleaned_data = uncleaned_data.iloc[: , 1:] -data = pd.DataFrame() - -# keep track of which columns are categorical and what -# those columns' value mappings are -# structure: {colname1: {...}, colname2: {...} } -cat_value_dicts = {} -final_colname = uncleaned_data.columns[len(uncleaned_data.columns) - 1] - -# for each column... -for (colname, colval) in uncleaned_data.iteritems(): - - # check if col is already a number; if so, add col directly - # to new dataframe and skip to next column - if isinstance(colval.values[0], (np.integer, float)): - data[colname] = uncleaned_data[colname].copy() - continue - - # structure: {0: "lilac", 1: "blue", ...} - new_dict = {} - val = 0 # first index per column - transformed_col_vals = [] # new numeric datapoints - - # if not, for each item in that column... - for (row, item) in enumerate(colval.values): - - # if item is not in this col's dict... 
- if item not in new_dict: - new_dict[item] = val - val += 1 - - # then add numerical value to transformed dataframe - transformed_col_vals.append(new_dict[item]) - - # reverse dictionary only for final col (0, 1) => (vals) - if colname == final_colname: - new_dict = {value : key for (key, value) in new_dict.items()} - - cat_value_dicts[colname] = new_dict - data[colname] = transformed_col_vals - - -### -------------------------------- ### -### model training ### -### -------------------------------- ### - -# select features and predicton; automatically selects last column as prediction -cols = len(data.columns) -num_features = cols - 1 -x = data.iloc[: , :num_features] -y = data.iloc[: , num_features:] - -# split data into training and testing sets -x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25) - -# instantiate the model (using default parameters) -model = LogisticRegression() -model.fit(x_train, y_train.values.ravel()) -y_pred = model.predict(x_test) - - -### -------------------------------- ### -### article generation ### -### -------------------------------- ### -# borrow file reading function from reader.py - -def get_feat(): - feats = [abs(x) for x in model.coef_[0]] - max_val = max(feats) - idx = feats.index(max_val) - return data.columns[idx] - -acc = str(round(metrics.accuracy_score(y_test, y_pred) * 100, 1)) + "%" -most_imp_feat = get_feat() -# info = get_article(acc, most_imp_feat) - - - -### ------------------------------- ### -### interface creation ### -### ------------------------------- ### - - -# predictor for generic number of features -def general_predictor(*args): - features = [] - - # transform categorical input - for colname, arg in zip(data.columns, args): - if (colname in cat_value_dicts): - features.append(cat_value_dicts[colname][arg]) - else: - features.append(arg) - - # predict single datapoint - new_input = [features] - result = model.predict(new_input) - return cat_value_dicts[final_colname][result[0]] - -# add data labels to replace those lost via star-args - - -block = gr.Blocks() - -with open('info.md') as f: - with block: - gr.Markdown(f.readline()) - gr.Markdown('Take the quiz to get a personalized recommendation using AI.') - - with gr.Row(): - with gr.Box(): - inputls = [] - for colname in data.columns: - # skip last column - if colname == final_colname: - continue - - # access categories dict if data is categorical - # otherwise, just use a number input - if colname in cat_value_dicts: - radio_options = list(cat_value_dicts[colname].keys()) - inputls.append(gr.inputs.Dropdown(choices=radio_options, type="value", label=colname)) - else: - # add numerical input - inputls.append(gr.inputs.Number(label=colname)) - gr.Markdown("
            ") - - submit = gr.Button("Click to see your personalized result!", variant="primary") - gr.Markdown("
            ") - output = gr.Textbox(label="Your recommendation:", placeholder="your recommendation will appear here") - - submit.click(fn=general_predictor, inputs=inputls, outputs=output) - gr.Markdown("
            ") - - with gr.Row(): - with gr.Box(): - gr.Markdown(f"

            Accuracy:

            {acc}") - with gr.Box(): - gr.Markdown(f"

            Most important feature:

            {most_imp_feat}") - - gr.Markdown("
            ") - - with gr.Box(): - gr.Markdown('''⭐ Note that model accuracy is based on the uploaded data.csv and reflects how well the AI model can give correct recommendations for that dataset. Model accuracy and most important feature can be helpful for understanding how the model works, but should not be considered absolute facts about the real world.''') - - with gr.Box(): - with open('info.md') as f: - f.readline() - gr.Markdown(f.read()) - -# show the interface -block.launch() \ No newline at end of file diff --git a/spaces/henryz/streaming_chat_with_gpt-3.5-turbo_using_langchain_sorta/README.md b/spaces/henryz/streaming_chat_with_gpt-3.5-turbo_using_langchain_sorta/README.md deleted file mode 100644 index ec070803cf2adac1529d494d85ea7131134dad2d..0000000000000000000000000000000000000000 --- a/spaces/henryz/streaming_chat_with_gpt-3.5-turbo_using_langchain_sorta/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Streaming Chat With Gpt-3.5-turbo Using Langchain Sorta -emoji: 📚 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: mit -duplicated_from: lukestanley/streaming_chat_with_gpt-3.5-turbo_using_langchain_sorta ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hfmax/SpeciesChecker/README.md b/spaces/hfmax/SpeciesChecker/README.md deleted file mode 100644 index 6ec7929f29df4986dda8cad5df7be5948226c1b5..0000000000000000000000000000000000000000 --- a/spaces/hfmax/SpeciesChecker/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SpeciesChecker -emoji: 📊 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task035_ISBI_MSLesionSegmentationChallenge.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task035_ISBI_MSLesionSegmentationChallenge.py deleted file mode 100644 index a71b2e91c7120f9d3ef9df1055e69053254bf142..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task035_ISBI_MSLesionSegmentationChallenge.py +++ /dev/null @@ -1,162 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import shutil -from collections import OrderedDict -import numpy as np -import SimpleITK as sitk -import multiprocessing -from batchgenerators.utilities.file_and_folder_operations import * - - -def convert_to_nii_gz(filename): - f = sitk.ReadImage(filename) - sitk.WriteImage(f, os.path.splitext(filename)[0] + ".nii.gz") - os.remove(filename) - - -def convert_for_submission(source_dir, target_dir): - files = subfiles(source_dir, suffix=".nii.gz", join=False) - maybe_mkdir_p(target_dir) - for f in files: - splitted = f.split("__") - case_id = int(splitted[1]) - timestep = int(splitted[2][:-7]) - t = join(target_dir, "test%02d_%02d_nnUNet.nii" % (case_id, timestep)) - img = sitk.ReadImage(join(source_dir, f)) - sitk.WriteImage(img, t) - - -if __name__ == "__main__": - # convert to nifti.gz - dirs = ['/media/fabian/My Book/MedicalDecathlon/Task035_ISBILesionSegmentation/imagesTr', - '/media/fabian/My Book/MedicalDecathlon/Task035_ISBILesionSegmentation/imagesTs', - '/media/fabian/My Book/MedicalDecathlon/Task035_ISBILesionSegmentation/labelsTr'] - - p = multiprocessing.Pool(3) - - for d in dirs: - nii_files = subfiles(d, suffix='.nii') - p.map(convert_to_nii_gz, nii_files) - - p.close() - p.join() - - - def rename_files(folder): - all_files = subfiles(folder, join=False) - # there are max 14 patients per folder, starting with 1 - for patientid in range(1, 15): - # there are certainly no more than 10 time steps per patient, starting with 1 - for t in range(1, 10): - patient_files = [i for i in all_files if i.find("%02.0d_%02.0d_" % (patientid, t)) != -1] - if not len(patient_files) == 4: - continue - - flair_file = [i for i in patient_files if i.endswith("_flair_pp.nii.gz")][0] - mprage_file = [i for i in patient_files if i.endswith("_mprage_pp.nii.gz")][0] - pd_file = [i for i in patient_files if i.endswith("_pd_pp.nii.gz")][0] - t2_file = [i for i in patient_files if i.endswith("_t2_pp.nii.gz")][0] - - os.rename(join(folder, flair_file), join(folder, "case__%02.0d__%02.0d_0000.nii.gz" % (patientid, t))) - os.rename(join(folder, mprage_file), join(folder, "case__%02.0d__%02.0d_0001.nii.gz" % (patientid, t))) - os.rename(join(folder, pd_file), join(folder, "case__%02.0d__%02.0d_0002.nii.gz" % (patientid, t))) - os.rename(join(folder, t2_file), join(folder, "case__%02.0d__%02.0d_0003.nii.gz" % (patientid, t))) - - - for d in dirs[:-1]: - rename_files(d) - - - # now we have to deal with the training masks, we do it the quick and dirty way here by just creating copies of the - # training data - - train_folder = '/media/fabian/My Book/MedicalDecathlon/Task035_ISBILesionSegmentation/imagesTr' - - for patientid in range(1, 6): - for t in range(1, 6): - fnames_original = subfiles(train_folder, prefix="case__%02.0d__%02.0d" % (patientid, t), suffix=".nii.gz", sort=True) - for f in fnames_original: - for mask in [1, 2]: - fname_target = f[:-12] + "__mask%d" % mask + f[-12:] - shutil.copy(f, fname_target) - os.remove(f) - - - labels_folder = '/media/fabian/My Book/MedicalDecathlon/Task035_ISBILesionSegmentation/labelsTr' - - for patientid in range(1, 6): - for t in range(1, 6): - for mask in [1, 2]: - f = join(labels_folder, "training%02d_%02d_mask%d.nii.gz" % (patientid, t, mask)) - if isfile(f): - os.rename(f, join(labels_folder, "case__%02.0d__%02.0d__mask%d.nii.gz" % (patientid, t, mask))) - - - - tr_files = [] - for patientid in range(1, 6): - for t in range(1, 6): - for mask in [1, 2]: - if isfile(join(labels_folder, "case__%02.0d__%02.0d__mask%d.nii.gz" % (patientid, t, mask))): - 
tr_files.append("case__%02.0d__%02.0d__mask%d.nii.gz" % (patientid, t, mask)) - - - ts_files = [] - for patientid in range(1, 20): - for t in range(1, 20): - if isfile(join("/media/fabian/My Book/MedicalDecathlon/Task035_ISBILesionSegmentation/imagesTs", - "case__%02.0d__%02.0d_0000.nii.gz" % (patientid, t))): - ts_files.append("case__%02.0d__%02.0d.nii.gz" % (patientid, t)) - - - out_base = '/media/fabian/My Book/MedicalDecathlon/Task035_ISBILesionSegmentation/' - - json_dict = OrderedDict() - json_dict['name'] = "ISBI_Lesion_Segmentation_Challenge_2015" - json_dict['description'] = "nothing" - json_dict['tensorImageSize'] = "4D" - json_dict['reference'] = "see challenge website" - json_dict['licence'] = "see challenge website" - json_dict['release'] = "0.0" - json_dict['modality'] = { - "0": "flair", - "1": "mprage", - "2": "pd", - "3": "t2" - } - json_dict['labels'] = { - "0": "background", - "1": "lesion" - } - json_dict['numTraining'] = len(subfiles(labels_folder)) - json_dict['numTest'] = len(subfiles('/media/fabian/My Book/MedicalDecathlon/Task035_ISBILesionSegmentation/imagesTs')) // 4 - json_dict['training'] = [{'image': "./imagesTr/%s.nii.gz" % i[:-7], "label": "./labelsTr/%s.nii.gz" % i[:-7]} for i in - tr_files] - json_dict['test'] = ["./imagesTs/%s.nii.gz" % i[:-7] for i in ts_files] - - save_json(json_dict, join(out_base, "dataset.json")) - - case_identifiers = np.unique([i[:-12] for i in subfiles("/media/fabian/My Book/MedicalDecathlon/MedicalDecathlon_raw_splitted/Task035_ISBILesionSegmentation/imagesTr", suffix='.nii.gz', join=False)]) - - splits = [] - for f in range(5): - cases = [i for i in range(1, 6) if i != f+1] - splits.append(OrderedDict()) - splits[-1]['val'] = np.array([i for i in case_identifiers if i.startswith("case__%02d__" % (f + 1))]) - remaining = [i for i in case_identifiers if i not in splits[-1]['val']] - splits[-1]['train'] = np.array(remaining) - - maybe_mkdir_p("/media/fabian/nnunet/Task035_ISBILesionSegmentation") - save_pickle(splits, join("/media/fabian/nnunet/Task035_ISBILesionSegmentation", "splits_final.pkl")) diff --git a/spaces/hyxue/HiFiFace-inference-demo/AdaptiveWingLoss/aux.py b/spaces/hyxue/HiFiFace-inference-demo/AdaptiveWingLoss/aux.py deleted file mode 100644 index f566bf532405bdaeb350e7b50dcffb4d328835c3..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/AdaptiveWingLoss/aux.py +++ /dev/null @@ -1,4 +0,0 @@ -def detect_landmarks(inputs, model_ft): - outputs, _ = model_ft(inputs) - pred_heatmap = outputs[-1][:, :-1, :, :] - return pred_heatmap[:, 96, :, :], pred_heatmap[:, 97, :, :] diff --git a/spaces/iamironman4279/SadTalker/src/face3d/models/template_model.py b/spaces/iamironman4279/SadTalker/src/face3d/models/template_model.py deleted file mode 100644 index dac7b33d5889777eb63c9882a3b9fa094dcab293..0000000000000000000000000000000000000000 --- a/spaces/iamironman4279/SadTalker/src/face3d/models/template_model.py +++ /dev/null @@ -1,100 +0,0 @@ -"""Model class template - -This module provides a template for users to implement custom models. -You can specify '--model template' to use this model. -The class name should be consistent with both the filename and its model option. -The filename should be _dataset.py -The class name should be Dataset.py -It implements a simple image-to-image translation baseline based on regression loss. 
-Given input-output pairs (data_A, data_B), it learns a network netG that can minimize the following L1 loss: - min_ ||netG(data_A) - data_B||_1 -You need to implement the following functions: - : Add model-specific options and rewrite default values for existing options. - <__init__>: Initialize this model class. - : Unpack input data and perform data pre-processing. - : Run forward pass. This will be called by both and . - : Update network weights; it will be called in every training iteration. -""" -import numpy as np -import torch -from .base_model import BaseModel -from . import networks - - -class TemplateModel(BaseModel): - @staticmethod - def modify_commandline_options(parser, is_train=True): - """Add new model-specific options and rewrite default values for existing options. - - Parameters: - parser -- the option parser - is_train -- if it is training phase or test phase. You can use this flag to add training-specific or test-specific options. - - Returns: - the modified parser. - """ - parser.set_defaults(dataset_mode='aligned') # You can rewrite default values for this model. For example, this model usually uses aligned dataset as its dataset. - if is_train: - parser.add_argument('--lambda_regression', type=float, default=1.0, help='weight for the regression loss') # You can define new arguments for this model. - - return parser - - def __init__(self, opt): - """Initialize this model class. - - Parameters: - opt -- training/test options - - A few things can be done here. - - (required) call the initialization function of BaseModel - - define loss function, visualization images, model names, and optimizers - """ - BaseModel.__init__(self, opt) # call the initialization method of BaseModel - # specify the training losses you want to print out. The program will call base_model.get_current_losses to plot the losses to the console and save them to the disk. - self.loss_names = ['loss_G'] - # specify the images you want to save and display. The program will call base_model.get_current_visuals to save and display these images. - self.visual_names = ['data_A', 'data_B', 'output'] - # specify the models you want to save to the disk. The program will call base_model.save_networks and base_model.load_networks to save and load networks. - # you can use opt.isTrain to specify different behaviors for training and test. For example, some networks will not be used during test, and you don't need to load them. - self.model_names = ['G'] - # define networks; you can use opt.isTrain to specify different behaviors for training and test. - self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG, gpu_ids=self.gpu_ids) - if self.isTrain: # only defined during training time - # define your loss functions. You can use losses provided by torch.nn such as torch.nn.L1Loss. - # We also provide a GANLoss class "networks.GANLoss". self.criterionGAN = networks.GANLoss().to(self.device) - self.criterionLoss = torch.nn.L1Loss() - # define and initialize optimizers. You can define one optimizer for each network. - # If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example. - self.optimizer = torch.optim.Adam(self.netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999)) - self.optimizers = [self.optimizer] - - # Our program will automatically call to define schedulers, load networks, and print networks - - def set_input(self, input): - """Unpack input data from the dataloader and perform necessary pre-processing steps. 
- - Parameters: - input: a dictionary that contains the data itself and its metadata information. - """ - AtoB = self.opt.direction == 'AtoB' # use to swap data_A and data_B - self.data_A = input['A' if AtoB else 'B'].to(self.device) # get image data A - self.data_B = input['B' if AtoB else 'A'].to(self.device) # get image data B - self.image_paths = input['A_paths' if AtoB else 'B_paths'] # get image paths - - def forward(self): - """Run forward pass. This will be called by both functions and .""" - self.output = self.netG(self.data_A) # generate output image given the input data_A - - def backward(self): - """Calculate losses, gradients, and update network weights; called in every training iteration""" - # caculate the intermediate results if necessary; here self.output has been computed during function - # calculate loss given the input and intermediate results - self.loss_G = self.criterionLoss(self.output, self.data_B) * self.opt.lambda_regression - self.loss_G.backward() # calculate gradients of network G w.r.t. loss_G - - def optimize_parameters(self): - """Update network weights; it will be called in every training iteration.""" - self.forward() # first call forward to calculate intermediate results - self.optimizer.zero_grad() # clear network G's existing gradients - self.backward() # calculate gradients for network G - self.optimizer.step() # update gradients for network G diff --git a/spaces/inaccel/yolov3_adas_pruned_0_9/app.py b/spaces/inaccel/yolov3_adas_pruned_0_9/app.py deleted file mode 100644 index 72822be9be6ad0561b92b2a1e5d78fd440c75138..0000000000000000000000000000000000000000 --- a/spaces/inaccel/yolov3_adas_pruned_0_9/app.py +++ /dev/null @@ -1,15 +0,0 @@ -import flask -import os - -app = flask.Flask(__name__) - - -@app.route('/') -def index(): - return ''.format( - os.getenv('INACCEL_URL'), - flask.request.args.get('__dark-theme', 'false')) - - -if __name__ == '__main__': - app.run(host='0.0.0.0', port=7860) diff --git a/spaces/inflaton/learn-ai/server.py b/spaces/inflaton/learn-ai/server.py deleted file mode 100644 index 5abb33a64c0e94800c19bcd7f99aaa1d3eed4f51..0000000000000000000000000000000000000000 --- a/spaces/inflaton/learn-ai/server.py +++ /dev/null @@ -1,99 +0,0 @@ -"""Main entrypoint for the app.""" -import json -import os -from timeit import default_timer as timer -from typing import List, Optional - -from lcserve import serving -from pydantic import BaseModel - -from app_modules.init import app_init -from app_modules.llm_chat_chain import ChatChain -from app_modules.utils import print_llm_response - -llm_loader, qa_chain = app_init() - -chat_history_enabled = os.environ.get("CHAT_HISTORY_ENABLED") == "true" - -uuid_to_chat_chain_mapping = dict() - - -class ChatResponse(BaseModel): - """Chat response schema.""" - - token: Optional[str] = None - error: Optional[str] = None - sourceDocs: Optional[List] = None - - -def do_chat( - question: str, - history: Optional[List] = [], - chat_id: Optional[str] = None, - streaming_handler: any = None, -): - if chat_id is None: - chat_history = [] - if chat_history_enabled: - for element in history: - item = (element[0] or "", element[1] or "") - chat_history.append(item) - - start = timer() - result = qa_chain.call_chain( - {"question": question, "chat_history": chat_history}, streaming_handler - ) - end = timer() - print(f"Completed in {end - start:.3f}s") - - print(f"qa_chain result: {result}") - return result - else: - if chat_id in uuid_to_chat_chain_mapping: - chat = uuid_to_chat_chain_mapping[chat_id] - else: - 
chat = ChatChain(llm_loader) - uuid_to_chat_chain_mapping[chat_id] = chat - result = chat.call_chain({"question": question}, streaming_handler) - print(f"chat result: {result}") - return result - - -@serving(websocket=True) -def chat( - question: str, history: Optional[List] = [], chat_id: Optional[str] = None, **kwargs -) -> str: - print("question@chat:", question) - streaming_handler = kwargs.get("streaming_handler") - result = do_chat(question, history, chat_id, streaming_handler) - resp = ChatResponse( - sourceDocs=result["source_documents"] if chat_id is None else [] - ) - return json.dumps(resp.dict()) - - -@serving -def chat_sync( - question: str, history: Optional[List] = [], chat_id: Optional[str] = None, **kwargs -) -> str: - print("question@chat_sync:", question) - result = do_chat(question, history, chat_id, None) - return result["response"] - - -if __name__ == "__main__": - # print_llm_response(json.loads(chat("What's deep learning?", []))) - chat_start = timer() - chat_sync("what's deep learning?", chat_id="test_user") - chat_sync("more on finance", chat_id="test_user") - chat_sync("more on Sentiment analysis", chat_id="test_user") - chat_sync("Write the game 'snake' in python", chat_id="test_user") - # chat_sync("给我讲一个年轻人奋斗创业最终取得成功的故事。", chat_id="test_user") - # chat_sync("给这个故事起一个标题", chat_id="test_user") - chat_end = timer() - total_time = chat_end - chat_start - print(f"Total time used: {total_time:.3f} s") - print(f"Number of tokens generated: {llm_loader.streamer.total_tokens}") - print( - f"Average generation speed: {llm_loader.streamer.total_tokens / total_time:.3f} tokens/s" - ) diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Digital Juice Simplexity Collect Fixed.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Digital Juice Simplexity Collect Fixed.md deleted file mode 100644 index f24993366b0a81e83a0a555dd0c8b686d6f4e7d9..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Digital Juice Simplexity Collect Fixed.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Digital Juice Simplexity Collect


            Download File: https://urlin.us/2uEyuM



            -
            -Simplexity Collection 2 for After Effects | 432 MB Projects for After Effects | format: .djprojects | More than 20 different projects. 4d29de3e1b
            -
            -
            -

            diff --git a/spaces/inreVtussa/clothingai/Examples/Addmefast Bot Ultimate Point Generator.md b/spaces/inreVtussa/clothingai/Examples/Addmefast Bot Ultimate Point Generator.md deleted file mode 100644 index 51b585c947abab9d3f7b8677530b41520e082b17..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Addmefast Bot Ultimate Point Generator.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Addmefast Bot Ultimate Point Generator


            Download File ✒ ✒ ✒ https://tiurll.com/2uClIA



            -
            -One of the most widely used apps for this purpose is AddMeFast. It is a web application that increases social signals through a points system, rewarding you as your social activity grows. This is easiest to explain with an example: if you want five friends, you can ask them to subscribe to AddMeFast and get 5 points; if you want ten friends, you ask them to subscribe and get 10 points; and if you want a hundred friends, you ask them to subscribe and get 100 points. AddMeFast has other benefits in addition to points. 8a78ff9644
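As a toy illustration of the one-point-per-follower arithmetic in the example above (a reader's sketch only, not AddMeFast's actual pricing, which varies by network and campaign settings), the cost of a campaign could be estimated like this:

```python
# Toy illustration of the example above: one point per requested follower.
# The one-point rate is an assumption for illustration only; real AddMeFast
# point costs vary by network and campaign settings.
POINTS_PER_FOLLOWER = 1


def points_needed(followers: int, rate: int = POINTS_PER_FOLLOWER) -> int:
    return followers * rate


for target in (5, 10, 100):
    print(f"{target} followers -> {points_needed(target)} points")
```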
            -
            -
            -

            diff --git a/spaces/inreVtussa/clothingai/Examples/Crack Autodata 316 EXCLUSIVE.md b/spaces/inreVtussa/clothingai/Examples/Crack Autodata 316 EXCLUSIVE.md deleted file mode 100644 index e96e1b09ca5c5930d7feda25dc447caa45919a75..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Crack Autodata 316 EXCLUSIVE.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Crack Autodata 316


            Download File ✔✔✔ https://tiurll.com/2uCkCD



            -
            -What is the fuel economy of the BMW 3 Series Compact (E46, facelift 2001) 316i (116 hp) Automatic? 92 mpg UK 11.6 km/hp 8a78ff9644
            -
            -
            -

            diff --git a/spaces/jayyd/Guess_famous_personalities_using_GPT-3/README.md b/spaces/jayyd/Guess_famous_personalities_using_GPT-3/README.md deleted file mode 100644 index 8967ef46f6fef2bc0f61e233f139432aa078a954..0000000000000000000000000000000000000000 --- a/spaces/jayyd/Guess_famous_personalities_using_GPT-3/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Guess Personality Using GPT-3 -emoji: 👁 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: openrail -duplicated_from: djaag/Guess_personality_using_GPT-3 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jiejiejie0420/bingo/src/components/chat-header.tsx b/spaces/jiejiejie0420/bingo/src/components/chat-header.tsx deleted file mode 100644 index c6664b8dee61179f844d45c5bd650518fc2cb4c2..0000000000000000000000000000000000000000 --- a/spaces/jiejiejie0420/bingo/src/components/chat-header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import LogoIcon from '@/assets/images/logo.svg' -import Image from 'next/image' - -export function ChatHeader() { - return ( -
            - logo -
            欢迎使用新必应
            -
            由 AI 支持的网页版 Copilot
            -
            - ) -} diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/connector.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/connector.py deleted file mode 100644 index 2499a2dabe92a14413d7f4023477d4b9803da9bd..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/connector.py +++ /dev/null @@ -1,1456 +0,0 @@ -import asyncio -import functools -import random -import sys -import traceback -import warnings -from collections import defaultdict, deque -from contextlib import suppress -from http.cookies import SimpleCookie -from itertools import cycle, islice -from time import monotonic -from types import TracebackType -from typing import ( - TYPE_CHECKING, - Any, - Awaitable, - Callable, - DefaultDict, - Dict, - Iterator, - List, - Optional, - Set, - Tuple, - Type, - Union, - cast, -) - -import attr - -from . import hdrs, helpers -from .abc import AbstractResolver -from .client_exceptions import ( - ClientConnectionError, - ClientConnectorCertificateError, - ClientConnectorError, - ClientConnectorSSLError, - ClientHttpProxyError, - ClientProxyConnectionError, - ServerFingerprintMismatch, - UnixClientConnectorError, - cert_errors, - ssl_errors, -) -from .client_proto import ResponseHandler -from .client_reqrep import ClientRequest, Fingerprint, _merge_ssl_params -from .helpers import ( - PY_36, - ceil_timeout, - get_running_loop, - is_ip_address, - noop, - sentinel, -) -from .http import RESPONSES -from .locks import EventResultOrError -from .resolver import DefaultResolver - -try: - import ssl - - SSLContext = ssl.SSLContext -except ImportError: # pragma: no cover - ssl = None # type: ignore[assignment] - SSLContext = object # type: ignore[misc,assignment] - - -__all__ = ("BaseConnector", "TCPConnector", "UnixConnector", "NamedPipeConnector") - - -if TYPE_CHECKING: # pragma: no cover - from .client import ClientTimeout - from .client_reqrep import ConnectionKey - from .tracing import Trace - - -class _DeprecationWaiter: - __slots__ = ("_awaitable", "_awaited") - - def __init__(self, awaitable: Awaitable[Any]) -> None: - self._awaitable = awaitable - self._awaited = False - - def __await__(self) -> Any: - self._awaited = True - return self._awaitable.__await__() - - def __del__(self) -> None: - if not self._awaited: - warnings.warn( - "Connector.close() is a coroutine, " - "please use await connector.close()", - DeprecationWarning, - ) - - -class Connection: - - _source_traceback = None - _transport = None - - def __init__( - self, - connector: "BaseConnector", - key: "ConnectionKey", - protocol: ResponseHandler, - loop: asyncio.AbstractEventLoop, - ) -> None: - self._key = key - self._connector = connector - self._loop = loop - self._protocol: Optional[ResponseHandler] = protocol - self._callbacks: List[Callable[[], None]] = [] - - if loop.get_debug(): - self._source_traceback = traceback.extract_stack(sys._getframe(1)) - - def __repr__(self) -> str: - return f"Connection<{self._key}>" - - def __del__(self, _warnings: Any = warnings) -> None: - if self._protocol is not None: - if PY_36: - kwargs = {"source": self} - else: - kwargs = {} - _warnings.warn(f"Unclosed connection {self!r}", ResourceWarning, **kwargs) - if self._loop.is_closed(): - return - - self._connector._release(self._key, self._protocol, should_close=True) - - context = {"client_connection": self, "message": "Unclosed connection"} - if self._source_traceback is not None: - 
context["source_traceback"] = self._source_traceback - self._loop.call_exception_handler(context) - - @property - def loop(self) -> asyncio.AbstractEventLoop: - warnings.warn( - "connector.loop property is deprecated", DeprecationWarning, stacklevel=2 - ) - return self._loop - - @property - def transport(self) -> Optional[asyncio.Transport]: - if self._protocol is None: - return None - return self._protocol.transport - - @property - def protocol(self) -> Optional[ResponseHandler]: - return self._protocol - - def add_callback(self, callback: Callable[[], None]) -> None: - if callback is not None: - self._callbacks.append(callback) - - def _notify_release(self) -> None: - callbacks, self._callbacks = self._callbacks[:], [] - - for cb in callbacks: - with suppress(Exception): - cb() - - def close(self) -> None: - self._notify_release() - - if self._protocol is not None: - self._connector._release(self._key, self._protocol, should_close=True) - self._protocol = None - - def release(self) -> None: - self._notify_release() - - if self._protocol is not None: - self._connector._release( - self._key, self._protocol, should_close=self._protocol.should_close - ) - self._protocol = None - - @property - def closed(self) -> bool: - return self._protocol is None or not self._protocol.is_connected() - - -class _TransportPlaceholder: - """placeholder for BaseConnector.connect function""" - - def close(self) -> None: - pass - - -class BaseConnector: - """Base connector class. - - keepalive_timeout - (optional) Keep-alive timeout. - force_close - Set to True to force close and do reconnect - after each request (and between redirects). - limit - The total number of simultaneous connections. - limit_per_host - Number of simultaneous connections to one host. - enable_cleanup_closed - Enables clean-up closed ssl transports. - Disabled by default. - loop - Optional event loop. 
- """ - - _closed = True # prevent AttributeError in __del__ if ctor was failed - _source_traceback = None - - # abort transport after 2 seconds (cleanup broken connections) - _cleanup_closed_period = 2.0 - - def __init__( - self, - *, - keepalive_timeout: Union[object, None, float] = sentinel, - force_close: bool = False, - limit: int = 100, - limit_per_host: int = 0, - enable_cleanup_closed: bool = False, - loop: Optional[asyncio.AbstractEventLoop] = None, - ) -> None: - - if force_close: - if keepalive_timeout is not None and keepalive_timeout is not sentinel: - raise ValueError( - "keepalive_timeout cannot " "be set if force_close is True" - ) - else: - if keepalive_timeout is sentinel: - keepalive_timeout = 15.0 - - loop = get_running_loop(loop) - - self._closed = False - if loop.get_debug(): - self._source_traceback = traceback.extract_stack(sys._getframe(1)) - - self._conns: Dict[ConnectionKey, List[Tuple[ResponseHandler, float]]] = {} - self._limit = limit - self._limit_per_host = limit_per_host - self._acquired: Set[ResponseHandler] = set() - self._acquired_per_host: DefaultDict[ - ConnectionKey, Set[ResponseHandler] - ] = defaultdict(set) - self._keepalive_timeout = cast(float, keepalive_timeout) - self._force_close = force_close - - # {host_key: FIFO list of waiters} - self._waiters = defaultdict(deque) # type: ignore[var-annotated] - - self._loop = loop - self._factory = functools.partial(ResponseHandler, loop=loop) - - self.cookies: SimpleCookie[str] = SimpleCookie() - - # start keep-alive connection cleanup task - self._cleanup_handle: Optional[asyncio.TimerHandle] = None - - # start cleanup closed transports task - self._cleanup_closed_handle: Optional[asyncio.TimerHandle] = None - self._cleanup_closed_disabled = not enable_cleanup_closed - self._cleanup_closed_transports: List[Optional[asyncio.Transport]] = [] - self._cleanup_closed() - - def __del__(self, _warnings: Any = warnings) -> None: - if self._closed: - return - if not self._conns: - return - - conns = [repr(c) for c in self._conns.values()] - - self._close() - - if PY_36: - kwargs = {"source": self} - else: - kwargs = {} - _warnings.warn(f"Unclosed connector {self!r}", ResourceWarning, **kwargs) - context = { - "connector": self, - "connections": conns, - "message": "Unclosed connector", - } - if self._source_traceback is not None: - context["source_traceback"] = self._source_traceback - self._loop.call_exception_handler(context) - - def __enter__(self) -> "BaseConnector": - warnings.warn( - '"with Connector():" is deprecated, ' - 'use "async with Connector():" instead', - DeprecationWarning, - ) - return self - - def __exit__(self, *exc: Any) -> None: - self._close() - - async def __aenter__(self) -> "BaseConnector": - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]] = None, - exc_value: Optional[BaseException] = None, - exc_traceback: Optional[TracebackType] = None, - ) -> None: - await self.close() - - @property - def force_close(self) -> bool: - """Ultimately close connection on releasing if True.""" - return self._force_close - - @property - def limit(self) -> int: - """The total number for simultaneous connections. - - If limit is 0 the connector has no limit. - The default limit size is 100. - """ - return self._limit - - @property - def limit_per_host(self) -> int: - """The limit for simultaneous connections to the same endpoint. - - Endpoints are the same if they are have equal - (host, port, is_ssl) triple. 
- """ - return self._limit_per_host - - def _cleanup(self) -> None: - """Cleanup unused transports.""" - if self._cleanup_handle: - self._cleanup_handle.cancel() - # _cleanup_handle should be unset, otherwise _release() will not - # recreate it ever! - self._cleanup_handle = None - - now = self._loop.time() - timeout = self._keepalive_timeout - - if self._conns: - connections = {} - deadline = now - timeout - for key, conns in self._conns.items(): - alive = [] - for proto, use_time in conns: - if proto.is_connected(): - if use_time - deadline < 0: - transport = proto.transport - proto.close() - if key.is_ssl and not self._cleanup_closed_disabled: - self._cleanup_closed_transports.append(transport) - else: - alive.append((proto, use_time)) - else: - transport = proto.transport - proto.close() - if key.is_ssl and not self._cleanup_closed_disabled: - self._cleanup_closed_transports.append(transport) - - if alive: - connections[key] = alive - - self._conns = connections - - if self._conns: - self._cleanup_handle = helpers.weakref_handle( - self, "_cleanup", timeout, self._loop - ) - - def _drop_acquired_per_host( - self, key: "ConnectionKey", val: ResponseHandler - ) -> None: - acquired_per_host = self._acquired_per_host - if key not in acquired_per_host: - return - conns = acquired_per_host[key] - conns.remove(val) - if not conns: - del self._acquired_per_host[key] - - def _cleanup_closed(self) -> None: - """Double confirmation for transport close. - - Some broken ssl servers may leave socket open without proper close. - """ - if self._cleanup_closed_handle: - self._cleanup_closed_handle.cancel() - - for transport in self._cleanup_closed_transports: - if transport is not None: - transport.abort() - - self._cleanup_closed_transports = [] - - if not self._cleanup_closed_disabled: - self._cleanup_closed_handle = helpers.weakref_handle( - self, "_cleanup_closed", self._cleanup_closed_period, self._loop - ) - - def close(self) -> Awaitable[None]: - """Close all opened transports.""" - self._close() - return _DeprecationWaiter(noop()) - - def _close(self) -> None: - if self._closed: - return - - self._closed = True - - try: - if self._loop.is_closed(): - return - - # cancel cleanup task - if self._cleanup_handle: - self._cleanup_handle.cancel() - - # cancel cleanup close task - if self._cleanup_closed_handle: - self._cleanup_closed_handle.cancel() - - for data in self._conns.values(): - for proto, t0 in data: - proto.close() - - for proto in self._acquired: - proto.close() - - for transport in self._cleanup_closed_transports: - if transport is not None: - transport.abort() - - finally: - self._conns.clear() - self._acquired.clear() - self._waiters.clear() - self._cleanup_handle = None - self._cleanup_closed_transports.clear() - self._cleanup_closed_handle = None - - @property - def closed(self) -> bool: - """Is connector closed. - - A readonly property. - """ - return self._closed - - def _available_connections(self, key: "ConnectionKey") -> int: - """ - Return number of available connections. - - The limit, limit_per_host and the connection key are taken into account. - - If it returns less than 1 means that there are no connections - available. 
- """ - if self._limit: - # total calc available connections - available = self._limit - len(self._acquired) - - # check limit per host - if ( - self._limit_per_host - and available > 0 - and key in self._acquired_per_host - ): - acquired = self._acquired_per_host.get(key) - assert acquired is not None - available = self._limit_per_host - len(acquired) - - elif self._limit_per_host and key in self._acquired_per_host: - # check limit per host - acquired = self._acquired_per_host.get(key) - assert acquired is not None - available = self._limit_per_host - len(acquired) - else: - available = 1 - - return available - - async def connect( - self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" - ) -> Connection: - """Get from pool or create new connection.""" - key = req.connection_key - available = self._available_connections(key) - - # Wait if there are no available connections or if there are/were - # waiters (i.e. don't steal connection from a waiter about to wake up) - if available <= 0 or key in self._waiters: - fut = self._loop.create_future() - - # This connection will now count towards the limit. - self._waiters[key].append(fut) - - if traces: - for trace in traces: - await trace.send_connection_queued_start() - - try: - await fut - except BaseException as e: - if key in self._waiters: - # remove a waiter even if it was cancelled, normally it's - # removed when it's notified - try: - self._waiters[key].remove(fut) - except ValueError: # fut may no longer be in list - pass - - raise e - finally: - if key in self._waiters and not self._waiters[key]: - del self._waiters[key] - - if traces: - for trace in traces: - await trace.send_connection_queued_end() - - proto = self._get(key) - if proto is None: - placeholder = cast(ResponseHandler, _TransportPlaceholder()) - self._acquired.add(placeholder) - self._acquired_per_host[key].add(placeholder) - - if traces: - for trace in traces: - await trace.send_connection_create_start() - - try: - proto = await self._create_connection(req, traces, timeout) - if self._closed: - proto.close() - raise ClientConnectionError("Connector is closed.") - except BaseException: - if not self._closed: - self._acquired.remove(placeholder) - self._drop_acquired_per_host(key, placeholder) - self._release_waiter() - raise - else: - if not self._closed: - self._acquired.remove(placeholder) - self._drop_acquired_per_host(key, placeholder) - - if traces: - for trace in traces: - await trace.send_connection_create_end() - else: - if traces: - # Acquire the connection to prevent race conditions with limits - placeholder = cast(ResponseHandler, _TransportPlaceholder()) - self._acquired.add(placeholder) - self._acquired_per_host[key].add(placeholder) - for trace in traces: - await trace.send_connection_reuseconn() - self._acquired.remove(placeholder) - self._drop_acquired_per_host(key, placeholder) - - self._acquired.add(proto) - self._acquired_per_host[key].add(proto) - return Connection(self, key, proto, self._loop) - - def _get(self, key: "ConnectionKey") -> Optional[ResponseHandler]: - try: - conns = self._conns[key] - except KeyError: - return None - - t1 = self._loop.time() - while conns: - proto, t0 = conns.pop() - if proto.is_connected(): - if t1 - t0 > self._keepalive_timeout: - transport = proto.transport - proto.close() - # only for SSL transports - if key.is_ssl and not self._cleanup_closed_disabled: - self._cleanup_closed_transports.append(transport) - else: - if not conns: - # The very last connection was reclaimed: drop the key - del 
self._conns[key] - return proto - else: - transport = proto.transport - proto.close() - if key.is_ssl and not self._cleanup_closed_disabled: - self._cleanup_closed_transports.append(transport) - - # No more connections: drop the key - del self._conns[key] - return None - - def _release_waiter(self) -> None: - """ - Iterates over all waiters until one to be released is found. - - The one to be released is not finsihed and - belongs to a host that has available connections. - """ - if not self._waiters: - return - - # Having the dict keys ordered this avoids to iterate - # at the same order at each call. - queues = list(self._waiters.keys()) - random.shuffle(queues) - - for key in queues: - if self._available_connections(key) < 1: - continue - - waiters = self._waiters[key] - while waiters: - waiter = waiters.popleft() - if not waiter.done(): - waiter.set_result(None) - return - - def _release_acquired(self, key: "ConnectionKey", proto: ResponseHandler) -> None: - if self._closed: - # acquired connection is already released on connector closing - return - - try: - self._acquired.remove(proto) - self._drop_acquired_per_host(key, proto) - except KeyError: # pragma: no cover - # this may be result of undetermenistic order of objects - # finalization due garbage collection. - pass - else: - self._release_waiter() - - def _release( - self, - key: "ConnectionKey", - protocol: ResponseHandler, - *, - should_close: bool = False, - ) -> None: - if self._closed: - # acquired connection is already released on connector closing - return - - self._release_acquired(key, protocol) - - if self._force_close: - should_close = True - - if should_close or protocol.should_close: - transport = protocol.transport - protocol.close() - - if key.is_ssl and not self._cleanup_closed_disabled: - self._cleanup_closed_transports.append(transport) - else: - conns = self._conns.get(key) - if conns is None: - conns = self._conns[key] = [] - conns.append((protocol, self._loop.time())) - - if self._cleanup_handle is None: - self._cleanup_handle = helpers.weakref_handle( - self, "_cleanup", self._keepalive_timeout, self._loop - ) - - async def _create_connection( - self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" - ) -> ResponseHandler: - raise NotImplementedError() - - -class _DNSCacheTable: - def __init__(self, ttl: Optional[float] = None) -> None: - self._addrs_rr: Dict[Tuple[str, int], Tuple[Iterator[Dict[str, Any]], int]] = {} - self._timestamps: Dict[Tuple[str, int], float] = {} - self._ttl = ttl - - def __contains__(self, host: object) -> bool: - return host in self._addrs_rr - - def add(self, key: Tuple[str, int], addrs: List[Dict[str, Any]]) -> None: - self._addrs_rr[key] = (cycle(addrs), len(addrs)) - - if self._ttl: - self._timestamps[key] = monotonic() - - def remove(self, key: Tuple[str, int]) -> None: - self._addrs_rr.pop(key, None) - - if self._ttl: - self._timestamps.pop(key, None) - - def clear(self) -> None: - self._addrs_rr.clear() - self._timestamps.clear() - - def next_addrs(self, key: Tuple[str, int]) -> List[Dict[str, Any]]: - loop, length = self._addrs_rr[key] - addrs = list(islice(loop, length)) - # Consume one more element to shift internal state of `cycle` - next(loop) - return addrs - - def expired(self, key: Tuple[str, int]) -> bool: - if self._ttl is None: - return False - - return self._timestamps[key] + self._ttl < monotonic() - - -class TCPConnector(BaseConnector): - """TCP connector. - - verify_ssl - Set to True to check ssl certifications. 
- fingerprint - Pass the binary sha256 - digest of the expected certificate in DER format to verify - that the certificate the server presents matches. See also - https://en.wikipedia.org/wiki/Transport_Layer_Security#Certificate_pinning - resolver - Enable DNS lookups and use this - resolver - use_dns_cache - Use memory cache for DNS lookups. - ttl_dns_cache - Max seconds having cached a DNS entry, None forever. - family - socket address family - local_addr - local tuple of (host, port) to bind socket to - - keepalive_timeout - (optional) Keep-alive timeout. - force_close - Set to True to force close and do reconnect - after each request (and between redirects). - limit - The total number of simultaneous connections. - limit_per_host - Number of simultaneous connections to one host. - enable_cleanup_closed - Enables clean-up closed ssl transports. - Disabled by default. - loop - Optional event loop. - """ - - def __init__( - self, - *, - verify_ssl: bool = True, - fingerprint: Optional[bytes] = None, - use_dns_cache: bool = True, - ttl_dns_cache: Optional[int] = 10, - family: int = 0, - ssl_context: Optional[SSLContext] = None, - ssl: Union[None, bool, Fingerprint, SSLContext] = None, - local_addr: Optional[Tuple[str, int]] = None, - resolver: Optional[AbstractResolver] = None, - keepalive_timeout: Union[None, float, object] = sentinel, - force_close: bool = False, - limit: int = 100, - limit_per_host: int = 0, - enable_cleanup_closed: bool = False, - loop: Optional[asyncio.AbstractEventLoop] = None, - ): - super().__init__( - keepalive_timeout=keepalive_timeout, - force_close=force_close, - limit=limit, - limit_per_host=limit_per_host, - enable_cleanup_closed=enable_cleanup_closed, - loop=loop, - ) - - self._ssl = _merge_ssl_params(ssl, verify_ssl, ssl_context, fingerprint) - if resolver is None: - resolver = DefaultResolver(loop=self._loop) - self._resolver = resolver - - self._use_dns_cache = use_dns_cache - self._cached_hosts = _DNSCacheTable(ttl=ttl_dns_cache) - self._throttle_dns_events: Dict[Tuple[str, int], EventResultOrError] = {} - self._family = family - self._local_addr = local_addr - - def close(self) -> Awaitable[None]: - """Close all ongoing DNS calls.""" - for ev in self._throttle_dns_events.values(): - ev.cancel() - - return super().close() - - @property - def family(self) -> int: - """Socket family like AF_INET.""" - return self._family - - @property - def use_dns_cache(self) -> bool: - """True if local DNS caching is enabled.""" - return self._use_dns_cache - - def clear_dns_cache( - self, host: Optional[str] = None, port: Optional[int] = None - ) -> None: - """Remove specified host/port or clear all dns local cache.""" - if host is not None and port is not None: - self._cached_hosts.remove((host, port)) - elif host is not None or port is not None: - raise ValueError("either both host and port " "or none of them are allowed") - else: - self._cached_hosts.clear() - - async def _resolve_host( - self, host: str, port: int, traces: Optional[List["Trace"]] = None - ) -> List[Dict[str, Any]]: - if is_ip_address(host): - return [ - { - "hostname": host, - "host": host, - "port": port, - "family": self._family, - "proto": 0, - "flags": 0, - } - ] - - if not self._use_dns_cache: - - if traces: - for trace in traces: - await trace.send_dns_resolvehost_start(host) - - res = await self._resolver.resolve(host, port, family=self._family) - - if traces: - for trace in traces: - await trace.send_dns_resolvehost_end(host) - - return res - - key = (host, port) - - if (key in 
self._cached_hosts) and (not self._cached_hosts.expired(key)): - # get result early, before any await (#4014) - result = self._cached_hosts.next_addrs(key) - - if traces: - for trace in traces: - await trace.send_dns_cache_hit(host) - return result - - if key in self._throttle_dns_events: - # get event early, before any await (#4014) - event = self._throttle_dns_events[key] - if traces: - for trace in traces: - await trace.send_dns_cache_hit(host) - await event.wait() - else: - # update dict early, before any await (#4014) - self._throttle_dns_events[key] = EventResultOrError(self._loop) - if traces: - for trace in traces: - await trace.send_dns_cache_miss(host) - try: - - if traces: - for trace in traces: - await trace.send_dns_resolvehost_start(host) - - addrs = await self._resolver.resolve(host, port, family=self._family) - if traces: - for trace in traces: - await trace.send_dns_resolvehost_end(host) - - self._cached_hosts.add(key, addrs) - self._throttle_dns_events[key].set() - except BaseException as e: - # any DNS exception, independently of the implementation - # is set for the waiters to raise the same exception. - self._throttle_dns_events[key].set(exc=e) - raise - finally: - self._throttle_dns_events.pop(key) - - return self._cached_hosts.next_addrs(key) - - async def _create_connection( - self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" - ) -> ResponseHandler: - """Create connection. - - Has same keyword arguments as BaseEventLoop.create_connection. - """ - if req.proxy: - _, proto = await self._create_proxy_connection(req, traces, timeout) - else: - _, proto = await self._create_direct_connection(req, traces, timeout) - - return proto - - @staticmethod - @functools.lru_cache(None) - def _make_ssl_context(verified: bool) -> SSLContext: - if verified: - return ssl.create_default_context() - else: - sslcontext = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) - sslcontext.options |= ssl.OP_NO_SSLv2 - sslcontext.options |= ssl.OP_NO_SSLv3 - sslcontext.check_hostname = False - sslcontext.verify_mode = ssl.CERT_NONE - try: - sslcontext.options |= ssl.OP_NO_COMPRESSION - except AttributeError as attr_err: - warnings.warn( - "{!s}: The Python interpreter is compiled " - "against OpenSSL < 1.0.0. Ref: " - "https://docs.python.org/3/library/ssl.html" - "#ssl.OP_NO_COMPRESSION".format(attr_err), - ) - sslcontext.set_default_verify_paths() - return sslcontext - - def _get_ssl_context(self, req: "ClientRequest") -> Optional[SSLContext]: - """Logic to get the correct SSL context - - 0. if req.ssl is false, return None - - 1. if ssl_context is specified in req, use it - 2. if _ssl_context is specified in self, use it - 3. otherwise: - 1. if verify_ssl is not specified in req, use self.ssl_context - (will generate a default context according to self.verify_ssl) - 2. if verify_ssl is True in req, generate a default SSL context - 3. 
if verify_ssl is False in req, generate a SSL context that - won't verify - """ - if req.is_ssl(): - if ssl is None: # pragma: no cover - raise RuntimeError("SSL is not supported.") - sslcontext = req.ssl - if isinstance(sslcontext, ssl.SSLContext): - return sslcontext - if sslcontext is not None: - # not verified or fingerprinted - return self._make_ssl_context(False) - sslcontext = self._ssl - if isinstance(sslcontext, ssl.SSLContext): - return sslcontext - if sslcontext is not None: - # not verified or fingerprinted - return self._make_ssl_context(False) - return self._make_ssl_context(True) - else: - return None - - def _get_fingerprint(self, req: "ClientRequest") -> Optional["Fingerprint"]: - ret = req.ssl - if isinstance(ret, Fingerprint): - return ret - ret = self._ssl - if isinstance(ret, Fingerprint): - return ret - return None - - async def _wrap_create_connection( - self, - *args: Any, - req: "ClientRequest", - timeout: "ClientTimeout", - client_error: Type[Exception] = ClientConnectorError, - **kwargs: Any, - ) -> Tuple[asyncio.Transport, ResponseHandler]: - try: - async with ceil_timeout(timeout.sock_connect): - return await self._loop.create_connection(*args, **kwargs) # type: ignore[return-value] # noqa - except cert_errors as exc: - raise ClientConnectorCertificateError(req.connection_key, exc) from exc - except ssl_errors as exc: - raise ClientConnectorSSLError(req.connection_key, exc) from exc - except OSError as exc: - if exc.errno is None and isinstance(exc, asyncio.TimeoutError): - raise - raise client_error(req.connection_key, exc) from exc - - def _fail_on_no_start_tls(self, req: "ClientRequest") -> None: - """Raise a :py:exc:`RuntimeError` on missing ``start_tls()``. - - One case is that :py:meth:`asyncio.loop.start_tls` is not yet - implemented under Python 3.6. It is necessary for TLS-in-TLS so - that it is possible to send HTTPS queries through HTTPS proxies. - - This doesn't affect regular HTTP requests, though. - """ - if not req.is_ssl(): - return - - proxy_url = req.proxy - assert proxy_url is not None - if proxy_url.scheme != "https": - return - - self._check_loop_for_start_tls() - - def _check_loop_for_start_tls(self) -> None: - try: - self._loop.start_tls - except AttributeError as attr_exc: - raise RuntimeError( - "An HTTPS request is being sent through an HTTPS proxy. " - "This needs support for TLS in TLS but it is not implemented " - "in your runtime for the stdlib asyncio.\n\n" - "Please upgrade to Python 3.7 or higher. For more details, " - "please see:\n" - "* https://bugs.python.org/issue37179\n" - "* https://github.com/python/cpython/pull/28073\n" - "* https://docs.aiohttp.org/en/stable/" - "client_advanced.html#proxy-support\n" - "* https://github.com/aio-libs/aiohttp/discussions/6044\n", - ) from attr_exc - - def _loop_supports_start_tls(self) -> bool: - try: - self._check_loop_for_start_tls() - except RuntimeError: - return False - else: - return True - - def _warn_about_tls_in_tls( - self, - underlying_transport: asyncio.Transport, - req: "ClientRequest", - ) -> None: - """Issue a warning if the requested URL has HTTPS scheme.""" - if req.request_info.url.scheme != "https": - return - - asyncio_supports_tls_in_tls = getattr( - underlying_transport, - "_start_tls_compatible", - False, - ) - - if asyncio_supports_tls_in_tls: - return - - warnings.warn( - "An HTTPS request is being sent through an HTTPS proxy. " - "This support for TLS in TLS is known to be disabled " - "in the stdlib asyncio. 
This is why you'll probably see " - "an error in the log below.\n\n" - "It is possible to enable it via monkeypatching under " - "Python 3.7 or higher. For more details, see:\n" - "* https://bugs.python.org/issue37179\n" - "* https://github.com/python/cpython/pull/28073\n\n" - "You can temporarily patch this as follows:\n" - "* https://docs.aiohttp.org/en/stable/client_advanced.html#proxy-support\n" - "* https://github.com/aio-libs/aiohttp/discussions/6044\n", - RuntimeWarning, - source=self, - # Why `4`? At least 3 of the calls in the stack originate - # from the methods in this class. - stacklevel=3, - ) - - async def _start_tls_connection( - self, - underlying_transport: asyncio.Transport, - req: "ClientRequest", - timeout: "ClientTimeout", - client_error: Type[Exception] = ClientConnectorError, - ) -> Tuple[asyncio.BaseTransport, ResponseHandler]: - """Wrap the raw TCP transport with TLS.""" - tls_proto = self._factory() # Create a brand new proto for TLS - - # Safety of the `cast()` call here is based on the fact that - # internally `_get_ssl_context()` only returns `None` when - # `req.is_ssl()` evaluates to `False` which is never gonna happen - # in this code path. Of course, it's rather fragile - # maintainability-wise but this is to be solved separately. - sslcontext = cast(ssl.SSLContext, self._get_ssl_context(req)) - - try: - async with ceil_timeout(timeout.sock_connect): - try: - tls_transport = await self._loop.start_tls( - underlying_transport, - tls_proto, - sslcontext, - server_hostname=req.host, - ssl_handshake_timeout=timeout.total, - ) - except BaseException: - # We need to close the underlying transport since - # `start_tls()` probably failed before it had a - # chance to do this: - underlying_transport.close() - raise - except cert_errors as exc: - raise ClientConnectorCertificateError(req.connection_key, exc) from exc - except ssl_errors as exc: - raise ClientConnectorSSLError(req.connection_key, exc) from exc - except OSError as exc: - if exc.errno is None and isinstance(exc, asyncio.TimeoutError): - raise - raise client_error(req.connection_key, exc) from exc - except TypeError as type_err: - # Example cause looks like this: - # TypeError: transport is not supported by start_tls() - - raise ClientConnectionError( - "Cannot initialize a TLS-in-TLS connection to host " - f"{req.host!s}:{req.port:d} through an underlying connection " - f"to an HTTPS proxy {req.proxy!s} ssl:{req.ssl or 'default'} " - f"[{type_err!s}]" - ) from type_err - else: - if tls_transport is None: - msg = "Failed to start TLS (possibly caused by closing transport)" - raise client_error(req.connection_key, OSError(msg)) - tls_proto.connection_made( - tls_transport - ) # Kick the state machine of the new TLS protocol - - return tls_transport, tls_proto - - async def _create_direct_connection( - self, - req: "ClientRequest", - traces: List["Trace"], - timeout: "ClientTimeout", - *, - client_error: Type[Exception] = ClientConnectorError, - ) -> Tuple[asyncio.Transport, ResponseHandler]: - sslcontext = self._get_ssl_context(req) - fingerprint = self._get_fingerprint(req) - - host = req.url.raw_host - assert host is not None - port = req.port - assert port is not None - host_resolved = asyncio.ensure_future( - self._resolve_host(host, port, traces=traces), loop=self._loop - ) - try: - # Cancelling this lookup should not cancel the underlying lookup - # or else the cancel event will get broadcast to all the waiters - # across all connections. 
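
The comment above is why the host lookup is wrapped in `asyncio.shield()`: one caller timing out must not cancel a resolution that other waiters are sharing. A minimal standalone sketch of that behaviour (not aiohttp code; the address and delay are made up):

```python
import asyncio

async def slow_lookup():
    # Stand-in for a shared DNS resolution that several requests wait on.
    await asyncio.sleep(0.2)
    return ["93.184.216.34"]

async def main():
    lookup = asyncio.ensure_future(slow_lookup())
    try:
        # The outer wait is cancelled by the timeout...
        await asyncio.wait_for(asyncio.shield(lookup), timeout=0.05)
    except asyncio.TimeoutError:
        pass
    # ...but the shielded task keeps running and still yields a result.
    print(await lookup)

asyncio.run(main())
```
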
- hosts = await asyncio.shield(host_resolved) - except asyncio.CancelledError: - - def drop_exception(fut: "asyncio.Future[List[Dict[str, Any]]]") -> None: - with suppress(Exception, asyncio.CancelledError): - fut.result() - - host_resolved.add_done_callback(drop_exception) - raise - except OSError as exc: - if exc.errno is None and isinstance(exc, asyncio.TimeoutError): - raise - # in case of proxy it is not ClientProxyConnectionError - # it is problem of resolving proxy ip itself - raise ClientConnectorError(req.connection_key, exc) from exc - - last_exc: Optional[Exception] = None - - for hinfo in hosts: - host = hinfo["host"] - port = hinfo["port"] - - try: - transp, proto = await self._wrap_create_connection( - self._factory, - host, - port, - timeout=timeout, - ssl=sslcontext, - family=hinfo["family"], - proto=hinfo["proto"], - flags=hinfo["flags"], - server_hostname=hinfo["hostname"] if sslcontext else None, - local_addr=self._local_addr, - req=req, - client_error=client_error, - ) - except ClientConnectorError as exc: - last_exc = exc - continue - - if req.is_ssl() and fingerprint: - try: - fingerprint.check(transp) - except ServerFingerprintMismatch as exc: - transp.close() - if not self._cleanup_closed_disabled: - self._cleanup_closed_transports.append(transp) - last_exc = exc - continue - - return transp, proto - else: - assert last_exc is not None - raise last_exc - - async def _create_proxy_connection( - self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" - ) -> Tuple[asyncio.BaseTransport, ResponseHandler]: - self._fail_on_no_start_tls(req) - runtime_has_start_tls = self._loop_supports_start_tls() - - headers: Dict[str, str] = {} - if req.proxy_headers is not None: - headers = req.proxy_headers # type: ignore[assignment] - headers[hdrs.HOST] = req.headers[hdrs.HOST] - - url = req.proxy - assert url is not None - proxy_req = ClientRequest( - hdrs.METH_GET, - url, - headers=headers, - auth=req.proxy_auth, - loop=self._loop, - ssl=req.ssl, - ) - - # create connection to proxy server - transport, proto = await self._create_direct_connection( - proxy_req, [], timeout, client_error=ClientProxyConnectionError - ) - - # Many HTTP proxies has buggy keepalive support. Let's not - # reuse connection but close it after processing every - # response. - proto.force_close() - - auth = proxy_req.headers.pop(hdrs.AUTHORIZATION, None) - if auth is not None: - if not req.is_ssl(): - req.headers[hdrs.PROXY_AUTHORIZATION] = auth - else: - proxy_req.headers[hdrs.PROXY_AUTHORIZATION] = auth - - if req.is_ssl(): - if runtime_has_start_tls: - self._warn_about_tls_in_tls(transport, req) - - # For HTTPS requests over HTTP proxy - # we must notify proxy to tunnel connection - # so we send CONNECT command: - # CONNECT www.python.org:443 HTTP/1.1 - # Host: www.python.org - # - # next we must do TLS handshake and so on - # to do this we must wrap raw socket into secure one - # asyncio handles this perfectly - proxy_req.method = hdrs.METH_CONNECT - proxy_req.url = req.url - key = attr.evolve( - req.connection_key, proxy=None, proxy_auth=None, proxy_headers_hash=None - ) - conn = Connection(self, key, proto, self._loop) - proxy_resp = await proxy_req.send(conn) - try: - protocol = conn._protocol - assert protocol is not None - - # read_until_eof=True will ensure the connection isn't closed - # once the response is received and processed allowing - # START_TLS to work on the connection below. 
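
The CONNECT tunnelling described in the comments above can be sketched with a plain socket. This is only an illustration of the wire format, not the connector's actual implementation (which drives the handshake through `ClientRequest` and then upgrades the tunnel to TLS); the proxy address is a placeholder:

```python
import socket

PROXY_HOST, PROXY_PORT = "proxy.example.com", 8080  # hypothetical proxy
TARGET = "www.python.org:443"

with socket.create_connection((PROXY_HOST, PROXY_PORT), timeout=10) as sock:
    request = f"CONNECT {TARGET} HTTP/1.1\r\nHost: {TARGET}\r\n\r\n"
    sock.sendall(request.encode("ascii"))
    status_line = sock.recv(4096).split(b"\r\n", 1)[0]
    # A cooperating proxy replies with something like
    # b"HTTP/1.1 200 Connection established"; after that the socket is a raw
    # byte tunnel and the TLS handshake with the real target can start.
    print(status_line)
```
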
- protocol.set_response_params(read_until_eof=runtime_has_start_tls) - resp = await proxy_resp.start(conn) - except BaseException: - proxy_resp.close() - conn.close() - raise - else: - conn._protocol = None - conn._transport = None - try: - if resp.status != 200: - message = resp.reason - if message is None: - message = RESPONSES[resp.status][0] - raise ClientHttpProxyError( - proxy_resp.request_info, - resp.history, - status=resp.status, - message=message, - headers=resp.headers, - ) - if not runtime_has_start_tls: - rawsock = transport.get_extra_info("socket", default=None) - if rawsock is None: - raise RuntimeError( - "Transport does not expose socket instance" - ) - # Duplicate the socket, so now we can close proxy transport - rawsock = rawsock.dup() - except BaseException: - # It shouldn't be closed in `finally` because it's fed to - # `loop.start_tls()` and the docs say not to touch it after - # passing there. - transport.close() - raise - finally: - if not runtime_has_start_tls: - transport.close() - - if not runtime_has_start_tls: - # HTTP proxy with support for upgrade to HTTPS - sslcontext = self._get_ssl_context(req) - return await self._wrap_create_connection( - self._factory, - timeout=timeout, - ssl=sslcontext, - sock=rawsock, - server_hostname=req.host, - req=req, - ) - - return await self._start_tls_connection( - # Access the old transport for the last time before it's - # closed and forgotten forever: - transport, - req=req, - timeout=timeout, - ) - finally: - proxy_resp.close() - - return transport, proto - - -class UnixConnector(BaseConnector): - """Unix socket connector. - - path - Unix socket path. - keepalive_timeout - (optional) Keep-alive timeout. - force_close - Set to True to force close and do reconnect - after each request (and between redirects). - limit - The total number of simultaneous connections. - limit_per_host - Number of simultaneous connections to one host. - loop - Optional event loop. - """ - - def __init__( - self, - path: str, - force_close: bool = False, - keepalive_timeout: Union[object, float, None] = sentinel, - limit: int = 100, - limit_per_host: int = 0, - loop: Optional[asyncio.AbstractEventLoop] = None, - ) -> None: - super().__init__( - force_close=force_close, - keepalive_timeout=keepalive_timeout, - limit=limit, - limit_per_host=limit_per_host, - loop=loop, - ) - self._path = path - - @property - def path(self) -> str: - """Path to unix socket.""" - return self._path - - async def _create_connection( - self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" - ) -> ResponseHandler: - try: - async with ceil_timeout(timeout.sock_connect): - _, proto = await self._loop.create_unix_connection( - self._factory, self._path - ) - except OSError as exc: - if exc.errno is None and isinstance(exc, asyncio.TimeoutError): - raise - raise UnixClientConnectorError(self.path, req.connection_key, exc) from exc - - return cast(ResponseHandler, proto) - - -class NamedPipeConnector(BaseConnector): - """Named pipe connector. - - Only supported by the proactor event loop. - See also: https://docs.python.org/3.7/library/asyncio-eventloop.html - - path - Windows named pipe path. - keepalive_timeout - (optional) Keep-alive timeout. - force_close - Set to True to force close and do reconnect - after each request (and between redirects). - limit - The total number of simultaneous connections. - limit_per_host - Number of simultaneous connections to one host. - loop - Optional event loop. 
- """ - - def __init__( - self, - path: str, - force_close: bool = False, - keepalive_timeout: Union[object, float, None] = sentinel, - limit: int = 100, - limit_per_host: int = 0, - loop: Optional[asyncio.AbstractEventLoop] = None, - ) -> None: - super().__init__( - force_close=force_close, - keepalive_timeout=keepalive_timeout, - limit=limit, - limit_per_host=limit_per_host, - loop=loop, - ) - if not isinstance( - self._loop, asyncio.ProactorEventLoop # type: ignore[attr-defined] - ): - raise RuntimeError( - "Named Pipes only available in proactor " "loop under windows" - ) - self._path = path - - @property - def path(self) -> str: - """Path to the named pipe.""" - return self._path - - async def _create_connection( - self, req: "ClientRequest", traces: List["Trace"], timeout: "ClientTimeout" - ) -> ResponseHandler: - try: - async with ceil_timeout(timeout.sock_connect): - _, proto = await self._loop.create_pipe_connection( # type: ignore[attr-defined] # noqa: E501 - self._factory, self._path - ) - # the drain is required so that the connection_made is called - # and transport is set otherwise it is not set before the - # `assert conn.transport is not None` - # in client.py's _request method - await asyncio.sleep(0) - # other option is to manually set transport like - # `proto.transport = trans` - except OSError as exc: - if exc.errno is None and isinstance(exc, asyncio.TimeoutError): - raise - raise ClientConnectorError(req.connection_key, exc) from exc - - return cast(ResponseHandler, proto) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/__init__.py deleted file mode 100644 index 3d2ab09aacdf58f59fc13f43d66726e7a6e134ee..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/__init__.py +++ /dev/null @@ -1,840 +0,0 @@ -"""Beautiful Soup Elixir and Tonic - "The Screen-Scraper's Friend". - -http://www.crummy.com/software/BeautifulSoup/ - -Beautiful Soup uses a pluggable XML or HTML parser to parse a -(possibly invalid) document into a tree representation. Beautiful Soup -provides methods and Pythonic idioms that make it easy to navigate, -search, and modify the parse tree. - -Beautiful Soup works with Python 3.6 and up. It works better if lxml -and/or html5lib is installed. - -For more than you ever wanted to know about Beautiful Soup, see the -documentation: http://www.crummy.com/software/BeautifulSoup/bs4/doc/ -""" - -__author__ = "Leonard Richardson (leonardr@segfault.org)" -__version__ = "4.12.2" -__copyright__ = "Copyright (c) 2004-2023 Leonard Richardson" -# Use of this source code is governed by the MIT license. -__license__ = "MIT" - -__all__ = ['BeautifulSoup'] - -from collections import Counter -import os -import re -import sys -import traceback -import warnings - -# The very first thing we do is give a useful error if someone is -# running this code under Python 2. -if sys.version_info.major < 3: - raise ImportError('You are trying to use a Python 3-specific version of Beautiful Soup under Python 2. This will not work. 
The final version of Beautiful Soup to support Python 2 was 4.9.3.') - -from .builder import ( - builder_registry, - ParserRejectedMarkup, - XMLParsedAsHTMLWarning, - HTMLParserTreeBuilder -) -from .dammit import UnicodeDammit -from .element import ( - CData, - Comment, - CSS, - DEFAULT_OUTPUT_ENCODING, - Declaration, - Doctype, - NavigableString, - PageElement, - ProcessingInstruction, - PYTHON_SPECIFIC_ENCODINGS, - ResultSet, - Script, - Stylesheet, - SoupStrainer, - Tag, - TemplateString, - ) - -# Define some custom warnings. -class GuessedAtParserWarning(UserWarning): - """The warning issued when BeautifulSoup has to guess what parser to - use -- probably because no parser was specified in the constructor. - """ - -class MarkupResemblesLocatorWarning(UserWarning): - """The warning issued when BeautifulSoup is given 'markup' that - actually looks like a resource locator -- a URL or a path to a file - on disk. - """ - - -class BeautifulSoup(Tag): - """A data structure representing a parsed HTML or XML document. - - Most of the methods you'll call on a BeautifulSoup object are inherited from - PageElement or Tag. - - Internally, this class defines the basic interface called by the - tree builders when converting an HTML/XML document into a data - structure. The interface abstracts away the differences between - parsers. To write a new tree builder, you'll need to understand - these methods as a whole. - - These methods will be called by the BeautifulSoup constructor: - * reset() - * feed(markup) - - The tree builder may call these methods from its feed() implementation: - * handle_starttag(name, attrs) # See note about return value - * handle_endtag(name) - * handle_data(data) # Appends to the current data node - * endData(containerClass) # Ends the current data node - - No matter how complicated the underlying parser is, you should be - able to build a tree using 'start tag' events, 'end tag' events, - 'data' events, and "done with data" events. - - If you encounter an empty-element tag (aka a self-closing tag, - like HTML's
            tag), call handle_starttag and then - handle_endtag. - """ - - # Since BeautifulSoup subclasses Tag, it's possible to treat it as - # a Tag with a .name. This name makes it clear the BeautifulSoup - # object isn't a real markup tag. - ROOT_TAG_NAME = '[document]' - - # If the end-user gives no indication which tree builder they - # want, look for one with these features. - DEFAULT_BUILDER_FEATURES = ['html', 'fast'] - - # A string containing all ASCII whitespace characters, used in - # endData() to detect data chunks that seem 'empty'. - ASCII_SPACES = '\x20\x0a\x09\x0c\x0d' - - NO_PARSER_SPECIFIED_WARNING = "No parser was explicitly specified, so I'm using the best available %(markup_type)s parser for this system (\"%(parser)s\"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.\n\nThe code that caused this warning is on line %(line_number)s of the file %(filename)s. To get rid of this warning, pass the additional argument 'features=\"%(parser)s\"' to the BeautifulSoup constructor.\n" - - def __init__(self, markup="", features=None, builder=None, - parse_only=None, from_encoding=None, exclude_encodings=None, - element_classes=None, **kwargs): - """Constructor. - - :param markup: A string or a file-like object representing - markup to be parsed. - - :param features: Desirable features of the parser to be - used. This may be the name of a specific parser ("lxml", - "lxml-xml", "html.parser", or "html5lib") or it may be the - type of markup to be used ("html", "html5", "xml"). It's - recommended that you name a specific parser, so that - Beautiful Soup gives you the same results across platforms - and virtual environments. - - :param builder: A TreeBuilder subclass to instantiate (or - instance to use) instead of looking one up based on - `features`. You only need to use this if you've implemented a - custom TreeBuilder. - - :param parse_only: A SoupStrainer. Only parts of the document - matching the SoupStrainer will be considered. This is useful - when parsing part of a document that would otherwise be too - large to fit into memory. - - :param from_encoding: A string indicating the encoding of the - document to be parsed. Pass this in if Beautiful Soup is - guessing wrongly about the document's encoding. - - :param exclude_encodings: A list of strings indicating - encodings known to be wrong. Pass this in if you don't know - the document's encoding but you know Beautiful Soup's guess is - wrong. - - :param element_classes: A dictionary mapping BeautifulSoup - classes like Tag and NavigableString, to other classes you'd - like to be instantiated instead as the parse tree is - built. This is useful for subclassing Tag or NavigableString - to modify default behavior. - - :param kwargs: For backwards compatibility purposes, the - constructor accepts certain keyword arguments used in - Beautiful Soup 3. None of these arguments do anything in - Beautiful Soup 4; they will result in a warning and then be - ignored. - - Apart from this, any keyword arguments passed into the - BeautifulSoup constructor are propagated to the TreeBuilder - constructor. This makes it possible to configure a - TreeBuilder by passing in arguments, not just by saying which - one to use. - """ - if 'convertEntities' in kwargs: - del kwargs['convertEntities'] - warnings.warn( - "BS4 does not respect the convertEntities argument to the " - "BeautifulSoup constructor. 
Entities are always converted " - "to Unicode characters.") - - if 'markupMassage' in kwargs: - del kwargs['markupMassage'] - warnings.warn( - "BS4 does not respect the markupMassage argument to the " - "BeautifulSoup constructor. The tree builder is responsible " - "for any necessary markup massage.") - - if 'smartQuotesTo' in kwargs: - del kwargs['smartQuotesTo'] - warnings.warn( - "BS4 does not respect the smartQuotesTo argument to the " - "BeautifulSoup constructor. Smart quotes are always converted " - "to Unicode characters.") - - if 'selfClosingTags' in kwargs: - del kwargs['selfClosingTags'] - warnings.warn( - "BS4 does not respect the selfClosingTags argument to the " - "BeautifulSoup constructor. The tree builder is responsible " - "for understanding self-closing tags.") - - if 'isHTML' in kwargs: - del kwargs['isHTML'] - warnings.warn( - "BS4 does not respect the isHTML argument to the " - "BeautifulSoup constructor. Suggest you use " - "features='lxml' for HTML and features='lxml-xml' for " - "XML.") - - def deprecated_argument(old_name, new_name): - if old_name in kwargs: - warnings.warn( - 'The "%s" argument to the BeautifulSoup constructor ' - 'has been renamed to "%s."' % (old_name, new_name), - DeprecationWarning, stacklevel=3 - ) - return kwargs.pop(old_name) - return None - - parse_only = parse_only or deprecated_argument( - "parseOnlyThese", "parse_only") - - from_encoding = from_encoding or deprecated_argument( - "fromEncoding", "from_encoding") - - if from_encoding and isinstance(markup, str): - warnings.warn("You provided Unicode markup but also provided a value for from_encoding. Your from_encoding will be ignored.") - from_encoding = None - - self.element_classes = element_classes or dict() - - # We need this information to track whether or not the builder - # was specified well enough that we can omit the 'you need to - # specify a parser' warning. - original_builder = builder - original_features = features - - if isinstance(builder, type): - # A builder class was passed in; it needs to be instantiated. - builder_class = builder - builder = None - elif builder is None: - if isinstance(features, str): - features = [features] - if features is None or len(features) == 0: - features = self.DEFAULT_BUILDER_FEATURES - builder_class = builder_registry.lookup(*features) - if builder_class is None: - raise FeatureNotFound( - "Couldn't find a tree builder with the features you " - "requested: %s. Do you need to install a parser library?" - % ",".join(features)) - - # At this point either we have a TreeBuilder instance in - # builder, or we have a builder_class that we can instantiate - # with the remaining **kwargs. - if builder is None: - builder = builder_class(**kwargs) - if not original_builder and not ( - original_features == builder.NAME or - original_features in builder.ALTERNATE_NAMES - ) and markup: - # The user did not tell us which TreeBuilder to use, - # and we had to guess. Issue a warning. - if builder.is_xml: - markup_type = "XML" - else: - markup_type = "HTML" - - # This code adapted from warnings.py so that we get the same line - # of code as our warnings.warn() call gets, even if the answer is wrong - # (as it may be in a multithreading situation). 
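
This warning path only runs when the caller lets Beautiful Soup guess a parser. A short usage sketch of the fix the warning itself recommends, naming the parser explicitly:

```python
from bs4 import BeautifulSoup

# An explicit parser means no guessing, no GuessedAtParserWarning, and the
# same behaviour across environments.
soup = BeautifulSoup("<p>Hello</p>", features="html.parser")
print(soup.p.text)  # Hello
```
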
- caller = None - try: - caller = sys._getframe(1) - except ValueError: - pass - if caller: - globals = caller.f_globals - line_number = caller.f_lineno - else: - globals = sys.__dict__ - line_number= 1 - filename = globals.get('__file__') - if filename: - fnl = filename.lower() - if fnl.endswith((".pyc", ".pyo")): - filename = filename[:-1] - if filename: - # If there is no filename at all, the user is most likely in a REPL, - # and the warning is not necessary. - values = dict( - filename=filename, - line_number=line_number, - parser=builder.NAME, - markup_type=markup_type - ) - warnings.warn( - self.NO_PARSER_SPECIFIED_WARNING % values, - GuessedAtParserWarning, stacklevel=2 - ) - else: - if kwargs: - warnings.warn("Keyword arguments to the BeautifulSoup constructor will be ignored. These would normally be passed into the TreeBuilder constructor, but a TreeBuilder instance was passed in as `builder`.") - - self.builder = builder - self.is_xml = builder.is_xml - self.known_xml = self.is_xml - self._namespaces = dict() - self.parse_only = parse_only - - if hasattr(markup, 'read'): # It's a file-type object. - markup = markup.read() - elif len(markup) <= 256 and ( - (isinstance(markup, bytes) and not b'<' in markup) - or (isinstance(markup, str) and not '<' in markup) - ): - # Issue warnings for a couple beginner problems - # involving passing non-markup to Beautiful Soup. - # Beautiful Soup will still parse the input as markup, - # since that is sometimes the intended behavior. - if not self._markup_is_url(markup): - self._markup_resembles_filename(markup) - - rejections = [] - success = False - for (self.markup, self.original_encoding, self.declared_html_encoding, - self.contains_replacement_characters) in ( - self.builder.prepare_markup( - markup, from_encoding, exclude_encodings=exclude_encodings)): - self.reset() - self.builder.initialize_soup(self) - try: - self._feed() - success = True - break - except ParserRejectedMarkup as e: - rejections.append(e) - pass - - if not success: - other_exceptions = [str(e) for e in rejections] - raise ParserRejectedMarkup( - "The markup you provided was rejected by the parser. Trying a different parser or a different encoding may help.\n\nOriginal exception(s) from parser:\n " + "\n ".join(other_exceptions) - ) - - # Clear out the markup and remove the builder's circular - # reference to this object. - self.markup = None - self.builder.soup = None - - def _clone(self): - """Create a new BeautifulSoup object with the same TreeBuilder, - but not associated with any markup. - - This is the first step of the deepcopy process. - """ - clone = type(self)("", None, self.builder) - - # Keep track of the encoding of the original document, - # since we won't be parsing it again. - clone.original_encoding = self.original_encoding - return clone - - def __getstate__(self): - # Frequently a tree builder can't be pickled. - d = dict(self.__dict__) - if 'builder' in d and d['builder'] is not None and not self.builder.picklable: - d['builder'] = type(self.builder) - # Store the contents as a Unicode string. - d['contents'] = [] - d['markup'] = self.decode() - - # If _most_recent_element is present, it's a Tag object left - # over from initial parse. It might not be picklable and we - # don't need it. - if '_most_recent_element' in d: - del d['_most_recent_element'] - return d - - def __setstate__(self, state): - # If necessary, restore the TreeBuilder by looking it up. 
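
The `__getstate__`/`__setstate__` pair exists so a parsed document survives pickling even when its tree builder cannot be pickled: the markup is stored as a decoded string and re-parsed on restore. A quick round-trip sketch, assuming the stock `html.parser` builder:

```python
import pickle
from bs4 import BeautifulSoup

soup = BeautifulSoup("<b>hi</b>", "html.parser")
restored = pickle.loads(pickle.dumps(soup))
print(restored.b.text)  # hi
```
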
- self.__dict__ = state - if isinstance(self.builder, type): - self.builder = self.builder() - elif not self.builder: - # We don't know which builder was used to build this - # parse tree, so use a default we know is always available. - self.builder = HTMLParserTreeBuilder() - self.builder.soup = self - self.reset() - self._feed() - return state - - - @classmethod - def _decode_markup(cls, markup): - """Ensure `markup` is bytes so it's safe to send into warnings.warn. - - TODO: warnings.warn had this problem back in 2010 but it might not - anymore. - """ - if isinstance(markup, bytes): - decoded = markup.decode('utf-8', 'replace') - else: - decoded = markup - return decoded - - @classmethod - def _markup_is_url(cls, markup): - """Error-handling method to raise a warning if incoming markup looks - like a URL. - - :param markup: A string. - :return: Whether or not the markup resembles a URL - closely enough to justify a warning. - """ - if isinstance(markup, bytes): - space = b' ' - cant_start_with = (b"http:", b"https:") - elif isinstance(markup, str): - space = ' ' - cant_start_with = ("http:", "https:") - else: - return False - - if any(markup.startswith(prefix) for prefix in cant_start_with): - if not space in markup: - warnings.warn( - 'The input looks more like a URL than markup. You may want to use' - ' an HTTP client like requests to get the document behind' - ' the URL, and feed that document to Beautiful Soup.', - MarkupResemblesLocatorWarning, - stacklevel=3 - ) - return True - return False - - @classmethod - def _markup_resembles_filename(cls, markup): - """Error-handling method to raise a warning if incoming markup - resembles a filename. - - :param markup: A bytestring or string. - :return: Whether or not the markup resembles a filename - closely enough to justify a warning. - """ - path_characters = '/\\' - extensions = ['.html', '.htm', '.xml', '.xhtml', '.txt'] - if isinstance(markup, bytes): - path_characters = path_characters.encode("utf8") - extensions = [x.encode('utf8') for x in extensions] - filelike = False - if any(x in markup for x in path_characters): - filelike = True - else: - lower = markup.lower() - if any(lower.endswith(ext) for ext in extensions): - filelike = True - if filelike: - warnings.warn( - 'The input looks more like a filename than markup. You may' - ' want to open this file and pass the filehandle into' - ' Beautiful Soup.', - MarkupResemblesLocatorWarning, stacklevel=3 - ) - return True - return False - - def _feed(self): - """Internal method that parses previously set markup, creating a large - number of Tag and NavigableString objects. - """ - # Convert the document to Unicode. - self.builder.reset() - - self.builder.feed(self.markup) - # Close out any unfinished strings and close all the open tags. - self.endData() - while self.currentTag.name != self.ROOT_TAG_NAME: - self.popTag() - - def reset(self): - """Reset this object to a state as though it had never parsed any - markup. - """ - Tag.__init__(self, self, self.builder, self.ROOT_TAG_NAME) - self.hidden = 1 - self.builder.reset() - self.current_data = [] - self.currentTag = None - self.tagStack = [] - self.open_tag_counter = Counter() - self.preserve_whitespace_tag_stack = [] - self.string_container_stack = [] - self._most_recent_element = None - self.pushTag(self) - - def new_tag(self, name, namespace=None, nsprefix=None, attrs={}, - sourceline=None, sourcepos=None, **kwattrs): - """Create a new Tag associated with this BeautifulSoup object. - - :param name: The name of the new Tag. 
- :param namespace: The URI of the new Tag's XML namespace, if any. - :param prefix: The prefix for the new Tag's XML namespace, if any. - :param attrs: A dictionary of this Tag's attribute values; can - be used instead of `kwattrs` for attributes like 'class' - that are reserved words in Python. - :param sourceline: The line number where this tag was - (purportedly) found in its source document. - :param sourcepos: The character position within `sourceline` where this - tag was (purportedly) found. - :param kwattrs: Keyword arguments for the new Tag's attribute values. - - """ - kwattrs.update(attrs) - return self.element_classes.get(Tag, Tag)( - None, self.builder, name, namespace, nsprefix, kwattrs, - sourceline=sourceline, sourcepos=sourcepos - ) - - def string_container(self, base_class=None): - container = base_class or NavigableString - - # There may be a general override of NavigableString. - container = self.element_classes.get( - container, container - ) - - # On top of that, we may be inside a tag that needs a special - # container class. - if self.string_container_stack and container is NavigableString: - container = self.builder.string_containers.get( - self.string_container_stack[-1].name, container - ) - return container - - def new_string(self, s, subclass=None): - """Create a new NavigableString associated with this BeautifulSoup - object. - """ - container = self.string_container(subclass) - return container(s) - - def insert_before(self, *args): - """This method is part of the PageElement API, but `BeautifulSoup` doesn't implement - it because there is nothing before or after it in the parse tree. - """ - raise NotImplementedError("BeautifulSoup objects don't support insert_before().") - - def insert_after(self, *args): - """This method is part of the PageElement API, but `BeautifulSoup` doesn't implement - it because there is nothing before or after it in the parse tree. - """ - raise NotImplementedError("BeautifulSoup objects don't support insert_after().") - - def popTag(self): - """Internal method called by _popToTag when a tag is closed.""" - tag = self.tagStack.pop() - if tag.name in self.open_tag_counter: - self.open_tag_counter[tag.name] -= 1 - if self.preserve_whitespace_tag_stack and tag == self.preserve_whitespace_tag_stack[-1]: - self.preserve_whitespace_tag_stack.pop() - if self.string_container_stack and tag == self.string_container_stack[-1]: - self.string_container_stack.pop() - #print("Pop", tag.name) - if self.tagStack: - self.currentTag = self.tagStack[-1] - return self.currentTag - - def pushTag(self, tag): - """Internal method called by handle_starttag when a tag is opened.""" - #print("Push", tag.name) - if self.currentTag is not None: - self.currentTag.contents.append(tag) - self.tagStack.append(tag) - self.currentTag = self.tagStack[-1] - if tag.name != self.ROOT_TAG_NAME: - self.open_tag_counter[tag.name] += 1 - if tag.name in self.builder.preserve_whitespace_tags: - self.preserve_whitespace_tag_stack.append(tag) - if tag.name in self.builder.string_containers: - self.string_container_stack.append(tag) - - def endData(self, containerClass=None): - """Method called by the TreeBuilder when the end of a data segment - occurs. - """ - if self.current_data: - current_data = ''.join(self.current_data) - # If whitespace is not preserved, and this string contains - # nothing but ASCII spaces, replace it with a single space - # or newline. 
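
A small illustration of the whitespace collapsing performed here, assuming the default `html.parser` builder: a data chunk made only of ASCII spaces is reduced to a single space (or a newline if one was present) before being added to the tree.

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>a</p>   <p>b</p>", "html.parser")
# The run of spaces between the paragraphs is typically kept as one ' ' node.
print([repr(s) for s in soup.strings])
```
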
- if not self.preserve_whitespace_tag_stack: - strippable = True - for i in current_data: - if i not in self.ASCII_SPACES: - strippable = False - break - if strippable: - if '\n' in current_data: - current_data = '\n' - else: - current_data = ' ' - - # Reset the data collector. - self.current_data = [] - - # Should we add this string to the tree at all? - if self.parse_only and len(self.tagStack) <= 1 and \ - (not self.parse_only.text or \ - not self.parse_only.search(current_data)): - return - - containerClass = self.string_container(containerClass) - o = containerClass(current_data) - self.object_was_parsed(o) - - def object_was_parsed(self, o, parent=None, most_recent_element=None): - """Method called by the TreeBuilder to integrate an object into the parse tree.""" - if parent is None: - parent = self.currentTag - if most_recent_element is not None: - previous_element = most_recent_element - else: - previous_element = self._most_recent_element - - next_element = previous_sibling = next_sibling = None - if isinstance(o, Tag): - next_element = o.next_element - next_sibling = o.next_sibling - previous_sibling = o.previous_sibling - if previous_element is None: - previous_element = o.previous_element - - fix = parent.next_element is not None - - o.setup(parent, previous_element, next_element, previous_sibling, next_sibling) - - self._most_recent_element = o - parent.contents.append(o) - - # Check if we are inserting into an already parsed node. - if fix: - self._linkage_fixer(parent) - - def _linkage_fixer(self, el): - """Make sure linkage of this fragment is sound.""" - - first = el.contents[0] - child = el.contents[-1] - descendant = child - - if child is first and el.parent is not None: - # Parent should be linked to first child - el.next_element = child - # We are no longer linked to whatever this element is - prev_el = child.previous_element - if prev_el is not None and prev_el is not el: - prev_el.next_element = None - # First child should be linked to the parent, and no previous siblings. - child.previous_element = el - child.previous_sibling = None - - # We have no sibling as we've been appended as the last. - child.next_sibling = None - - # This index is a tag, dig deeper for a "last descendant" - if isinstance(child, Tag) and child.contents: - descendant = child._last_descendant(False) - - # As the final step, link last descendant. It should be linked - # to the parent's next sibling (if found), else walk up the chain - # and find a parent with a sibling. It should have no next sibling. - descendant.next_element = None - descendant.next_sibling = None - target = el - while True: - if target is None: - break - elif target.next_sibling is not None: - descendant.next_element = target.next_sibling - target.next_sibling.previous_element = child - break - target = target.parent - - def _popToTag(self, name, nsprefix=None, inclusivePop=True): - """Pops the tag stack up to and including the most recent - instance of the given tag. - - If there are no open tags with the given name, nothing will be - popped. - - :param name: Pop up to the most recent tag with this name. - :param nsprefix: The namespace prefix that goes with `name`. - :param inclusivePop: It this is false, pops the tag stack up - to but *not* including the most recent instqance of the - given tag. - - """ - #print("Popping to %s" % name) - if name == self.ROOT_TAG_NAME: - # The BeautifulSoup object itself can never be popped. 
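
This pop-to-tag behaviour is what implicitly closes tags that are still open when an ancestor's end tag arrives. A tiny sketch, again assuming `html.parser`:

```python
from bs4 import BeautifulSoup

# Handling </p> pops the stack up to <p>, implicitly closing the open <b>.
soup = BeautifulSoup("<p><b>bold</p>after", "html.parser")
print(soup.decode())  # typically <p><b>bold</b></p>after
```
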
- return - - most_recently_popped = None - - stack_size = len(self.tagStack) - for i in range(stack_size - 1, 0, -1): - if not self.open_tag_counter.get(name): - break - t = self.tagStack[i] - if (name == t.name and nsprefix == t.prefix): - if inclusivePop: - most_recently_popped = self.popTag() - break - most_recently_popped = self.popTag() - - return most_recently_popped - - def handle_starttag(self, name, namespace, nsprefix, attrs, sourceline=None, - sourcepos=None, namespaces=None): - """Called by the tree builder when a new tag is encountered. - - :param name: Name of the tag. - :param nsprefix: Namespace prefix for the tag. - :param attrs: A dictionary of attribute values. - :param sourceline: The line number where this tag was found in its - source document. - :param sourcepos: The character position within `sourceline` where this - tag was found. - :param namespaces: A dictionary of all namespace prefix mappings - currently in scope in the document. - - If this method returns None, the tag was rejected by an active - SoupStrainer. You should proceed as if the tag had not occurred - in the document. For instance, if this was a self-closing tag, - don't call handle_endtag. - """ - # print("Start tag %s: %s" % (name, attrs)) - self.endData() - - if (self.parse_only and len(self.tagStack) <= 1 - and (self.parse_only.text - or not self.parse_only.search_tag(name, attrs))): - return None - - tag = self.element_classes.get(Tag, Tag)( - self, self.builder, name, namespace, nsprefix, attrs, - self.currentTag, self._most_recent_element, - sourceline=sourceline, sourcepos=sourcepos, - namespaces=namespaces - ) - if tag is None: - return tag - if self._most_recent_element is not None: - self._most_recent_element.next_element = tag - self._most_recent_element = tag - self.pushTag(tag) - return tag - - def handle_endtag(self, name, nsprefix=None): - """Called by the tree builder when an ending tag is encountered. - - :param name: Name of the tag. - :param nsprefix: Namespace prefix for the tag. - """ - #print("End tag: " + name) - self.endData() - self._popToTag(name, nsprefix) - - def handle_data(self, data): - """Called by the tree builder when a chunk of textual data is encountered.""" - self.current_data.append(data) - - def decode(self, pretty_print=False, - eventual_encoding=DEFAULT_OUTPUT_ENCODING, - formatter="minimal", iterator=None): - """Returns a string or Unicode representation of the parse tree - as an HTML or XML document. - - :param pretty_print: If this is True, indentation will be used to - make the document more readable. - :param eventual_encoding: The encoding of the final document. - If this is None, the document will be a Unicode string. - """ - if self.is_xml: - # Print the XML declaration - encoding_part = '' - if eventual_encoding in PYTHON_SPECIFIC_ENCODINGS: - # This is a special Python encoding; it can't actually - # go into an XML document because it means nothing - # outside of Python. - eventual_encoding = None - if eventual_encoding != None: - encoding_part = ' encoding="%s"' % eventual_encoding - prefix = '\n' % encoding_part - else: - prefix = '' - if not pretty_print: - indent_level = None - else: - indent_level = 0 - return prefix + super(BeautifulSoup, self).decode( - indent_level, eventual_encoding, formatter, iterator) - -# Aliases to make it easier to get started quickly, e.g. 
'from bs4 import _soup' -_s = BeautifulSoup -_soup = BeautifulSoup - -class BeautifulStoneSoup(BeautifulSoup): - """Deprecated interface to an XML parser.""" - - def __init__(self, *args, **kwargs): - kwargs['features'] = 'xml' - warnings.warn( - 'The BeautifulStoneSoup class is deprecated. Instead of using ' - 'it, pass features="xml" into the BeautifulSoup constructor.', - DeprecationWarning, stacklevel=2 - ) - super(BeautifulStoneSoup, self).__init__(*args, **kwargs) - - -class StopParsing(Exception): - """Exception raised by a TreeBuilder if it's unable to continue parsing.""" - pass - -class FeatureNotFound(ValueError): - """Exception raised by the BeautifulSoup constructor if no parser with the - requested features is found. - """ - pass - - -#If this file is run as a script, act as an HTML pretty-printer. -if __name__ == '__main__': - import sys - soup = BeautifulSoup(sys.stdin) - print((soup.prettify())) diff --git a/spaces/johnslegers/epic-diffusion/style.css b/spaces/johnslegers/epic-diffusion/style.css deleted file mode 100644 index 4ee5f75708f3b3a41180b51e0850f0ff90c80af8..0000000000000000000000000000000000000000 --- a/spaces/johnslegers/epic-diffusion/style.css +++ /dev/null @@ -1,31 +0,0 @@ -.epic-diffusion-div div { - display:inline-flex; - align-items:center; - gap:.8rem; - font-size:1.75rem -} -.epic-diffusion-div div h1 { - font-weight:900; - margin-bottom:7px -} -.epic-diffusion-div p { - margin-bottom:10px; - font-size:94% -} - -.social img { - display:inline; -} - -a{ - text-decoration:underline -} - -.tabs{ - margin-top:0; - margin-bottom:0 -} - -#gallery{ - min-height:20rem -} diff --git a/spaces/junjunn/rvc-models/README.md b/spaces/junjunn/rvc-models/README.md deleted file mode 100644 index f077cd85340c26ebfcb0857816d0f1f511408242..0000000000000000000000000000000000000000 --- a/spaces/junjunn/rvc-models/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Rvc Models -emoji: 🎤 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ardha27/rvc-models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jvcanavarro/emotion-recognition/src/dnn.py b/spaces/jvcanavarro/emotion-recognition/src/dnn.py deleted file mode 100644 index 7666aa73a1958ed2a40d2b634c0ae2f495100cc4..0000000000000000000000000000000000000000 --- a/spaces/jvcanavarro/emotion-recognition/src/dnn.py +++ /dev/null @@ -1,109 +0,0 @@ -import sys - -import numpy as np -from keras import Sequential -from keras.layers import ( - LSTM as KERAS_LSTM, - Dense, - Dropout, - Conv2D, - Flatten, - BatchNormalization, - Activation, - MaxPooling2D, -) - -from . 
import Model - - -class DNN(Model): - def __init__(self, input_shape, num_classes, **params): - super(DNN, self).__init__(**params) - self.input_shape = input_shape - self.model = Sequential() - self.make_default_model() - self.model.add(Dense(num_classes, activation="softmax")) - self.model.compile( - loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"] - ) - print(self.model.summary(), file=sys.stderr) - self.save_path = self.save_path or f"{self.name}_best_model.h5" - - def load_model(self, to_load): - try: - self.model.load_weights(to_load) - except Exception: - sys.stderr.write("Invalid saved file provided") - sys.exit(-1) - - def save_model(self): - self.model.save_weights(self.save_path) - - def train(self, x_train, y_train, x_val=None, y_val=None, n_epochs=50): - best_acc = 0 - if x_val is None or y_val is None: - x_val, y_val = x_train, y_train - for _ in range(n_epochs): - p = np.random.permutation(len(x_train)) - x_train = x_train[p] - y_train = y_train[p] - self.model.fit(x_train, y_train, batch_size=32, epochs=1) - _, acc = self.model.evaluate(x_val, y_val) - if acc > best_acc: - best_acc = acc - self.trained = True - - def predict_one(self, sample): - if not self.trained: - sys.stderr.write("Model should be trained or loaded before doing predict\n") - sys.exit(-1) - return np.argmax(self.model.predict(np.array([sample]))) - - def make_default_model(self) -> None: - - raise NotImplementedError() - - -class CNN(DNN): - def __init__(self, **params): - params["name"] = "CNN" - super(CNN, self).__init__(**params) - - def make_default_model(self): - self.model.add( - Conv2D( - 8, (13, 13), input_shape=(self.input_shape[0], self.input_shape[1], 1) - ) - ) - self.model.add(BatchNormalization(axis=-1)) - self.model.add(Activation("relu")) - self.model.add(Conv2D(8, (13, 13))) - self.model.add(BatchNormalization(axis=-1)) - self.model.add(Activation("relu")) - self.model.add(MaxPooling2D(pool_size=(2, 1))) - self.model.add(Conv2D(8, (13, 13))) - self.model.add(BatchNormalization(axis=-1)) - self.model.add(Activation("relu")) - self.model.add(Conv2D(8, (2, 2))) - self.model.add(BatchNormalization(axis=-1)) - self.model.add(Activation("relu")) - self.model.add(MaxPooling2D(pool_size=(2, 1))) - self.model.add(Flatten()) - self.model.add(Dense(64)) - self.model.add(BatchNormalization()) - self.model.add(Activation("relu")) - self.model.add(Dropout(0.2)) - - -class LSTM(DNN): - def __init__(self, **params): - params["name"] = "LSTM" - super(LSTM, self).__init__(**params) - - def make_default_model(self): - self.model.add( - KERAS_LSTM(128, input_shape=(self.input_shape[0], self.input_shape[1])) - ) - self.model.add(Dropout(0.5)) - self.model.add(Dense(32, activation="relu")) - self.model.add(Dense(16, activation="tanh")) diff --git a/spaces/katanaml-org/sparrow-ui/tools/data_review.py b/spaces/katanaml-org/sparrow-ui/tools/data_review.py deleted file mode 100644 index 5bf822c0ae2c31451a2e7c27f6fb5774f5937a39..0000000000000000000000000000000000000000 --- a/spaces/katanaml-org/sparrow-ui/tools/data_review.py +++ /dev/null @@ -1,29 +0,0 @@ -import os -from natsort import natsorted -import json - - -def annotation_review(): - # get list of files in json directory - processed_file_names = get_processed_file_names('../docs/json/') - for file_name in processed_file_names: - # open json file - with open('../docs/json/' + file_name + '.json') as json_file: - json_file_data = json.load(json_file) - version = json_file_data['meta']['version'] - if version == "v0.1": - print(file_name + " 
is v0.1") - -def get_processed_file_names(dir_name): - # get ordered list of files without file extension, excluding hidden files, with JSON extension only - file_names = [os.path.splitext(f)[0] for f in os.listdir(dir_name) if - os.path.isfile(os.path.join(dir_name, f)) and not f.startswith('.') and f.endswith('.json')] - file_names = natsorted(file_names) - return file_names - -def main(): - annotation_review() - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/kaveh/radiology-image-retrieval/app.py b/spaces/kaveh/radiology-image-retrieval/app.py deleted file mode 100644 index cadb416a239e3e0fa4afab9dc03def4136fc06f9..0000000000000000000000000000000000000000 --- a/spaces/kaveh/radiology-image-retrieval/app.py +++ /dev/null @@ -1,116 +0,0 @@ -import gradio as gr -import torch -import pickle -import numpy as np -import pandas as pd -from transformers import CLIPProcessor, CLIPModel -from transformers import VisionTextDualEncoderModel, VisionTextDualEncoderProcessor -from sklearn.metrics.pairwise import cosine_similarity -import csv -from PIL import Image - -model_path_rclip = "kaveh/rclip" -embeddings_file_rclip = './image_embeddings_rclip.pkl' - -model_path_pubmedclip = "flaviagiammarino/pubmed-clip-vit-base-patch32" -embeddings_file_pubmedclip = './image_embeddings_pubmedclip.pkl' - -csv_path = "./captions.txt" - -def load_image_ids(csv_file): - ids = [] - captions = [] - with open(csv_file, 'r') as f: - reader = csv.reader(f, delimiter='\t') - for row in reader: - ids.append(row[0]) - captions.append(row[1]) - return ids, captions - -def load_embeddings(embeddings_file): - with open(embeddings_file, 'rb') as f: - image_embeddings = pickle.load(f) - return image_embeddings - - -def find_similar_images(query_embedding, image_embeddings, k=2): - similarities = cosine_similarity(query_embedding.reshape(1, -1), image_embeddings) - closest_indices = np.argsort(similarities[0])[::-1][:k] - scores = sorted(similarities[0])[::-1][:k] - return closest_indices, scores - - -def main(query, model_id="RCLIP", k=2): - if model_id=="RCLIP": - # Load RCLIP model - model = VisionTextDualEncoderModel.from_pretrained(model_path_rclip) - processor = VisionTextDualEncoderProcessor.from_pretrained(model_path_rclip) - # Load image embeddings - image_embeddings = load_embeddings(embeddings_file_rclip) - elif model_id=="PubMedCLIP": - model = CLIPModel.from_pretrained(model_path_pubmedclip) - processor = CLIPProcessor.from_pretrained(model_path_pubmedclip) - # Load image embeddings - image_embeddings = load_embeddings(embeddings_file_pubmedclip) - - # Embed the query - inputs = processor(text=query, images=None, return_tensors="pt", padding=True) - with torch.no_grad(): - query_embedding = model.get_text_features(**inputs)[0].numpy() - - # Get image names - ids, captions = load_image_ids(csv_path) - - # Find similar images - similar_image_indices, scores = find_similar_images(query_embedding, image_embeddings, k=int(k)) - - # Return the results - similar_image_names = [f"./images/{ids[index]}.jpg" for index in similar_image_indices] - similar_image_captions = [captions[index] for index in similar_image_indices] - similar_images = [Image.open(i) for i in similar_image_names] - - return similar_images, pd.DataFrame([[t+1 for t in range(k)], similar_image_names, similar_image_captions, scores], index=["#", "path", "caption", "score"]).T - - -# Define the Gradio interface -examples = [ - ["Chest X-ray photos", "RCLIP", 5], - ["Chest X-ray photos", "PubMedCLIP", 5], - 
["Orthopantogram (OPG)", "RCLIP",5], - ["Brain MRI", "RCLIP",5], - ["Ultrasound", "RCLIP",5], -] - -title="RCLIP Image Retrieval" -description = "CLIP model fine-tuned on the ROCO dataset" - -with gr.Blocks(title=title) as demo: - with gr.Row(): - with gr.Column(scale=5): - gr.Markdown("# "+title) - gr.Markdown(description) - gr.HTML(value="\"teesside", show_label=False,scale=1) - #Image.open("./data/teesside university logo.png"), height=70, show_label=False, container=False) - with gr.Row(variant="compact"): - query = gr.Textbox(value="Chest X-Ray Photos", label="Enter your query", show_label=False, placeholder= "Enter your query" , scale=5) - btn = gr.Button("Search query", variant="primary", scale=1) - - with gr.Row(variant="compact"): - model_id = gr.Dropdown(["RCLIP", "PubMedCLIP"], value="RCLIP", label="Model", type="value", scale=1) - n_s = gr.Slider(2, 10, label='Number of Top Results', value=5, step=1.0, show_label=True, scale=1) - - - with gr.Column(variant="compact"): - gr.Markdown("## Results") - gallery = gr.Gallery(label="found images", show_label=True, elem_id="gallery", columns=[2], rows=[4], object_fit="contain", height="400px", preview=True) - gr.Markdown("Information of the found images") - df = gr.DataFrame() - btn.click(main, [query, model_id, n_s], [gallery, df]) - - with gr.Column(variant="compact"): - gr.Markdown("## Examples") - gr.Examples(examples, [query, model_id, n_s]) - - -demo.launch(debug='True') - diff --git a/spaces/kazuk/youtube-whisper-15/app.py b/spaces/kazuk/youtube-whisper-15/app.py deleted file mode 100644 index 4a61dc561a016c53ad93a3c556b0ef7bafa964eb..0000000000000000000000000000000000000000 --- a/spaces/kazuk/youtube-whisper-15/app.py +++ /dev/null @@ -1,66 +0,0 @@ -import gradio as gr -import whisper -from pytube import YouTube - -def get_audio(url): - yt = YouTube(url) - return yt.streams.filter(only_audio=True)[0].download(filename="tmp.mp4") - -def get_transcript(url, model_size, lang, format): - - model = whisper.load_model(model_size) - - if lang == "None": - lang = None - - result = model.transcribe(get_audio(url), fp16=False, language=lang) - - if format == "None": - return result["text"] - elif format == ".srt": - return format_to_srt(result["segments"]) - -def format_to_srt(segments): - output = "" - for i, segment in enumerate(segments): - output += f"{i + 1}\n" - output += f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n" - output += f"{segment['text']}\n\n" - return output - -def format_timestamp(t): - hh = t//3600 - mm = (t - hh*3600)//60 - ss = t - hh*3600 - mm*60 - mi = (t - int(t))*1000 - return f"{int(hh):02d}:{int(mm):02d}:{int(ss):02d},{int(mi):03d}" - - -langs = ["None"] + sorted(list(whisper.tokenizer.LANGUAGES.values())) -model_size = list(whisper._MODELS.keys()) - -with gr.Blocks() as demo: - - with gr.Row(): - - with gr.Column(): - - with gr.Row(): - url = gr.Textbox(placeholder='Youtube video URL', label='URL') - - with gr.Row(): - - model_size = gr.Dropdown(choices=model_size, value='tiny', label="Model") - lang = gr.Dropdown(choices=langs, value="None", label="Language (Optional)") - format = gr.Dropdown(choices=["None", ".srt"], value="None", label="Timestamps? (Optional)") - - with gr.Row(): - gr.Markdown("Larger models are more accurate, but slower. 
For 1min video, it'll take ~30s (tiny), ~1min (base), ~3min (small), ~5min (medium), etc.") - transcribe_btn = gr.Button('Transcribe') - - with gr.Column(): - outputs = gr.Textbox(placeholder='Transcription of the video', label='Transcription') - - transcribe_btn.click(get_transcript, inputs=[url, model_size, lang, format], outputs=outputs) - -demo.launch(debug=True) diff --git a/spaces/kepl/gpt/g4f/utils.py b/spaces/kepl/gpt/g4f/utils.py deleted file mode 100644 index d5ab41c79b44ab81e1843d209cb342bd83dafb42..0000000000000000000000000000000000000000 --- a/spaces/kepl/gpt/g4f/utils.py +++ /dev/null @@ -1,49 +0,0 @@ -import browser_cookie3 - - -class Utils: - browsers = [ - browser_cookie3.chrome, # 62.74% market share - browser_cookie3.safari, # 24.12% market share - browser_cookie3.firefox, # 4.56% market share - browser_cookie3.edge, # 2.85% market share - browser_cookie3.opera, # 1.69% market share - browser_cookie3.brave, # 0.96% market share - browser_cookie3.opera_gx, # 0.64% market share - browser_cookie3.vivaldi, # 0.32% market share - ] - - def get_cookies(domain: str, setName: str = None, setBrowser: str = False) -> dict: - cookies = {} - - if setBrowser != False: - for browser in Utils.browsers: - if browser.__name__ == setBrowser: - try: - for c in browser(domain_name=domain): - if c.name not in cookies: - cookies = cookies | {c.name: c.value} - - except Exception as e: - pass - - else: - for browser in Utils.browsers: - try: - for c in browser(domain_name=domain): - if c.name not in cookies: - cookies = cookies | {c.name: c.value} - - except Exception as e: - pass - - if setName: - try: - return {setName: cookies[setName]} - - except ValueError: - print(f'Error: could not find {setName} cookie in any browser.') - exit(1) - - else: - return cookies diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/train.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/train.py deleted file mode 100644 index 55eca2d0ad9463415970e09bccab8b722e496704..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/train.py +++ /dev/null @@ -1,141 +0,0 @@ -import argparse -import logging -import os - -import torch -import torch.distributed as dist -import torch.nn.functional as F -import torch.utils.data.distributed -from torch.nn.utils import clip_grad_norm_ - -import losses -from backbones import get_model -from dataset import MXFaceDataset, SyntheticDataset, DataLoaderX -from partial_fc import PartialFC -from utils.utils_amp import MaxClipGradScaler -from utils.utils_callbacks import CallBackVerification, CallBackLogging, CallBackModelCheckpoint -from utils.utils_config import get_config -from utils.utils_logging import AverageMeter, init_logging - - -def main(args): - cfg = get_config(args.config) - try: - world_size = int(os.environ['WORLD_SIZE']) - rank = int(os.environ['RANK']) - dist.init_process_group('nccl') - except KeyError: - world_size = 1 - rank = 0 - dist.init_process_group(backend='nccl', init_method="tcp://127.0.0.1:12584", rank=rank, world_size=world_size) - - local_rank = args.local_rank - torch.cuda.set_device(local_rank) - os.makedirs(cfg.output, exist_ok=True) - init_logging(rank, cfg.output) - - if cfg.rec == "synthetic": - train_set = SyntheticDataset(local_rank=local_rank) - else: - train_set = MXFaceDataset(root_dir=cfg.rec, local_rank=local_rank) - - train_sampler = torch.utils.data.distributed.DistributedSampler(train_set, shuffle=True) - 
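# --- editor's aside: illustrative sketch, not part of the original script -------------
# DataLoaderX below appears to be a project-specific prefetching DataLoader (defined in
# dataset.py, which is not shown in this diff). The same sharded loading can be written
# with a stock torch DataLoader; train_set, train_sampler and cfg refer to the objects
# created just above.
def _plain_distributed_loader(train_set, train_sampler, cfg):
    # the sampler already shards and shuffles per rank, so no shuffle flag is set here;
    # callers still invoke train_sampler.set_epoch(epoch) each epoch (as the training
    # loop further below does) so all ranks reshuffle consistently.
    return torch.utils.data.DataLoader(
        train_set, batch_size=cfg.batch_size, sampler=train_sampler,
        num_workers=2, pin_memory=True, drop_last=True)
# --- end of aside ----------------------------------------------------------------------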
train_loader = DataLoaderX( - local_rank=local_rank, dataset=train_set, batch_size=cfg.batch_size, - sampler=train_sampler, num_workers=2, pin_memory=True, drop_last=True) - backbone = get_model(cfg.network, dropout=0.0, fp16=cfg.fp16, num_features=cfg.embedding_size).to(local_rank) - - if cfg.resume: - try: - backbone_pth = os.path.join(cfg.output, "backbone.pth") - backbone.load_state_dict(torch.load(backbone_pth, map_location=torch.device(local_rank))) - if rank == 0: - logging.info("backbone resume successfully!") - except (FileNotFoundError, KeyError, IndexError, RuntimeError): - if rank == 0: - logging.info("resume fail, backbone init successfully!") - - backbone = torch.nn.parallel.DistributedDataParallel( - module=backbone, broadcast_buffers=False, device_ids=[local_rank]) - backbone.train() - margin_softmax = losses.get_loss(cfg.loss) - module_partial_fc = PartialFC( - rank=rank, local_rank=local_rank, world_size=world_size, resume=cfg.resume, - batch_size=cfg.batch_size, margin_softmax=margin_softmax, num_classes=cfg.num_classes, - sample_rate=cfg.sample_rate, embedding_size=cfg.embedding_size, prefix=cfg.output) - - opt_backbone = torch.optim.SGD( - params=[{'params': backbone.parameters()}], - lr=cfg.lr / 512 * cfg.batch_size * world_size, - momentum=0.9, weight_decay=cfg.weight_decay) - opt_pfc = torch.optim.SGD( - params=[{'params': module_partial_fc.parameters()}], - lr=cfg.lr / 512 * cfg.batch_size * world_size, - momentum=0.9, weight_decay=cfg.weight_decay) - - num_image = len(train_set) - total_batch_size = cfg.batch_size * world_size - cfg.warmup_step = num_image // total_batch_size * cfg.warmup_epoch - cfg.total_step = num_image // total_batch_size * cfg.num_epoch - - def lr_step_func(current_step): - cfg.decay_step = [x * num_image // total_batch_size for x in cfg.decay_epoch] - if current_step < cfg.warmup_step: - return current_step / cfg.warmup_step - else: - return 0.1 ** len([m for m in cfg.decay_step if m <= current_step]) - - scheduler_backbone = torch.optim.lr_scheduler.LambdaLR( - optimizer=opt_backbone, lr_lambda=lr_step_func) - scheduler_pfc = torch.optim.lr_scheduler.LambdaLR( - optimizer=opt_pfc, lr_lambda=lr_step_func) - - for key, value in cfg.items(): - num_space = 25 - len(key) - logging.info(": " + key + " " * num_space + str(value)) - - val_target = cfg.val_targets - callback_verification = CallBackVerification(2000, rank, val_target, cfg.rec) - callback_logging = CallBackLogging(50, rank, cfg.total_step, cfg.batch_size, world_size, None) - callback_checkpoint = CallBackModelCheckpoint(rank, cfg.output) - - loss = AverageMeter() - start_epoch = 0 - global_step = 0 - grad_amp = MaxClipGradScaler(cfg.batch_size, 128 * cfg.batch_size, growth_interval=100) if cfg.fp16 else None - for epoch in range(start_epoch, cfg.num_epoch): - train_sampler.set_epoch(epoch) - for step, (img, label) in enumerate(train_loader): - global_step += 1 - features = F.normalize(backbone(img)) - x_grad, loss_v = module_partial_fc.forward_backward(label, features, opt_pfc) - if cfg.fp16: - features.backward(grad_amp.scale(x_grad)) - grad_amp.unscale_(opt_backbone) - clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2) - grad_amp.step(opt_backbone) - grad_amp.update() - else: - features.backward(x_grad) - clip_grad_norm_(backbone.parameters(), max_norm=5, norm_type=2) - opt_backbone.step() - - opt_pfc.step() - module_partial_fc.update() - opt_backbone.zero_grad() - opt_pfc.zero_grad() - loss.update(loss_v, 1) - callback_logging(global_step, loss, epoch, cfg.fp16, 
scheduler_backbone.get_last_lr()[0], grad_amp) - callback_verification(global_step, backbone) - scheduler_backbone.step() - scheduler_pfc.step() - callback_checkpoint(global_step, backbone, module_partial_fc) - dist.destroy_process_group() - - -if __name__ == "__main__": - torch.backends.cudnn.benchmark = True - parser = argparse.ArgumentParser(description='PyTorch ArcFace Training') - parser.add_argument('config', type=str, help='py config file') - parser.add_argument('--local_rank', type=int, default=0, help='local_rank') - main(parser.parse_args()) diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/facerender/sync_batchnorm/unittest.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/facerender/sync_batchnorm/unittest.py deleted file mode 100644 index 0675c022e4ba85d38d1f813490f6740150909524..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/facerender/sync_batchnorm/unittest.py +++ /dev/null @@ -1,29 +0,0 @@ -# -*- coding: utf-8 -*- -# File : unittest.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import unittest - -import numpy as np -from torch.autograd import Variable - - -def as_numpy(v): - if isinstance(v, Variable): - v = v.data - return v.cpu().numpy() - - -class TorchTestCase(unittest.TestCase): - def assertTensorClose(self, a, b, atol=1e-3, rtol=1e-3): - npa, npb = as_numpy(a), as_numpy(b) - self.assertTrue( - np.allclose(npa, npb, atol=atol), - 'Tensor close check failed\n{}\n{}\nadiff={}, rdiff={}'.format(a, b, np.abs(npa - npb).max(), np.abs((npa - npb) / np.fmax(npa, 1e-5)).max()) - ) diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/eval_ijbc.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/eval_ijbc.py deleted file mode 100644 index 9c5a650d486d18eb02d6f60d448fc3b315261f5d..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/eval_ijbc.py +++ /dev/null @@ -1,483 +0,0 @@ -# coding: utf-8 - -import os -import pickle - -import matplotlib -import pandas as pd - -matplotlib.use('Agg') -import matplotlib.pyplot as plt -import timeit -import sklearn -import argparse -import cv2 -import numpy as np -import torch -from skimage import transform as trans -from backbones import get_model -from sklearn.metrics import roc_curve, auc - -from menpo.visualize.viewmatplotlib import sample_colours_from_colourmap -from prettytable import PrettyTable -from pathlib import Path - -import sys -import warnings - -sys.path.insert(0, "../") -warnings.filterwarnings("ignore") - -parser = argparse.ArgumentParser(description='do ijb test') -# general -parser.add_argument('--model-prefix', default='', help='path to load model.') -parser.add_argument('--image-path', default='', type=str, help='') -parser.add_argument('--result-dir', default='.', type=str, help='') -parser.add_argument('--batch-size', default=128, type=int, help='') -parser.add_argument('--network', default='iresnet50', type=str, help='') -parser.add_argument('--job', default='insightface', type=str, help='job name') -parser.add_argument('--target', default='IJBC', type=str, help='target, set to IJBC or IJBB') -args = parser.parse_args() - -target = args.target -model_path = args.model_prefix -image_path = args.image_path -result_dir = args.result_dir -gpu_id = None 
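# --- editor's aside: illustrative sketch, not part of the original script -------------
# Minimal use of the TorchTestCase helper from the sync_batchnorm unittest module above;
# the import path, test class and tensor values are made up for illustration.
# from sync_batchnorm.unittest import TorchTestCase   # actual path depends on the package layout
import torch as _torch

class _ExampleTensorTest(TorchTestCase):
    def test_close_within_default_tolerance(self):
        a = _torch.tensor([1.0, 2.0, 3.0])
        b = a + 1e-4                      # well inside the default atol of 1e-3
        self.assertTensorClose(a, b)      # passes; a larger gap would raise AssertionError
# --- end of aside ----------------------------------------------------------------------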
-use_norm_score = True # if Ture, TestMode(N1) -use_detector_score = True # if Ture, TestMode(D1) -use_flip_test = True # if Ture, TestMode(F1) -job = args.job -batch_size = args.batch_size - - -class Embedding(object): - def __init__(self, prefix, data_shape, batch_size=1): - image_size = (112, 112) - self.image_size = image_size - weight = torch.load(prefix) - resnet = get_model(args.network, dropout=0, fp16=False).cuda() - resnet.load_state_dict(weight) - model = torch.nn.DataParallel(resnet) - self.model = model - self.model.eval() - src = np.array([ - [30.2946, 51.6963], - [65.5318, 51.5014], - [48.0252, 71.7366], - [33.5493, 92.3655], - [62.7299, 92.2041]], dtype=np.float32) - src[:, 0] += 8.0 - self.src = src - self.batch_size = batch_size - self.data_shape = data_shape - - def get(self, rimg, landmark): - - assert landmark.shape[0] == 68 or landmark.shape[0] == 5 - assert landmark.shape[1] == 2 - if landmark.shape[0] == 68: - landmark5 = np.zeros((5, 2), dtype=np.float32) - landmark5[0] = (landmark[36] + landmark[39]) / 2 - landmark5[1] = (landmark[42] + landmark[45]) / 2 - landmark5[2] = landmark[30] - landmark5[3] = landmark[48] - landmark5[4] = landmark[54] - else: - landmark5 = landmark - tform = trans.SimilarityTransform() - tform.estimate(landmark5, self.src) - M = tform.params[0:2, :] - img = cv2.warpAffine(rimg, - M, (self.image_size[1], self.image_size[0]), - borderValue=0.0) - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img_flip = np.fliplr(img) - img = np.transpose(img, (2, 0, 1)) # 3*112*112, RGB - img_flip = np.transpose(img_flip, (2, 0, 1)) - input_blob = np.zeros((2, 3, self.image_size[1], self.image_size[0]), dtype=np.uint8) - input_blob[0] = img - input_blob[1] = img_flip - return input_blob - - @torch.no_grad() - def forward_db(self, batch_data): - imgs = torch.Tensor(batch_data).cuda() - imgs.div_(255).sub_(0.5).div_(0.5) - feat = self.model(imgs) - feat = feat.reshape([self.batch_size, 2 * feat.shape[1]]) - return feat.cpu().numpy() - - -# 将一个list尽量均分成n份,限制len(list)==n,份数大于原list内元素个数则分配空list[] -def divideIntoNstrand(listTemp, n): - twoList = [[] for i in range(n)] - for i, e in enumerate(listTemp): - twoList[i % n].append(e) - return twoList - - -def read_template_media_list(path): - # ijb_meta = np.loadtxt(path, dtype=str) - ijb_meta = pd.read_csv(path, sep=' ', header=None).values - templates = ijb_meta[:, 1].astype(np.int) - medias = ijb_meta[:, 2].astype(np.int) - return templates, medias - - -# In[ ]: - - -def read_template_pair_list(path): - # pairs = np.loadtxt(path, dtype=str) - pairs = pd.read_csv(path, sep=' ', header=None).values - # print(pairs.shape) - # print(pairs[:, 0].astype(np.int)) - t1 = pairs[:, 0].astype(np.int) - t2 = pairs[:, 1].astype(np.int) - label = pairs[:, 2].astype(np.int) - return t1, t2, label - - -# In[ ]: - - -def read_image_feature(path): - with open(path, 'rb') as fid: - img_feats = pickle.load(fid) - return img_feats - - -# In[ ]: - - -def get_image_feature(img_path, files_list, model_path, epoch, gpu_id): - batch_size = args.batch_size - data_shape = (3, 112, 112) - - files = files_list - print('files:', len(files)) - rare_size = len(files) % batch_size - faceness_scores = [] - batch = 0 - img_feats = np.empty((len(files), 1024), dtype=np.float32) - - batch_data = np.empty((2 * batch_size, 3, 112, 112)) - embedding = Embedding(model_path, data_shape, batch_size) - for img_index, each_line in enumerate(files[:len(files) - rare_size]): - name_lmk_score = each_line.strip().split(' ') - img_name = os.path.join(img_path, 
name_lmk_score[0]) - img = cv2.imread(img_name) - lmk = np.array([float(x) for x in name_lmk_score[1:-1]], - dtype=np.float32) - lmk = lmk.reshape((5, 2)) - input_blob = embedding.get(img, lmk) - - batch_data[2 * (img_index - batch * batch_size)][:] = input_blob[0] - batch_data[2 * (img_index - batch * batch_size) + 1][:] = input_blob[1] - if (img_index + 1) % batch_size == 0: - print('batch', batch) - img_feats[batch * batch_size:batch * batch_size + - batch_size][:] = embedding.forward_db(batch_data) - batch += 1 - faceness_scores.append(name_lmk_score[-1]) - - batch_data = np.empty((2 * rare_size, 3, 112, 112)) - embedding = Embedding(model_path, data_shape, rare_size) - for img_index, each_line in enumerate(files[len(files) - rare_size:]): - name_lmk_score = each_line.strip().split(' ') - img_name = os.path.join(img_path, name_lmk_score[0]) - img = cv2.imread(img_name) - lmk = np.array([float(x) for x in name_lmk_score[1:-1]], - dtype=np.float32) - lmk = lmk.reshape((5, 2)) - input_blob = embedding.get(img, lmk) - batch_data[2 * img_index][:] = input_blob[0] - batch_data[2 * img_index + 1][:] = input_blob[1] - if (img_index + 1) % rare_size == 0: - print('batch', batch) - img_feats[len(files) - - rare_size:][:] = embedding.forward_db(batch_data) - batch += 1 - faceness_scores.append(name_lmk_score[-1]) - faceness_scores = np.array(faceness_scores).astype(np.float32) - # img_feats = np.ones( (len(files), 1024), dtype=np.float32) * 0.01 - # faceness_scores = np.ones( (len(files), ), dtype=np.float32 ) - return img_feats, faceness_scores - - -# In[ ]: - - -def image2template_feature(img_feats=None, templates=None, medias=None): - # ========================================================== - # 1. face image feature l2 normalization. img_feats:[number_image x feats_dim] - # 2. compute media feature. - # 3. compute template feature. - # ========================================================== - unique_templates = np.unique(templates) - template_feats = np.zeros((len(unique_templates), img_feats.shape[1])) - - for count_template, uqt in enumerate(unique_templates): - - (ind_t,) = np.where(templates == uqt) - face_norm_feats = img_feats[ind_t] - face_medias = medias[ind_t] - unique_medias, unique_media_counts = np.unique(face_medias, - return_counts=True) - media_norm_feats = [] - for u, ct in zip(unique_medias, unique_media_counts): - (ind_m,) = np.where(face_medias == u) - if ct == 1: - media_norm_feats += [face_norm_feats[ind_m]] - else: # image features from the same video will be aggregated into one feature - media_norm_feats += [ - np.mean(face_norm_feats[ind_m], axis=0, keepdims=True) - ] - media_norm_feats = np.array(media_norm_feats) - # media_norm_feats = media_norm_feats / np.sqrt(np.sum(media_norm_feats ** 2, -1, keepdims=True)) - template_feats[count_template] = np.sum(media_norm_feats, axis=0) - if count_template % 2000 == 0: - print('Finish Calculating {} template features.'.format( - count_template)) - # template_norm_feats = template_feats / np.sqrt(np.sum(template_feats ** 2, -1, keepdims=True)) - template_norm_feats = sklearn.preprocessing.normalize(template_feats) - # print(template_norm_feats.shape) - return template_norm_feats, unique_templates - - -# In[ ]: - - -def verification(template_norm_feats=None, - unique_templates=None, - p1=None, - p2=None): - # ========================================================== - # Compute set-to-set Similarity Score. 
- # ========================================================== - template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int) - for count_template, uqt in enumerate(unique_templates): - template2id[uqt] = count_template - - score = np.zeros((len(p1),)) # save cosine distance between pairs - - total_pairs = np.array(range(len(p1))) - batchsize = 100000 # small batchsize instead of all pairs in one batch due to the memory limiation - sublists = [ - total_pairs[i:i + batchsize] for i in range(0, len(p1), batchsize) - ] - total_sublists = len(sublists) - for c, s in enumerate(sublists): - feat1 = template_norm_feats[template2id[p1[s]]] - feat2 = template_norm_feats[template2id[p2[s]]] - similarity_score = np.sum(feat1 * feat2, -1) - score[s] = similarity_score.flatten() - if c % 10 == 0: - print('Finish {}/{} pairs.'.format(c, total_sublists)) - return score - - -# In[ ]: -def verification2(template_norm_feats=None, - unique_templates=None, - p1=None, - p2=None): - template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int) - for count_template, uqt in enumerate(unique_templates): - template2id[uqt] = count_template - score = np.zeros((len(p1),)) # save cosine distance between pairs - total_pairs = np.array(range(len(p1))) - batchsize = 100000 # small batchsize instead of all pairs in one batch due to the memory limiation - sublists = [ - total_pairs[i:i + batchsize] for i in range(0, len(p1), batchsize) - ] - total_sublists = len(sublists) - for c, s in enumerate(sublists): - feat1 = template_norm_feats[template2id[p1[s]]] - feat2 = template_norm_feats[template2id[p2[s]]] - similarity_score = np.sum(feat1 * feat2, -1) - score[s] = similarity_score.flatten() - if c % 10 == 0: - print('Finish {}/{} pairs.'.format(c, total_sublists)) - return score - - -def read_score(path): - with open(path, 'rb') as fid: - img_feats = pickle.load(fid) - return img_feats - - -# # Step1: Load Meta Data - -# In[ ]: - -assert target == 'IJBC' or target == 'IJBB' - -# ============================================================= -# load image and template relationships for template feature embedding -# tid --> template id, mid --> media id -# format: -# image_name tid mid -# ============================================================= -start = timeit.default_timer() -templates, medias = read_template_media_list( - os.path.join('%s/meta' % image_path, - '%s_face_tid_mid.txt' % target.lower())) -stop = timeit.default_timer() -print('Time: %.2f s. ' % (stop - start)) - -# In[ ]: - -# ============================================================= -# load template pairs for template-to-template verification -# tid : template id, label : 1/0 -# format: -# tid_1 tid_2 label -# ============================================================= -start = timeit.default_timer() -p1, p2, label = read_template_pair_list( - os.path.join('%s/meta' % image_path, - '%s_template_pair_label.txt' % target.lower())) -stop = timeit.default_timer() -print('Time: %.2f s. 
' % (stop - start)) - -# # Step 2: Get Image Features - -# In[ ]: - -# ============================================================= -# load image features -# format: -# img_feats: [image_num x feats_dim] (227630, 512) -# ============================================================= -start = timeit.default_timer() -img_path = '%s/loose_crop' % image_path -img_list_path = '%s/meta/%s_name_5pts_score.txt' % (image_path, target.lower()) -img_list = open(img_list_path) -files = img_list.readlines() -# files_list = divideIntoNstrand(files, rank_size) -files_list = files - -# img_feats -# for i in range(rank_size): -img_feats, faceness_scores = get_image_feature(img_path, files_list, - model_path, 0, gpu_id) -stop = timeit.default_timer() -print('Time: %.2f s. ' % (stop - start)) -print('Feature Shape: ({} , {}) .'.format(img_feats.shape[0], - img_feats.shape[1])) - -# # Step3: Get Template Features - -# In[ ]: - -# ============================================================= -# compute template features from image features. -# ============================================================= -start = timeit.default_timer() -# ========================================================== -# Norm feature before aggregation into template feature? -# Feature norm from embedding network and faceness score are able to decrease weights for noise samples (not face). -# ========================================================== -# 1. FaceScore (Feature Norm) -# 2. FaceScore (Detector) - -if use_flip_test: - # concat --- F1 - # img_input_feats = img_feats - # add --- F2 - img_input_feats = img_feats[:, 0:img_feats.shape[1] // - 2] + img_feats[:, img_feats.shape[1] // 2:] -else: - img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2] - -if use_norm_score: - img_input_feats = img_input_feats -else: - # normalise features to remove norm information - img_input_feats = img_input_feats / np.sqrt( - np.sum(img_input_feats ** 2, -1, keepdims=True)) - -if use_detector_score: - print(img_input_feats.shape, faceness_scores.shape) - img_input_feats = img_input_feats * faceness_scores[:, np.newaxis] -else: - img_input_feats = img_input_feats - -template_norm_feats, unique_templates = image2template_feature( - img_input_feats, templates, medias) -stop = timeit.default_timer() -print('Time: %.2f s. ' % (stop - start)) - -# # Step 4: Get Template Similarity Scores - -# In[ ]: - -# ============================================================= -# compute verification scores between template pairs. -# ============================================================= -start = timeit.default_timer() -score = verification(template_norm_feats, unique_templates, p1, p2) -stop = timeit.default_timer() -print('Time: %.2f s. 
' % (stop - start)) - -# In[ ]: -save_path = os.path.join(result_dir, args.job) -# save_path = result_dir + '/%s_result' % target - -if not os.path.exists(save_path): - os.makedirs(save_path) - -score_save_file = os.path.join(save_path, "%s.npy" % target.lower()) -np.save(score_save_file, score) - -# # Step 5: Get ROC Curves and TPR@FPR Table - -# In[ ]: - -files = [score_save_file] -methods = [] -scores = [] -for file in files: - methods.append(Path(file).stem) - scores.append(np.load(file)) - -methods = np.array(methods) -scores = dict(zip(methods, scores)) -colours = dict( - zip(methods, sample_colours_from_colourmap(methods.shape[0], 'Set2'))) -x_labels = [10 ** -6, 10 ** -5, 10 ** -4, 10 ** -3, 10 ** -2, 10 ** -1] -tpr_fpr_table = PrettyTable(['Methods'] + [str(x) for x in x_labels]) -fig = plt.figure() -for method in methods: - fpr, tpr, _ = roc_curve(label, scores[method]) - roc_auc = auc(fpr, tpr) - fpr = np.flipud(fpr) - tpr = np.flipud(tpr) # select largest tpr at same fpr - plt.plot(fpr, - tpr, - color=colours[method], - lw=1, - label=('[%s (AUC = %0.4f %%)]' % - (method.split('-')[-1], roc_auc * 100))) - tpr_fpr_row = [] - tpr_fpr_row.append("%s-%s" % (method, target)) - for fpr_iter in np.arange(len(x_labels)): - _, min_index = min( - list(zip(abs(fpr - x_labels[fpr_iter]), range(len(fpr))))) - tpr_fpr_row.append('%.2f' % (tpr[min_index] * 100)) - tpr_fpr_table.add_row(tpr_fpr_row) -plt.xlim([10 ** -6, 0.1]) -plt.ylim([0.3, 1.0]) -plt.grid(linestyle='--', linewidth=1) -plt.xticks(x_labels) -plt.yticks(np.linspace(0.3, 1.0, 8, endpoint=True)) -plt.xscale('log') -plt.xlabel('False Positive Rate') -plt.ylabel('True Positive Rate') -plt.title('ROC on IJB') -plt.legend(loc="lower right") -fig.savefig(os.path.join(save_path, '%s.pdf' % target.lower())) -print(tpr_fpr_table) diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/base_model.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/base_model.py deleted file mode 100644 index cfe64a7f739ad8f8cfbf3073a2bf49e1468127fd..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/base_model.py +++ /dev/null @@ -1,316 +0,0 @@ -"""This script defines the base network model for Deep3DFaceRecon_pytorch -""" - -import os -import numpy as np -import torch -from collections import OrderedDict -from abc import ABC, abstractmethod -from . import networks - - -class BaseModel(ABC): - """This class is an abstract base class (ABC) for models. - To create a subclass, you need to implement the following five functions: - -- <__init__>: initialize the class; first call BaseModel.__init__(self, opt). - -- : unpack data from dataset and apply preprocessing. - -- : produce intermediate results. - -- : calculate losses, gradients, and update network weights. - -- : (optionally) add model-specific options and set default options. - """ - - def __init__(self, opt): - """Initialize the BaseModel class. - - Parameters: - opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions - - When creating your custom class, you need to implement your own initialization. - In this fucntion, you should first call - Then, you need to define four lists: - -- self.loss_names (str list): specify the training losses that you want to plot and save. - -- self.model_names (str list): specify the images that you want to display and save. - -- self.visual_names (str list): define networks used in our training. 
- -- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example. - """ - self.opt = opt - self.isTrain = False - self.device = torch.device('cpu') - self.save_dir = " " # os.path.join(opt.checkpoints_dir, opt.name) # save all the checkpoints to save_dir - self.loss_names = [] - self.model_names = [] - self.visual_names = [] - self.parallel_names = [] - self.optimizers = [] - self.image_paths = [] - self.metric = 0 # used for learning rate policy 'plateau' - - @staticmethod - def dict_grad_hook_factory(add_func=lambda x: x): - saved_dict = dict() - - def hook_gen(name): - def grad_hook(grad): - saved_vals = add_func(grad) - saved_dict[name] = saved_vals - return grad_hook - return hook_gen, saved_dict - - @staticmethod - def modify_commandline_options(parser, is_train): - """Add new model-specific options, and rewrite default values for existing options. - - Parameters: - parser -- original option parser - is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options. - - Returns: - the modified parser. - """ - return parser - - @abstractmethod - def set_input(self, input): - """Unpack input data from the dataloader and perform necessary pre-processing steps. - - Parameters: - input (dict): includes the data itself and its metadata information. - """ - pass - - @abstractmethod - def forward(self): - """Run forward pass; called by both functions and .""" - pass - - @abstractmethod - def optimize_parameters(self): - """Calculate losses, gradients, and update network weights; called in every training iteration""" - pass - - def setup(self, opt): - """Load and print networks; create schedulers - - Parameters: - opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions - """ - if self.isTrain: - self.schedulers = [networks.get_scheduler(optimizer, opt) for optimizer in self.optimizers] - - if not self.isTrain or opt.continue_train: - load_suffix = opt.epoch - self.load_networks(load_suffix) - - - # self.print_networks(opt.verbose) - - def parallelize(self, convert_sync_batchnorm=True): - if not self.opt.use_ddp: - for name in self.parallel_names: - if isinstance(name, str): - module = getattr(self, name) - setattr(self, name, module.to(self.device)) - else: - for name in self.model_names: - if isinstance(name, str): - module = getattr(self, name) - if convert_sync_batchnorm: - module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module) - setattr(self, name, torch.nn.parallel.DistributedDataParallel(module.to(self.device), - device_ids=[self.device.index], - find_unused_parameters=True, broadcast_buffers=True)) - - # DistributedDataParallel is not needed when a module doesn't have any parameter that requires a gradient. 
- for name in self.parallel_names: - if isinstance(name, str) and name not in self.model_names: - module = getattr(self, name) - setattr(self, name, module.to(self.device)) - - # put state_dict of optimizer to gpu device - if self.opt.phase != 'test': - if self.opt.continue_train: - for optim in self.optimizers: - for state in optim.state.values(): - for k, v in state.items(): - if isinstance(v, torch.Tensor): - state[k] = v.to(self.device) - - def data_dependent_initialize(self, data): - pass - - def train(self): - """Make models train mode""" - for name in self.model_names: - if isinstance(name, str): - net = getattr(self, name) - net.train() - - def eval(self): - """Make models eval mode""" - for name in self.model_names: - if isinstance(name, str): - net = getattr(self, name) - net.eval() - - def test(self): - """Forward function used in test time. - - This function wraps function in no_grad() so we don't save intermediate steps for backprop - It also calls to produce additional visualization results - """ - with torch.no_grad(): - self.forward() - self.compute_visuals() - - def compute_visuals(self): - """Calculate additional output images for visdom and HTML visualization""" - pass - - def get_image_paths(self, name='A'): - """ Return image paths that are used to load current data""" - return self.image_paths if name =='A' else self.image_paths_B - - def update_learning_rate(self): - """Update learning rates for all the networks; called at the end of every epoch""" - for scheduler in self.schedulers: - if self.opt.lr_policy == 'plateau': - scheduler.step(self.metric) - else: - scheduler.step() - - lr = self.optimizers[0].param_groups[0]['lr'] - print('learning rate = %.7f' % lr) - - def get_current_visuals(self): - """Return visualization images. train.py will display these images with visdom, and save the images to a HTML""" - visual_ret = OrderedDict() - for name in self.visual_names: - if isinstance(name, str): - visual_ret[name] = getattr(self, name)[:, :3, ...] - return visual_ret - - def get_current_losses(self): - """Return traning losses / errors. train.py will print out these errors on console, and save them to a file""" - errors_ret = OrderedDict() - for name in self.loss_names: - if isinstance(name, str): - errors_ret[name] = float(getattr(self, 'loss_' + name)) # float(...) works for both scalar tensor and float number - return errors_ret - - def save_networks(self, epoch): - """Save all the networks to the disk. 
- - Parameters: - epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name) - """ - if not os.path.isdir(self.save_dir): - os.makedirs(self.save_dir) - - save_filename = 'epoch_%s.pth' % (epoch) - save_path = os.path.join(self.save_dir, save_filename) - - save_dict = {} - for name in self.model_names: - if isinstance(name, str): - net = getattr(self, name) - if isinstance(net, torch.nn.DataParallel) or isinstance(net, - torch.nn.parallel.DistributedDataParallel): - net = net.module - save_dict[name] = net.state_dict() - - - for i, optim in enumerate(self.optimizers): - save_dict['opt_%02d'%i] = optim.state_dict() - - for i, sched in enumerate(self.schedulers): - save_dict['sched_%02d'%i] = sched.state_dict() - - torch.save(save_dict, save_path) - - def __patch_instance_norm_state_dict(self, state_dict, module, keys, i=0): - """Fix InstanceNorm checkpoints incompatibility (prior to 0.4)""" - key = keys[i] - if i + 1 == len(keys): # at the end, pointing to a parameter/buffer - if module.__class__.__name__.startswith('InstanceNorm') and \ - (key == 'running_mean' or key == 'running_var'): - if getattr(module, key) is None: - state_dict.pop('.'.join(keys)) - if module.__class__.__name__.startswith('InstanceNorm') and \ - (key == 'num_batches_tracked'): - state_dict.pop('.'.join(keys)) - else: - self.__patch_instance_norm_state_dict(state_dict, getattr(module, key), keys, i + 1) - - def load_networks(self, epoch): - """Load all the networks from the disk. - - Parameters: - epoch (int) -- current epoch; used in the file name '%s_net_%s.pth' % (epoch, name) - """ - if self.opt.isTrain and self.opt.pretrained_name is not None: - load_dir = os.path.join(self.opt.checkpoints_dir, self.opt.pretrained_name) - else: - load_dir = self.save_dir - load_filename = 'epoch_%s.pth' % (epoch) - load_path = os.path.join(load_dir, load_filename) - state_dict = torch.load(load_path, map_location=self.device) - print('loading the model from %s' % load_path) - - for name in self.model_names: - if isinstance(name, str): - net = getattr(self, name) - if isinstance(net, torch.nn.DataParallel): - net = net.module - net.load_state_dict(state_dict[name]) - - if self.opt.phase != 'test': - if self.opt.continue_train: - print('loading the optim from %s' % load_path) - for i, optim in enumerate(self.optimizers): - optim.load_state_dict(state_dict['opt_%02d'%i]) - - try: - print('loading the sched from %s' % load_path) - for i, sched in enumerate(self.schedulers): - sched.load_state_dict(state_dict['sched_%02d'%i]) - except: - print('Failed to load schedulers, set schedulers according to epoch count manually') - for i, sched in enumerate(self.schedulers): - sched.last_epoch = self.opt.epoch_count - 1 - - - - - def print_networks(self, verbose): - """Print the total number of parameters in the network and (if verbose) network architecture - - Parameters: - verbose (bool) -- if verbose: print the network architecture - """ - print('---------- Networks initialized -------------') - for name in self.model_names: - if isinstance(name, str): - net = getattr(self, name) - num_params = 0 - for param in net.parameters(): - num_params += param.numel() - if verbose: - print(net) - print('[Network %s] Total number of parameters : %.3f M' % (name, num_params / 1e6)) - print('-----------------------------------------------') - - def set_requires_grad(self, nets, requires_grad=False): - """Set requies_grad=Fasle for all the networks to avoid unnecessary computations - Parameters: - nets (network list) -- a list of 
networks - requires_grad (bool) -- whether the networks require gradients or not - """ - if not isinstance(nets, list): - nets = [nets] - for net in nets: - if net is not None: - for param in net.parameters(): - param.requires_grad = requires_grad - - def generate_visuals_for_evaluation(self, data, mode): - return {} diff --git a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/main.py b/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/main.py deleted file mode 100644 index b609b96362a5f59d1dd1704bfaea33d866b97d4b..0000000000000000000000000000000000000000 --- a/spaces/kidcoconut/spcdkr_omdenasaudi_liverhccxai/main.py +++ /dev/null @@ -1,92 +0,0 @@ -''' - purpose: fastAPI routing -''' - -from fastapi import FastAPI -from fastapi.responses import HTMLResponse -from fastapi import APIRouter, Request, Response -from fastapi.templating import Jinja2Templates -import uvicorn - -#--- import custom libraries -import lib.utils as libUtils - - -#--- imported route handlers -from routes.api.rte_api import rteApi -from routes.api.rte_wsi import rteWsi -from routes.api.rte_tiles import rteTiles - - -#--- fastAPI self doc descriptors -description = """ - Omdena Saudi Arabia: Liver Cancer HCC Diagnosis with XAI - - - - ## key business benefit #1 - ## key business benefit #2 - ## key business benefit #3 - - You will be able to: - * key feature #1 - * key feature #2 - * key feature #3 -""" - -app = FastAPI( - title="App: Omdena Saudi Arabia - Liver Cancer HCC Diagnosis with XAI", - description=description, - version="0.0.1", - terms_of_service="http://example.com/terms/", - contact={ - "name": "Iain McKone", - "email": "iain.mckone@gmail.com", - }, - license_info={ - "name": "Apache 2.0", - "url": "https://www.apache.org/licenses/LICENSE-2.0.html", - }, -) - - -#--- configure route handlers -app.include_router(rteWsi, prefix="/api/wsi") -app.include_router(rteTiles, prefix="/api/tiles") -app.include_router(rteApi, prefix="/api") - -#app.include_router(rteQa, prefix="/qa") - - -m_kstrPath_templ = libUtils.pth_templ -m_templRef = Jinja2Templates(directory=str(m_kstrPath_templ)) - - -def get_jinja2Templ(request: Request, pdfResults, strParamTitle, lngNumRecords, blnIsTrain=False, blnIsSample=False): - lngNumRecords = min(lngNumRecords, libUtils.m_klngMaxRecords) - if (blnIsTrain): strParamTitle = strParamTitle + " - Training Data" - if (not blnIsTrain): strParamTitle = strParamTitle + " - Test Data" - if (blnIsSample): lngNumRecords = libUtils.m_klngSampleSize - strParamTitle = strParamTitle + " - max " + str(lngNumRecords) + " rows" - - kstrTempl = 'templ_showDataframe.html' - jsonContext = {'request': request, - 'paramTitle': strParamTitle, - 'paramDataframe': pdfResults.sample(lngNumRecords).to_html(classes='table table-striped') - } - result = m_templRef.TemplateResponse(kstrTempl, jsonContext) - return result - - -#--- get main ui/ux entry point -@app.get('/') -def index(): - return { - "message": "Landing page: Omdena Saudi Arabia - Liver HCC Diagnosis with XAI" - } - - - -if __name__ == '__main__': - uvicorn.run("main:app", host="0.0.0.0", port=49300, reload=True) -#CMD ["uvicorn", "main:app", "--host=0.0.0.0", "--reload"] diff --git a/spaces/kkinc/gsdf-Counterfeit-V2.5/README.md b/spaces/kkinc/gsdf-Counterfeit-V2.5/README.md deleted file mode 100644 index cf9d9837f1761b5e03ae2fb4dac8b310709af2a6..0000000000000000000000000000000000000000 --- a/spaces/kkinc/gsdf-Counterfeit-V2.5/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Gsdf Counterfeit V2.5 -emoji: 🏢 -colorFrom: blue -colorTo: red -sdk: 
gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kotstantinovskii/YSDA_arxiv_classification/lables.py b/spaces/kotstantinovskii/YSDA_arxiv_classification/lables.py deleted file mode 100644 index 3b9db0910ea1bd3128d22dcbacf1409ab13c6cd1..0000000000000000000000000000000000000000 --- a/spaces/kotstantinovskii/YSDA_arxiv_classification/lables.py +++ /dev/null @@ -1,465 +0,0 @@ -num_to_classes = {0: 'astro-ph.CO', - 1: 'astro-ph.EP', - 2: 'astro-ph.GA', - 3: 'astro-ph.HE', - 4: 'astro-ph.IM', - 5: 'astro-ph.SR', - 6: 'astro-ph.all', - 7: 'cond-mat.dis-nn', - 8: 'cond-mat.mes-hall', - 9: 'cond-mat.mtrl-sci', - 10: 'cond-mat.other', - 11: 'cond-mat.quant-gas', - 12: 'cond-mat.soft', - 13: 'cond-mat.stat-mech', - 14: 'cond-mat.str-el', - 15: 'cond-mat.supr-con', - 16: 'cs.AI', - 17: 'cs.AR', - 18: 'cs.CC', - 19: 'cs.CE', - 20: 'cs.CG', - 21: 'cs.CL', - 22: 'cs.CR', - 23: 'cs.CV', - 24: 'cs.CY', - 25: 'cs.DB', - 26: 'cs.DC', - 27: 'cs.DL', - 28: 'cs.DM', - 29: 'cs.DS', - 30: 'cs.ET', - 31: 'cs.FL', - 32: 'cs.GL', - 33: 'cs.GR', - 34: 'cs.GT', - 35: 'cs.HC', - 36: 'cs.IR', - 37: 'cs.IT', - 38: 'cs.LG', - 39: 'cs.LO', - 40: 'cs.MA', - 41: 'cs.MM', - 42: 'cs.MS', - 43: 'cs.NA', - 44: 'cs.NE', - 45: 'cs.NI', - 46: 'cs.OH', - 47: 'cs.OS', - 48: 'cs.PF', - 49: 'cs.PL', - 50: 'cs.RO', - 51: 'cs.SC', - 52: 'cs.SD', - 53: 'cs.SE', - 54: 'cs.SI', - 55: 'cs.SY', - 56: 'econ.EM', - 57: 'econ.GN', - 58: 'econ.TH', - 59: 'eess.AS', - 60: 'eess.IV', - 61: 'eess.SP', - 62: 'eess.SY', - 63: 'gr-qc', - 64: 'hep-ex', - 65: 'hep-lat', - 66: 'hep-ph', - 67: 'hep-th', - 68: 'math-ph', - 69: 'math.AC', - 70: 'math.AG', - 71: 'math.AP', - 72: 'math.AT', - 73: 'math.CA', - 74: 'math.CO', - 75: 'math.CT', - 76: 'math.CV', - 77: 'math.DG', - 78: 'math.DS', - 79: 'math.FA', - 80: 'math.GM', - 81: 'math.GN', - 82: 'math.GR', - 83: 'math.GT', - 84: 'math.HO', - 85: 'math.KT', - 86: 'math.LO', - 87: 'math.MG', - 88: 'math.NA', - 89: 'math.NT', - 90: 'math.OA', - 91: 'math.OC', - 92: 'math.PR', - 93: 'math.QA', - 94: 'math.RA', - 95: 'math.RT', - 96: 'math.SG', - 97: 'math.SP', - 98: 'math.ST', - 99: 'nlin.AO', - 100: 'nlin.CD', - 101: 'nlin.CG', - 102: 'nlin.PS', - 103: 'nlin.SI', - 104: 'nucl-ex', - 105: 'nucl-th', - 106: 'physics.acc-ph', - 107: 'physics.ao-ph', - 108: 'physics.app-ph', - 109: 'physics.atm-clus', - 110: 'physics.atom-ph', - 111: 'physics.bio-ph', - 112: 'physics.chem-ph', - 113: 'physics.class-ph', - 114: 'physics.comp-ph', - 115: 'physics.data-an', - 116: 'physics.ed-ph', - 117: 'physics.flu-dyn', - 118: 'physics.gen-ph', - 119: 'physics.geo-ph', - 120: 'physics.hist-ph', - 121: 'physics.ins-det', - 122: 'physics.med-ph', - 123: 'physics.optics', - 124: 'physics.plasm-ph', - 125: 'physics.pop-ph', - 126: 'physics.soc-ph', - 127: 'physics.space-ph', - 128: 'q-bio.BM', - 129: 'q-bio.CB', - 130: 'q-bio.GN', - 131: 'q-bio.MN', - 132: 'q-bio.NC', - 133: 'q-bio.OT', - 134: 'q-bio.PE', - 135: 'q-bio.QM', - 136: 'q-bio.SC', - 137: 'q-bio.TO', - 138: 'q-fin.CP', - 139: 'q-fin.EC', - 140: 'q-fin.GN', - 141: 'q-fin.MF', - 142: 'q-fin.PM', - 143: 'q-fin.PR', - 144: 'q-fin.RM', - 145: 'q-fin.ST', - 146: 'q-fin.TR', - 147: 'quant-ph', - 148: 'stat.AP', - 149: 'stat.CO', - 150: 'stat.ME', - 151: 'stat.ML', - 152: 'stat.OT'} - -classes_to_num = {'astro-ph.CO': 0, - 'astro-ph.EP': 1, - 'astro-ph.GA': 2, - 'astro-ph.HE': 3, - 'astro-ph.IM': 4, - 'astro-ph.SR': 5, - 
'astro-ph.all': 6, - 'cond-mat.dis-nn': 7, - 'cond-mat.mes-hall': 8, - 'cond-mat.mtrl-sci': 9, - 'cond-mat.other': 10, - 'cond-mat.quant-gas': 11, - 'cond-mat.soft': 12, - 'cond-mat.stat-mech': 13, - 'cond-mat.str-el': 14, - 'cond-mat.supr-con': 15, - 'cs.AI': 16, - 'cs.AR': 17, - 'cs.CC': 18, - 'cs.CE': 19, - 'cs.CG': 20, - 'cs.CL': 21, - 'cs.CR': 22, - 'cs.CV': 23, - 'cs.CY': 24, - 'cs.DB': 25, - 'cs.DC': 26, - 'cs.DL': 27, - 'cs.DM': 28, - 'cs.DS': 29, - 'cs.ET': 30, - 'cs.FL': 31, - 'cs.GL': 32, - 'cs.GR': 33, - 'cs.GT': 34, - 'cs.HC': 35, - 'cs.IR': 36, - 'cs.IT': 37, - 'cs.LG': 38, - 'cs.LO': 39, - 'cs.MA': 40, - 'cs.MM': 41, - 'cs.MS': 42, - 'cs.NA': 43, - 'cs.NE': 44, - 'cs.NI': 45, - 'cs.OH': 46, - 'cs.OS': 47, - 'cs.PF': 48, - 'cs.PL': 49, - 'cs.RO': 50, - 'cs.SC': 51, - 'cs.SD': 52, - 'cs.SE': 53, - 'cs.SI': 54, - 'cs.SY': 55, - 'econ.EM': 56, - 'econ.GN': 57, - 'econ.TH': 58, - 'eess.AS': 59, - 'eess.IV': 60, - 'eess.SP': 61, - 'eess.SY': 62, - 'gr-qc': 63, - 'hep-ex': 64, - 'hep-lat': 65, - 'hep-ph': 66, - 'hep-th': 67, - 'math-ph': 68, - 'math.AC': 69, - 'math.AG': 70, - 'math.AP': 71, - 'math.AT': 72, - 'math.CA': 73, - 'math.CO': 74, - 'math.CT': 75, - 'math.CV': 76, - 'math.DG': 77, - 'math.DS': 78, - 'math.FA': 79, - 'math.GM': 80, - 'math.GN': 81, - 'math.GR': 82, - 'math.GT': 83, - 'math.HO': 84, - 'math.KT': 85, - 'math.LO': 86, - 'math.MG': 87, - 'math.NA': 88, - 'math.NT': 89, - 'math.OA': 90, - 'math.OC': 91, - 'math.PR': 92, - 'math.QA': 93, - 'math.RA': 94, - 'math.RT': 95, - 'math.SG': 96, - 'math.SP': 97, - 'math.ST': 98, - 'nlin.AO': 99, - 'nlin.CD': 100, - 'nlin.CG': 101, - 'nlin.PS': 102, - 'nlin.SI': 103, - 'nucl-ex': 104, - 'nucl-th': 105, - 'physics.acc-ph': 106, - 'physics.ao-ph': 107, - 'physics.app-ph': 108, - 'physics.atm-clus': 109, - 'physics.atom-ph': 110, - 'physics.bio-ph': 111, - 'physics.chem-ph': 112, - 'physics.class-ph': 113, - 'physics.comp-ph': 114, - 'physics.data-an': 115, - 'physics.ed-ph': 116, - 'physics.flu-dyn': 117, - 'physics.gen-ph': 118, - 'physics.geo-ph': 119, - 'physics.hist-ph': 120, - 'physics.ins-det': 121, - 'physics.med-ph': 122, - 'physics.optics': 123, - 'physics.plasm-ph': 124, - 'physics.pop-ph': 125, - 'physics.soc-ph': 126, - 'physics.space-ph': 127, - 'q-bio.BM': 128, - 'q-bio.CB': 129, - 'q-bio.GN': 130, - 'q-bio.MN': 131, - 'q-bio.NC': 132, - 'q-bio.OT': 133, - 'q-bio.PE': 134, - 'q-bio.QM': 135, - 'q-bio.SC': 136, - 'q-bio.TO': 137, - 'q-fin.CP': 138, - 'q-fin.EC': 139, - 'q-fin.GN': 140, - 'q-fin.MF': 141, - 'q-fin.PM': 142, - 'q-fin.PR': 143, - 'q-fin.RM': 144, - 'q-fin.ST': 145, - 'q-fin.TR': 146, - 'quant-ph': 147, - 'stat.AP': 148, - 'stat.CO': 149, - 'stat.ME': 150, - 'stat.ML': 151, - 'stat.OT': 152} - -taxonomy = {'cs.AI': 'Artificial Intelligence', - 'cs.AR': 'Hardware Architecture', - 'cs.CC': 'Computational Complexity', - 'cs.CE': 'Computational Engineering, Finance, and Science', - 'cs.CG': 'Computational Geometry', - 'cs.CL': 'Computation and Language', - 'cs.CR': 'Cryptography and Security', - 'cs.CV': 'Computer Vision and Pattern Recognition', - 'cs.CY': 'Computers and Society', - 'cs.DB': 'Databases', - 'cs.DC': 'Distributed, Parallel, and Cluster Computing', - 'cs.DL': 'Digital Libraries', - 'cs.DM': 'Discrete Mathematics', - 'cs.DS': 'Data Structures and Algorithms', - 'cs.ET': 'Emerging Technologies', - 'cs.FL': 'Formal Languages and Automata Theory', - 'cs.GL': 'General Literature', - 'cs.GR': 'Graphics', - 'cs.GT': 'Computer Science and Game Theory', - 'cs.HC': 'Human-Computer Interaction', 
- 'cs.IR': 'Information Retrieval', - 'cs.IT': 'Information Theory', - 'cs.LG': 'Machine Learning', - 'cs.LO': 'Logic in Computer Science', - 'cs.MA': 'Multiagent Systems', - 'cs.MM': 'Multimedia', - 'cs.MS': 'Mathematical Software', - 'cs.NA': 'Numerical Analysis', - 'cs.NE': 'Neural and Evolutionary Computing', - 'cs.NI': 'Networking and Internet Architecture', - 'cs.OH': 'Other Computer Science', - 'cs.OS': 'Operating Systems', - 'cs.PF': 'Performance', - 'cs.PL': 'Programming Languages', - 'cs.RO': 'Robotics', - 'cs.SC': 'Symbolic Computation', - 'cs.SD': 'Sound', - 'cs.SE': 'Software Engineering', - 'cs.SI': 'Social and Information Networks', - 'cs.SY': 'Systems and Control', - 'econ.EM': 'Econometrics', - 'econ.GN': 'General Economics', - 'econ.TH': 'Theoretical Economics', - 'eess.AS': 'Audio and Speech Processing', - 'eess.IV': 'Image and Video Processing', - 'eess.SP': 'Signal Processing', - 'eess.SY': 'Systems and Control', - 'math.AC': 'Commutative Algebra', - 'math.AG': 'Algebraic Geometry', - 'math.AP': 'Analysis of PDEs', - 'math.AT': 'Algebraic Topology', - 'math.CA': 'Classical Analysis and ODEs', - 'math.CO': 'Combinatorics', - 'math.CT': 'Category Theory', - 'math.CV': 'Complex Variables', - 'math.DG': 'Differential Geometry', - 'math.DS': 'Dynamical Systems', - 'math.FA': 'Functional Analysis', - 'math.GM': 'General Mathematics', - 'math.GN': 'General Topology', - 'math.GR': 'Group Theory', - 'math.GT': 'Geometric Topology', - 'math.HO': 'History and Overview', - 'math.IT': 'Information Theory', - 'math.KT': 'K-Theory and Homology', - 'math.LO': 'Logic', - 'math.MG': 'Metric Geometry', - 'math.MP': 'Mathematical Physics', - 'math.NA': 'Numerical Analysis', - 'math.NT': 'Number Theory', - 'math.OA': 'Operator Algebras', - 'math.OC': 'Optimization and Control', - 'math.PR': 'Probability', - 'math.QA': 'Quantum Algebra', - 'math.RA': 'Rings and Algebras', - 'math.RT': 'Representation Theory', - 'math.SG': 'Symplectic Geometry', - 'math.SP': 'Spectral Theory', - 'math.ST': 'Statistics Theory', - 'astro-ph.all': 'Astrophysics', - 'astro-ph.CO': 'Cosmology and Nongalactic Astrophysics', - 'astro-ph.EP': 'Earth and Planetary Astrophysics', - 'astro-ph.GA': 'Astrophysics of Galaxies', - 'astro-ph.HE': 'High Energy Astrophysical Phenomena', - 'astro-ph.IM': 'Instrumentation and Methods for Astrophysics', - 'astro-ph.SR': 'Solar and Stellar Astrophysics', - 'cond-mat.dis-nn': 'Disordered Systems and Neural Networks', - 'cond-mat.mes-hall': 'Mesoscale and Nanoscale Physics', - 'cond-mat.mtrl-sci': 'Materials Science', - 'cond-mat.other': 'Other Condensed Matter', - 'cond-mat.quant-gas': 'Quantum Gases', - 'cond-mat.soft': 'Soft Condensed Matter', - 'cond-mat.stat-mech': 'Statistical Mechanics', - 'cond-mat.str-el': 'Strongly Correlated Electrons', - 'cond-mat.supr-con': 'Superconductivity', - 'gr-qc': 'General Relativity and Quantum Cosmology', - 'hep-ex': 'High Energy Physics - Experiment', - 'hep-lat': 'High Energy Physics - Lattice', - 'hep-ph': 'High Energy Physics - Phenomenology', - 'hep-th': 'High Energy Physics - Theory', - 'math-ph': 'Mathematical Physics', - 'nlin.AO': 'Adaptation and Self-Organizing Systems', - 'nlin.CD': 'Chaotic Dynamics', - 'nlin.CG': 'Cellular Automata and Lattice Gases', - 'nlin.PS': 'Pattern Formation and Solitons', - 'nlin.SI': 'Exactly Solvable and Integrable Systems', - 'nucl-ex': 'Nuclear Experiment', - 'nucl-th': 'Nuclear Theory', - 'physics.acc-ph': 'Accelerator Physics', - 'physics.ao-ph': 'Atmospheric and Oceanic Physics', - 
'physics.app-ph': 'Applied Physics', - 'physics.atm-clus': 'Atomic and Molecular Clusters', - 'atom-ph.all': 'Atomic Physics', - 'physics.atom-ph': 'Atomic Physics', - 'physics.bio-ph': 'Biological Physics', - 'physics.chem-ph': 'Chemical Physics', - 'physics.class-ph': 'Classical Physics', - 'physics.comp-ph': 'Computational Physics', - 'physics.data-an': 'Data Analysis, Statistics and Probability', - 'physics.ed-ph': 'Physics Education', - 'physics.flu-dyn': 'Fluid Dynamics', - 'physics.gen-ph': 'General Physics', - 'physics.geo-ph': 'Geophysics', - 'physics.hist-ph': 'History and Philosophy of Physics', - 'physics.ins-det': 'Instrumentation and Detectors', - 'physics.med-ph': 'Medical Physics', - 'physics.optics': 'Optics', - 'physics.plasm-ph': 'Plasma Physics', - 'physics.pop-ph': 'Popular Physics', - 'physics.soc-ph': 'Physics and Society', - 'physics.space-ph': 'Space Physics', - 'quant-ph': 'Quantum Physics', - 'q-bio.BM': 'Biomolecules', - 'q-bio.CB': 'Cell Behavior', - 'q-bio.GN': 'Genomics', - 'q-bio.MN': 'Molecular Networks', - 'q-bio.NC': 'Neurons and Cognition', - 'q-bio.OT': 'Other Quantitative Biology', - 'q-bio.PE': 'Populations and Evolution', - 'q-bio.QM': 'Quantitative Methods', - 'q-bio.SC': 'Subcellular Processes', - 'q-bio.TO': 'Tissues and Organs', - 'q-fin.CP': 'Computational Finance', - 'q-fin.EC': 'Economics', - 'q-fin.GN': 'General Finance', - 'q-fin.MF': 'Mathematical Finance', - 'q-fin.PM': 'Portfolio Management', - 'q-fin.PR': 'Pricing of Securities', - 'q-fin.RM': 'Risk Management', - 'q-fin.ST': 'Statistical Finance', - 'q-fin.TR': 'Trading and Market Microstructure', - 'stat.AP': 'Applications', - 'stat.CO': 'Computation', - 'stat.ME': 'Methodology', - 'stat.ML': 'Machine Learning', - 'stat.OT': 'Other Statistics', - 'stat.TH': 'Statistics Theory'} diff --git a/spaces/krazyxki/V-1488abed/src/proxy/kobold.ts b/spaces/krazyxki/V-1488abed/src/proxy/kobold.ts deleted file mode 100644 index b2fa5ec53d2b55c56e3a90f08fa21a4fa6fa94e9..0000000000000000000000000000000000000000 --- a/spaces/krazyxki/V-1488abed/src/proxy/kobold.ts +++ /dev/null @@ -1,118 +0,0 @@ -/* Pretends to be a KoboldAI API endpoint and translates incoming Kobold -requests to OpenAI API equivalents. 
*/ - -import { Request, Response, Router } from "express"; -import http from "http"; -import { createProxyMiddleware } from "http-proxy-middleware"; -import util from "util"; -import zlib from "zlib"; -import { logger } from "../logger"; -import { - copyHttpHeaders, - handleDownstreamErrors, - handleInternalError, - incrementKeyUsage, -} from "./common"; -import { - addKey, - finalizeBody, - languageFilter, - limitOutputTokens, - transformKoboldPayload, -} from "./rewriters"; -import { injectMDReq } from "../proxy/middleware/request/md-request"; - -export const handleModelRequest = (_req: Request, res: Response) => { - res.status(200).json({ result: "Connected to OpenAI reverse proxy" }); -}; - -export const handleSoftPromptsRequest = (_req: Request, res: Response) => { - res.status(200).json({ soft_prompts_list: [] }); -}; - -const rewriteRequest = ( - proxyReq: http.ClientRequest, - req: Request, - res: Response -) => { - const rewriterPipeline = [ - addKey, - transformKoboldPayload, - languageFilter, - limitOutputTokens, - injectMDReq, - finalizeBody, - ]; - - try { - for (const rewriter of rewriterPipeline) { - rewriter(proxyReq, req, res, {}); - } - } catch (error) { - logger.error(error, "Error while executing proxy rewriter"); - proxyReq.destroy(error as Error); - } -}; - -const handleProxiedResponse = async ( - proxyRes: http.IncomingMessage, - req: Request, - res: Response -) => { - try { - await handleDownstreamErrors(proxyRes, req, res); - } catch (error) { - // Handler takes over the response, we're done here. - return; - } - incrementKeyUsage(req); - copyHttpHeaders(proxyRes, res); - - // For Kobold we need to consume the response body to turn it into a KoboldAI - // response payload. - let chunks: Buffer[] = []; - proxyRes.on("data", (chunk) => chunks.push(chunk)); - proxyRes.on("end", async () => { - let body = Buffer.concat(chunks); - const contentEncoding = proxyRes.headers["content-encoding"]; - - if (contentEncoding === "gzip") { - body = await util.promisify(zlib.gunzip)(body); - } else if (contentEncoding === "deflate") { - body = await util.promisify(zlib.inflate)(body); - } - - const response = JSON.parse(body.toString()); - - const koboldResponse = { - results: [{ text: response.choices[0].message.content }], - }; - res.status(200).json(koboldResponse); - }); -}; - -const koboldOaiProxy = createProxyMiddleware({ - target: "https://api.openai.com", - changeOrigin: true, - pathRewrite: { - "^/api/v1/generate": "/v1/chat/completions", - }, - on: { - proxyReq: rewriteRequest, - proxyRes: handleProxiedResponse, - error: handleInternalError, - }, - selfHandleResponse: true, - logger, -}); - -const koboldRouter = Router(); -koboldRouter.get("/api/v1/model", handleModelRequest); -koboldRouter.get("/api/v1/config/soft_prompts_list", handleSoftPromptsRequest); -koboldRouter.post("/api/v1/generate", koboldOaiProxy); -koboldRouter.use((req, res) => { - logger.warn(`Unhandled kobold request: ${req.method} ${req.path}`); - res.status(404).json({ error: "Not found" }); -}); - -export const kobold = koboldRouter; diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/macRes.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/macRes.py deleted file mode 100644 index f5a6cfe4789a351204d0ce6fa2abb5651487c5c0..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/macRes.py +++ /dev/null @@ -1,261 +0,0 @@ -from io import BytesIO 
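# --- editor's aside: illustrative sketch, not part of the original module -------------
# The kobold.ts proxy above (TypeScript) rewrites KoboldAI's /api/v1/generate to OpenAI's
# /v1/chat/completions and re-wraps the reply. The response wrapper below mirrors its
# handleProxiedResponse; the request-side field names (prompt, max_length, temperature)
# are assumptions, since the real transformKoboldPayload rewriter is not shown in this diff.
def kobold_request_to_openai(body: dict) -> dict:
    return {
        "model": "gpt-3.5-turbo",  # illustrative model name
        "messages": [{"role": "user", "content": body.get("prompt", "")}],
        "max_tokens": body.get("max_length", 80),
        "temperature": body.get("temperature", 0.7),
    }

def openai_response_to_kobold(response: dict) -> dict:
    return {"results": [{"text": response["choices"][0]["message"]["content"]}]}
# --- end of aside ----------------------------------------------------------------------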
-import struct -from fontTools.misc import sstruct -from fontTools.misc.textTools import bytesjoin, tostr -from collections import OrderedDict -from collections.abc import MutableMapping - - -class ResourceError(Exception): - pass - - -class ResourceReader(MutableMapping): - """Reader for Mac OS resource forks. - - Parses a resource fork and returns resources according to their type. - If run on OS X, this will open the resource fork in the filesystem. - Otherwise, it will open the file itself and attempt to read it as - though it were a resource fork. - - The returned object can be indexed by type and iterated over, - returning in each case a list of py:class:`Resource` objects - representing all the resources of a certain type. - - """ - - def __init__(self, fileOrPath): - """Open a file - - Args: - fileOrPath: Either an object supporting a ``read`` method, an - ``os.PathLike`` object, or a string. - """ - self._resources = OrderedDict() - if hasattr(fileOrPath, "read"): - self.file = fileOrPath - else: - try: - # try reading from the resource fork (only works on OS X) - self.file = self.openResourceFork(fileOrPath) - self._readFile() - return - except (ResourceError, IOError): - # if it fails, use the data fork - self.file = self.openDataFork(fileOrPath) - self._readFile() - - @staticmethod - def openResourceFork(path): - if hasattr(path, "__fspath__"): # support os.PathLike objects - path = path.__fspath__() - with open(path + "/..namedfork/rsrc", "rb") as resfork: - data = resfork.read() - infile = BytesIO(data) - infile.name = path - return infile - - @staticmethod - def openDataFork(path): - with open(path, "rb") as datafork: - data = datafork.read() - infile = BytesIO(data) - infile.name = path - return infile - - def _readFile(self): - self._readHeaderAndMap() - self._readTypeList() - - def _read(self, numBytes, offset=None): - if offset is not None: - try: - self.file.seek(offset) - except OverflowError: - raise ResourceError("Failed to seek offset ('offset' is too large)") - if self.file.tell() != offset: - raise ResourceError("Failed to seek offset (reached EOF)") - try: - data = self.file.read(numBytes) - except OverflowError: - raise ResourceError("Cannot read resource ('numBytes' is too large)") - if len(data) != numBytes: - raise ResourceError("Cannot read resource (not enough data)") - return data - - def _readHeaderAndMap(self): - self.file.seek(0) - headerData = self._read(ResourceForkHeaderSize) - sstruct.unpack(ResourceForkHeader, headerData, self) - # seek to resource map, skip reserved - mapOffset = self.mapOffset + 22 - resourceMapData = self._read(ResourceMapHeaderSize, mapOffset) - sstruct.unpack(ResourceMapHeader, resourceMapData, self) - self.absTypeListOffset = self.mapOffset + self.typeListOffset - self.absNameListOffset = self.mapOffset + self.nameListOffset - - def _readTypeList(self): - absTypeListOffset = self.absTypeListOffset - numTypesData = self._read(2, absTypeListOffset) - (self.numTypes,) = struct.unpack(">H", numTypesData) - absTypeListOffset2 = absTypeListOffset + 2 - for i in range(self.numTypes + 1): - resTypeItemOffset = absTypeListOffset2 + ResourceTypeItemSize * i - resTypeItemData = self._read(ResourceTypeItemSize, resTypeItemOffset) - item = sstruct.unpack(ResourceTypeItem, resTypeItemData) - resType = tostr(item["type"], encoding="mac-roman") - refListOffset = absTypeListOffset + item["refListOffset"] - numRes = item["numRes"] + 1 - resources = self._readReferenceList(resType, refListOffset, numRes) - self._resources[resType] = resources - 
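A minimal usage sketch for the ResourceReader above; the file path and the 'sfnt' resource type are placeholders, and on macOS the reader tries the resource fork first before falling back to the data fork:

```python
from fontTools.misc.macRes import ResourceReader

reader = ResourceReader("Example.dfont")   # a path or any object with a read() method
print(reader.types)                        # resource types found, e.g. ['sfnt', 'FOND']
for res in reader.get("sfnt", []):         # list of Resource objects of that type
    print(res.id, res.name, len(res.data))
reader.close()
```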
- def _readReferenceList(self, resType, refListOffset, numRes): - resources = [] - for i in range(numRes): - refOffset = refListOffset + ResourceRefItemSize * i - refData = self._read(ResourceRefItemSize, refOffset) - res = Resource(resType) - res.decompile(refData, self) - resources.append(res) - return resources - - def __getitem__(self, resType): - return self._resources[resType] - - def __delitem__(self, resType): - del self._resources[resType] - - def __setitem__(self, resType, resources): - self._resources[resType] = resources - - def __len__(self): - return len(self._resources) - - def __iter__(self): - return iter(self._resources) - - def keys(self): - return self._resources.keys() - - @property - def types(self): - """A list of the types of resources in the resource fork.""" - return list(self._resources.keys()) - - def countResources(self, resType): - """Return the number of resources of a given type.""" - try: - return len(self[resType]) - except KeyError: - return 0 - - def getIndices(self, resType): - """Returns a list of indices of resources of a given type.""" - numRes = self.countResources(resType) - if numRes: - return list(range(1, numRes + 1)) - else: - return [] - - def getNames(self, resType): - """Return list of names of all resources of a given type.""" - return [res.name for res in self.get(resType, []) if res.name is not None] - - def getIndResource(self, resType, index): - """Return resource of given type located at an index ranging from 1 - to the number of resources for that type, or None if not found. - """ - if index < 1: - return None - try: - res = self[resType][index - 1] - except (KeyError, IndexError): - return None - return res - - def getNamedResource(self, resType, name): - """Return the named resource of given type, else return None.""" - name = tostr(name, encoding="mac-roman") - for res in self.get(resType, []): - if res.name == name: - return res - return None - - def close(self): - if not self.file.closed: - self.file.close() - - -class Resource(object): - """Represents a resource stored within a resource fork. - - Attributes: - type: resource type. - data: resource data. - id: ID. - name: resource name. - attr: attributes. 
- """ - - def __init__( - self, resType=None, resData=None, resID=None, resName=None, resAttr=None - ): - self.type = resType - self.data = resData - self.id = resID - self.name = resName - self.attr = resAttr - - def decompile(self, refData, reader): - sstruct.unpack(ResourceRefItem, refData, self) - # interpret 3-byte dataOffset as (padded) ULONG to unpack it with struct - (self.dataOffset,) = struct.unpack(">L", bytesjoin([b"\0", self.dataOffset])) - absDataOffset = reader.dataOffset + self.dataOffset - (dataLength,) = struct.unpack(">L", reader._read(4, absDataOffset)) - self.data = reader._read(dataLength) - if self.nameOffset == -1: - return - absNameOffset = reader.absNameListOffset + self.nameOffset - (nameLength,) = struct.unpack("B", reader._read(1, absNameOffset)) - (name,) = struct.unpack(">%ss" % nameLength, reader._read(nameLength)) - self.name = tostr(name, encoding="mac-roman") - - -ResourceForkHeader = """ - > # big endian - dataOffset: L - mapOffset: L - dataLen: L - mapLen: L -""" - -ResourceForkHeaderSize = sstruct.calcsize(ResourceForkHeader) - -ResourceMapHeader = """ - > # big endian - attr: H - typeListOffset: H - nameListOffset: H -""" - -ResourceMapHeaderSize = sstruct.calcsize(ResourceMapHeader) - -ResourceTypeItem = """ - > # big endian - type: 4s - numRes: H - refListOffset: H -""" - -ResourceTypeItemSize = sstruct.calcsize(ResourceTypeItem) - -ResourceRefItem = """ - > # big endian - id: h - nameOffset: h - attr: B - dataOffset: 3s - reserved: L -""" - -ResourceRefItemSize = sstruct.calcsize(ResourceRefItem) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_v_h_e_a.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_v_h_e_a.py deleted file mode 100644 index 965674203db1b76cff23e3c640d4b7cadca5ae98..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_v_h_e_a.py +++ /dev/null @@ -1,126 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import safeEval -from fontTools.misc.fixedTools import ( - ensureVersionIsLong as fi2ve, - versionToFixed as ve2fi, -) -from . import DefaultTable -import math - - -vheaFormat = """ - > # big endian - tableVersion: L - ascent: h - descent: h - lineGap: h - advanceHeightMax: H - minTopSideBearing: h - minBottomSideBearing: h - yMaxExtent: h - caretSlopeRise: h - caretSlopeRun: h - caretOffset: h - reserved1: h - reserved2: h - reserved3: h - reserved4: h - metricDataFormat: h - numberOfVMetrics: H -""" - - -class table__v_h_e_a(DefaultTable.DefaultTable): - - # Note: Keep in sync with table__h_h_e_a - - dependencies = ["vmtx", "glyf", "CFF ", "CFF2"] - - def decompile(self, data, ttFont): - sstruct.unpack(vheaFormat, data, self) - - def compile(self, ttFont): - if ttFont.recalcBBoxes and ( - ttFont.isLoaded("glyf") - or ttFont.isLoaded("CFF ") - or ttFont.isLoaded("CFF2") - ): - self.recalc(ttFont) - self.tableVersion = fi2ve(self.tableVersion) - return sstruct.pack(vheaFormat, self) - - def recalc(self, ttFont): - if "vmtx" in ttFont: - vmtxTable = ttFont["vmtx"] - self.advanceHeightMax = max(adv for adv, _ in vmtxTable.metrics.values()) - - boundsHeightDict = {} - if "glyf" in ttFont: - glyfTable = ttFont["glyf"] - for name in ttFont.getGlyphOrder(): - g = glyfTable[name] - if g.numberOfContours == 0: - continue - if g.numberOfContours < 0 and not hasattr(g, "yMax"): - # Composite glyph without extents set. 
- # Calculate those. - g.recalcBounds(glyfTable) - boundsHeightDict[name] = g.yMax - g.yMin - elif "CFF " in ttFont or "CFF2" in ttFont: - if "CFF " in ttFont: - topDict = ttFont["CFF "].cff.topDictIndex[0] - else: - topDict = ttFont["CFF2"].cff.topDictIndex[0] - charStrings = topDict.CharStrings - for name in ttFont.getGlyphOrder(): - cs = charStrings[name] - bounds = cs.calcBounds(charStrings) - if bounds is not None: - boundsHeightDict[name] = int( - math.ceil(bounds[3]) - math.floor(bounds[1]) - ) - - if boundsHeightDict: - minTopSideBearing = float("inf") - minBottomSideBearing = float("inf") - yMaxExtent = -float("inf") - for name, boundsHeight in boundsHeightDict.items(): - advanceHeight, tsb = vmtxTable[name] - bsb = advanceHeight - tsb - boundsHeight - extent = tsb + boundsHeight - minTopSideBearing = min(minTopSideBearing, tsb) - minBottomSideBearing = min(minBottomSideBearing, bsb) - yMaxExtent = max(yMaxExtent, extent) - self.minTopSideBearing = minTopSideBearing - self.minBottomSideBearing = minBottomSideBearing - self.yMaxExtent = yMaxExtent - - else: # No glyph has outlines. - self.minTopSideBearing = 0 - self.minBottomSideBearing = 0 - self.yMaxExtent = 0 - - def toXML(self, writer, ttFont): - formatstring, names, fixes = sstruct.getformat(vheaFormat) - for name in names: - value = getattr(self, name) - if name == "tableVersion": - value = fi2ve(value) - value = "0x%08x" % value - writer.simpletag(name, value=value) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "tableVersion": - setattr(self, name, ve2fi(attrs["value"])) - return - setattr(self, name, safeEval(attrs["value"])) - - # reserved0 is caretOffset for legacy reasons - @property - def reserved0(self): - return self.caretOffset - - @reserved0.setter - def reserved0(self, value): - self.caretOffset = value diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/implementations/sftp.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/implementations/sftp.py deleted file mode 100644 index c08741774d727a86c746c8a11ba956542f9af231..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/implementations/sftp.py +++ /dev/null @@ -1,175 +0,0 @@ -import datetime -import logging -import os -import types -import uuid -from stat import S_ISDIR, S_ISLNK - -import paramiko - -from .. import AbstractFileSystem -from ..utils import infer_storage_options - -logger = logging.getLogger("fsspec.sftp") - - -class SFTPFileSystem(AbstractFileSystem): - """Files over SFTP/SSH - - Peer-to-peer filesystem over SSH using paramiko. - - Note: if using this with the ``open`` or ``open_files``, with full URLs, - there is no way to tell if a path is relative, so all paths are assumed - to be absolute. - """ - - protocol = "sftp", "ssh" - - def __init__(self, host, **ssh_kwargs): - """ - - Parameters - ---------- - host: str - Hostname or IP as a string - temppath: str - Location on the server to put files, when within a transaction - ssh_kwargs: dict - Parameters passed on to connection. See details in - http://docs.paramiko.org/en/2.4/api/client.html#paramiko.client.SSHClient.connect - May include port, username, password... 
- """ - if self._cached: - return - super(SFTPFileSystem, self).__init__(**ssh_kwargs) - self.temppath = ssh_kwargs.pop("temppath", "/tmp") # remote temp directory - self.host = host - self.ssh_kwargs = ssh_kwargs - self._connect() - - def _connect(self): - logger.debug("Connecting to SFTP server %s" % self.host) - self.client = paramiko.SSHClient() - self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) - self.client.connect(self.host, **self.ssh_kwargs) - self.ftp = self.client.open_sftp() - - @classmethod - def _strip_protocol(cls, path): - return infer_storage_options(path)["path"] - - @staticmethod - def _get_kwargs_from_urls(urlpath): - out = infer_storage_options(urlpath) - out.pop("path", None) - out.pop("protocol", None) - return out - - def mkdir(self, path, create_parents=False, mode=511): - logger.debug("Creating folder %s" % path) - if self.exists(path): - raise FileExistsError("File exists: {}".format(path)) - - if create_parents: - self.makedirs(path) - else: - self.ftp.mkdir(path, mode) - - def makedirs(self, path, exist_ok=False, mode=511): - if self.exists(path) and not exist_ok: - raise FileExistsError("File exists: {}".format(path)) - - parts = path.split("/") - path = "" - - for part in parts: - path += "/" + part - if not self.exists(path): - self.ftp.mkdir(path, mode) - - def rmdir(self, path): - logger.debug("Removing folder %s" % path) - self.ftp.rmdir(path) - - def info(self, path): - stat = self._decode_stat(self.ftp.stat(path)) - stat["name"] = path - return stat - - @staticmethod - def _decode_stat(stat, parent_path=None): - if S_ISDIR(stat.st_mode): - t = "directory" - elif S_ISLNK(stat.st_mode): - t = "link" - else: - t = "file" - out = { - "name": "", - "size": stat.st_size, - "type": t, - "uid": stat.st_uid, - "gid": stat.st_gid, - "time": datetime.datetime.utcfromtimestamp(stat.st_atime), - "mtime": datetime.datetime.utcfromtimestamp(stat.st_mtime), - } - if parent_path: - out["name"] = "/".join([parent_path.rstrip("/"), stat.filename]) - return out - - def ls(self, path, detail=False): - logger.debug("Listing folder %s" % path) - stats = [self._decode_stat(stat, path) for stat in self.ftp.listdir_iter(path)] - if detail: - return stats - else: - paths = [stat["name"] for stat in stats] - return sorted(paths) - - def put(self, lpath, rpath, callback=None, **kwargs): - logger.debug("Put file %s into %s" % (lpath, rpath)) - self.ftp.put(lpath, rpath) - - def get_file(self, rpath, lpath, **kwargs): - if self.isdir(rpath): - os.makedirs(lpath, exist_ok=True) - else: - self.ftp.get(self._strip_protocol(rpath), lpath) - - def _open(self, path, mode="rb", block_size=None, **kwargs): - """ - block_size: int or None - If 0, no buffering, if 1, line buffering, if >1, buffer that many - bytes, if None use default from paramiko. 
- """ - logger.debug("Opening file %s" % path) - if kwargs.get("autocommit", True) is False: - # writes to temporary file, move on commit - path2 = "/".join([self.temppath, str(uuid.uuid4())]) - f = self.ftp.open(path2, mode, bufsize=block_size if block_size else -1) - f.temppath = path2 - f.targetpath = path - f.fs = self - f.commit = types.MethodType(commit_a_file, f) - f.discard = types.MethodType(discard_a_file, f) - else: - f = self.ftp.open(path, mode, bufsize=block_size if block_size else -1) - return f - - def _rm(self, path): - if self.isdir(path): - self.ftp.rmdir(path) - else: - self.ftp.remove(path) - - def mv(self, old, new): - logger.debug("Renaming %s into %s" % (old, new)) - self.ftp.posix_rename(old, new) - - -def commit_a_file(self): - self.fs.mv(self.temppath, self.targetpath) - - -def discard_a_file(self): - self.fs._rm(self.temppath) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpcore/backends/auto.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpcore/backends/auto.py deleted file mode 100644 index f4766ab5197990506dac99b5132f22991ad6582b..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpcore/backends/auto.py +++ /dev/null @@ -1,52 +0,0 @@ -import typing -from typing import Optional - -import sniffio - -from .base import SOCKET_OPTION, AsyncNetworkBackend, AsyncNetworkStream - - -class AutoBackend(AsyncNetworkBackend): - async def _init_backend(self) -> None: - if not (hasattr(self, "_backend")): - backend = sniffio.current_async_library() - if backend == "trio": - from .trio import TrioBackend - - self._backend: AsyncNetworkBackend = TrioBackend() - else: - from .asyncio import AsyncIOBackend - - self._backend = AsyncIOBackend() - - async def connect_tcp( - self, - host: str, - port: int, - timeout: Optional[float] = None, - local_address: Optional[str] = None, - socket_options: typing.Optional[typing.Iterable[SOCKET_OPTION]] = None, - ) -> AsyncNetworkStream: - await self._init_backend() - return await self._backend.connect_tcp( - host, - port, - timeout=timeout, - local_address=local_address, - socket_options=socket_options, - ) - - async def connect_unix_socket( - self, - path: str, - timeout: Optional[float] = None, - socket_options: typing.Optional[typing.Iterable[SOCKET_OPTION]] = None, - ) -> AsyncNetworkStream: # pragma: nocover - await self._init_backend() - return await self._backend.connect_unix_socket( - path, timeout=timeout, socket_options=socket_options - ) - - async def sleep(self, seconds: float) -> None: # pragma: nocover - await self._init_backend() - return await self._backend.sleep(seconds) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/tests/test_backend_nbagg.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/tests/test_backend_nbagg.py deleted file mode 100644 index 4ebf3e1f56d117895388e709cbdefec4f98bd5e6..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/tests/test_backend_nbagg.py +++ /dev/null @@ -1,30 +0,0 @@ -import os -from pathlib import Path -import subprocess -from tempfile import TemporaryDirectory - -import pytest - -nbformat = pytest.importorskip('nbformat') -pytest.importorskip('nbconvert') -pytest.importorskip('ipykernel') - -# From https://blog.thedataincubator.com/2016/06/testing-jupyter-notebooks/ - - -def test_ipynb(): - nb_path 
= Path(__file__).parent / 'test_nbagg_01.ipynb' - - with TemporaryDirectory() as tmpdir: - out_path = Path(tmpdir, "out.ipynb") - subprocess.check_call( - ["jupyter", "nbconvert", "--to", "notebook", - "--execute", "--ExecutePreprocessor.timeout=500", - "--output", str(out_path), str(nb_path)], - env={**os.environ, "IPYTHONDIR": tmpdir}) - with out_path.open() as out: - nb = nbformat.read(out, nbformat.current_nbformat) - - errors = [output for cell in nb.cells for output in cell.get("outputs", []) - if output.output_type == "error"] - assert not errors diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/models/network_vrt.py b/spaces/lambdalabs/LambdaSuperRes/KAIR/models/network_vrt.py deleted file mode 100644 index 4419633b3c1f6ff1dfcc5786f4e5a3ca07cc10be..0000000000000000000000000000000000000000 --- a/spaces/lambdalabs/LambdaSuperRes/KAIR/models/network_vrt.py +++ /dev/null @@ -1,1564 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the BSD license found in the -# LICENSE file in the root directory of this source tree. - - -import os -import warnings -import math -import torch -import torch.nn as nn -import torchvision -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from distutils.version import LooseVersion -from torch.nn.modules.utils import _pair, _single -import numpy as np -from functools import reduce, lru_cache -from operator import mul -from einops import rearrange -from einops.layers.torch import Rearrange - - -class ModulatedDeformConv(nn.Module): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=True): - super(ModulatedDeformConv, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = stride - self.padding = padding - self.dilation = dilation - self.groups = groups - self.deformable_groups = deformable_groups - self.with_bias = bias - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // groups, *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.register_parameter('bias', None) - self.init_weights() - - def init_weights(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. / math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - if self.bias is not None: - self.bias.data.zero_() - - # def forward(self, x, offset, mask): - # return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation, - # self.groups, self.deformable_groups) - - -class ModulatedDeformConvPack(ModulatedDeformConv): - """A ModulatedDeformable Conv Encapsulation that acts as normal Conv layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. 
- """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(ModulatedDeformConvPack, self).__init__(*args, **kwargs) - - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deformable_groups * 3 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_weights() - - def init_weights(self): - super(ModulatedDeformConvPack, self).init_weights() - if hasattr(self, 'conv_offset'): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - # def forward(self, x): - # out = self.conv_offset(x) - # o1, o2, mask = torch.chunk(out, 3, dim=1) - # offset = torch.cat((o1, o2), dim=1) - # mask = torch.sigmoid(mask) - # return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation, - # self.groups, self.deformable_groups) - - -def _no_grad_trunc_normal_(tensor, mean, std, a, b): - # From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/weight_init.py - # Cut & paste from PyTorch official master until it's in a few official releases - RW - # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn( - 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. ' - 'The distribution of values may be incorrect.', - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - low = norm_cdf((a - mean) / std) - up = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [low, up], then translate to - # [2l-1, 2u-1]. - tensor.uniform_(2 * low - 1, 2 * up - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.): - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. - - From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/weight_init.py - - The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. - - Args: - tensor: an n-dimensional `torch.Tensor` - mean: the mean of the normal distribution - std: the standard deviation of the normal distribution - a: the minimum cutoff value - b: the maximum cutoff value - - Examples: - >>> w = torch.empty(3, 5) - >>> nn.init.trunc_normal_(w) - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) - - -def drop_path(x, drop_prob: float = 0., training: bool = False): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/drop.py - """ - if drop_prob == 0. 
or not training: - return x - keep_prob = 1 - drop_prob - shape = (x.shape[0], ) + (1, ) * (x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(keep_prob) * random_tensor - return output - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - From: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/drop.py - """ - - def __init__(self, drop_prob=None): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -def flow_warp(x, flow, interp_mode='bilinear', padding_mode='zeros', align_corners=True, use_pad_mask=False): - """Warp an image or feature map with optical flow. - - Args: - x (Tensor): Tensor with size (n, c, h, w). - flow (Tensor): Tensor with size (n, h, w, 2), normal value. - interp_mode (str): 'nearest' or 'bilinear' or 'nearest4'. Default: 'bilinear'. - padding_mode (str): 'zeros' or 'border' or 'reflection'. - Default: 'zeros'. - align_corners (bool): Before pytorch 1.3, the default value is - align_corners=True. After pytorch 1.3, the default value is - align_corners=False. Here, we use the True as default. - use_pad_mask (bool): only used for PWCNet, x is first padded with ones along the channel dimension. - The mask is generated according to the grid_sample results of the padded dimension. - - - Returns: - Tensor: Warped image or feature map. - """ - # assert x.size()[-2:] == flow.size()[1:3] # temporaily turned off for image-wise shift - n, _, h, w = x.size() - # create mesh grid - # grid_y, grid_x = torch.meshgrid(torch.arange(0, h).type_as(x), torch.arange(0, w).type_as(x)) # an illegal memory access on TITAN RTX + PyTorch1.9.1 - grid_y, grid_x = torch.meshgrid(torch.arange(0, h, dtype=x.dtype, device=x.device), torch.arange(0, w, dtype=x.dtype, device=x.device)) - grid = torch.stack((grid_x, grid_y), 2).float() # W(x), H(y), 2 - grid.requires_grad = False - - vgrid = grid + flow - - # if use_pad_mask: # for PWCNet - # x = F.pad(x, (0,0,0,0,0,1), mode='constant', value=1) - - # scale grid to [-1,1] - if interp_mode == 'nearest4': # todo: bug, no gradient for flow model in this case!!! 
but the result is good - vgrid_x_floor = 2.0 * torch.floor(vgrid[:, :, :, 0]) / max(w - 1, 1) - 1.0 - vgrid_x_ceil = 2.0 * torch.ceil(vgrid[:, :, :, 0]) / max(w - 1, 1) - 1.0 - vgrid_y_floor = 2.0 * torch.floor(vgrid[:, :, :, 1]) / max(h - 1, 1) - 1.0 - vgrid_y_ceil = 2.0 * torch.ceil(vgrid[:, :, :, 1]) / max(h - 1, 1) - 1.0 - - output00 = F.grid_sample(x, torch.stack((vgrid_x_floor, vgrid_y_floor), dim=3), mode='nearest', padding_mode=padding_mode, align_corners=align_corners) - output01 = F.grid_sample(x, torch.stack((vgrid_x_floor, vgrid_y_ceil), dim=3), mode='nearest', padding_mode=padding_mode, align_corners=align_corners) - output10 = F.grid_sample(x, torch.stack((vgrid_x_ceil, vgrid_y_floor), dim=3), mode='nearest', padding_mode=padding_mode, align_corners=align_corners) - output11 = F.grid_sample(x, torch.stack((vgrid_x_ceil, vgrid_y_ceil), dim=3), mode='nearest', padding_mode=padding_mode, align_corners=align_corners) - - return torch.cat([output00, output01, output10, output11], 1) - - else: - vgrid_x = 2.0 * vgrid[:, :, :, 0] / max(w - 1, 1) - 1.0 - vgrid_y = 2.0 * vgrid[:, :, :, 1] / max(h - 1, 1) - 1.0 - vgrid_scaled = torch.stack((vgrid_x, vgrid_y), dim=3) - output = F.grid_sample(x, vgrid_scaled, mode=interp_mode, padding_mode=padding_mode, align_corners=align_corners) - - # if use_pad_mask: # for PWCNet - # output = _flow_warp_masking(output) - - # TODO, what if align_corners=False - return output - - -class DCNv2PackFlowGuided(ModulatedDeformConvPack): - """Flow-guided deformable alignment module. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - max_residue_magnitude (int): The maximum magnitude of the offset residue. Default: 10. - pa_frames (int): The number of parallel warping frames. Default: 2. - - Ref: - BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment. 
- - """ - - def __init__(self, *args, **kwargs): - self.max_residue_magnitude = kwargs.pop('max_residue_magnitude', 10) - self.pa_frames = kwargs.pop('pa_frames', 2) - - super(DCNv2PackFlowGuided, self).__init__(*args, **kwargs) - - self.conv_offset = nn.Sequential( - nn.Conv2d((1+self.pa_frames//2) * self.in_channels + self.pa_frames, self.out_channels, 3, 1, 1), - nn.LeakyReLU(negative_slope=0.1, inplace=True), - nn.Conv2d(self.out_channels, self.out_channels, 3, 1, 1), - nn.LeakyReLU(negative_slope=0.1, inplace=True), - nn.Conv2d(self.out_channels, self.out_channels, 3, 1, 1), - nn.LeakyReLU(negative_slope=0.1, inplace=True), - nn.Conv2d(self.out_channels, 3 * 9 * self.deformable_groups, 3, 1, 1), - ) - - self.init_offset() - - def init_offset(self): - super(ModulatedDeformConvPack, self).init_weights() - if hasattr(self, 'conv_offset'): - self.conv_offset[-1].weight.data.zero_() - self.conv_offset[-1].bias.data.zero_() - - def forward(self, x, x_flow_warpeds, x_current, flows): - out = self.conv_offset(torch.cat(x_flow_warpeds + [x_current] + flows, dim=1)) - o1, o2, mask = torch.chunk(out, 3, dim=1) - - # offset - offset = self.max_residue_magnitude * torch.tanh(torch.cat((o1, o2), dim=1)) - if self.pa_frames == 2: - offset = offset + flows[0].flip(1).repeat(1, offset.size(1)//2, 1, 1) - elif self.pa_frames == 4: - offset1, offset2 = torch.chunk(offset, 2, dim=1) - offset1 = offset1 + flows[0].flip(1).repeat(1, offset1.size(1) // 2, 1, 1) - offset2 = offset2 + flows[1].flip(1).repeat(1, offset2.size(1) // 2, 1, 1) - offset = torch.cat([offset1, offset2], dim=1) - elif self.pa_frames == 6: - offset = self.max_residue_magnitude * torch.tanh(torch.cat((o1, o2), dim=1)) - offset1, offset2, offset3 = torch.chunk(offset, 3, dim=1) - offset1 = offset1 + flows[0].flip(1).repeat(1, offset1.size(1) // 2, 1, 1) - offset2 = offset2 + flows[1].flip(1).repeat(1, offset2.size(1) // 2, 1, 1) - offset3 = offset3 + flows[2].flip(1).repeat(1, offset3.size(1) // 2, 1, 1) - offset = torch.cat([offset1, offset2, offset3], dim=1) - - # mask - mask = torch.sigmoid(mask) - - return torchvision.ops.deform_conv2d(x, offset, self.weight, self.bias, self.stride, self.padding, - self.dilation, mask) - - -class BasicModule(nn.Module): - """Basic Module for SpyNet. - """ - - def __init__(self): - super(BasicModule, self).__init__() - - self.basic_module = nn.Sequential( - nn.Conv2d(in_channels=8, out_channels=32, kernel_size=7, stride=1, padding=3), nn.ReLU(inplace=False), - nn.Conv2d(in_channels=32, out_channels=64, kernel_size=7, stride=1, padding=3), nn.ReLU(inplace=False), - nn.Conv2d(in_channels=64, out_channels=32, kernel_size=7, stride=1, padding=3), nn.ReLU(inplace=False), - nn.Conv2d(in_channels=32, out_channels=16, kernel_size=7, stride=1, padding=3), nn.ReLU(inplace=False), - nn.Conv2d(in_channels=16, out_channels=2, kernel_size=7, stride=1, padding=3)) - - def forward(self, tensor_input): - return self.basic_module(tensor_input) - - -class SpyNet(nn.Module): - """SpyNet architecture. - - Args: - load_path (str): path for pretrained SpyNet. Default: None. - return_levels (list[int]): return flows of different levels. Default: [5]. 
- """ - - def __init__(self, load_path=None, return_levels=[5]): - super(SpyNet, self).__init__() - self.return_levels = return_levels - self.basic_module = nn.ModuleList([BasicModule() for _ in range(6)]) - if load_path: - if not os.path.exists(load_path): - import requests - url = 'https://github.com/JingyunLiang/VRT/releases/download/v0.0/spynet_sintel_final-3d2a1287.pth' - r = requests.get(url, allow_redirects=True) - print(f'downloading SpyNet pretrained model from {url}') - os.makedirs(os.path.dirname(load_path), exist_ok=True) - open(load_path, 'wb').write(r.content) - - self.load_state_dict(torch.load(load_path, map_location=lambda storage, loc: storage)['params']) - - self.register_buffer('mean', torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)) - self.register_buffer('std', torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)) - - def preprocess(self, tensor_input): - tensor_output = (tensor_input - self.mean) / self.std - return tensor_output - - def process(self, ref, supp, w, h, w_floor, h_floor): - flow_list = [] - - ref = [self.preprocess(ref)] - supp = [self.preprocess(supp)] - - for level in range(5): - ref.insert(0, F.avg_pool2d(input=ref[0], kernel_size=2, stride=2, count_include_pad=False)) - supp.insert(0, F.avg_pool2d(input=supp[0], kernel_size=2, stride=2, count_include_pad=False)) - - flow = ref[0].new_zeros( - [ref[0].size(0), 2, - int(math.floor(ref[0].size(2) / 2.0)), - int(math.floor(ref[0].size(3) / 2.0))]) - - for level in range(len(ref)): - upsampled_flow = F.interpolate(input=flow, scale_factor=2, mode='bilinear', align_corners=True) * 2.0 - - if upsampled_flow.size(2) != ref[level].size(2): - upsampled_flow = F.pad(input=upsampled_flow, pad=[0, 0, 0, 1], mode='replicate') - if upsampled_flow.size(3) != ref[level].size(3): - upsampled_flow = F.pad(input=upsampled_flow, pad=[0, 1, 0, 0], mode='replicate') - - flow = self.basic_module[level](torch.cat([ - ref[level], - flow_warp( - supp[level], upsampled_flow.permute(0, 2, 3, 1), interp_mode='bilinear', padding_mode='border'), - upsampled_flow - ], 1)) + upsampled_flow - - if level in self.return_levels: - scale = 2**(5-level) # level=5 (scale=1), level=4 (scale=2), level=3 (scale=4), level=2 (scale=8) - flow_out = F.interpolate(input=flow, size=(h//scale, w//scale), mode='bilinear', align_corners=False) - flow_out[:, 0, :, :] *= float(w//scale) / float(w_floor//scale) - flow_out[:, 1, :, :] *= float(h//scale) / float(h_floor//scale) - flow_list.insert(0, flow_out) - - return flow_list - - def forward(self, ref, supp): - assert ref.size() == supp.size() - - h, w = ref.size(2), ref.size(3) - w_floor = math.floor(math.ceil(w / 32.0) * 32.0) - h_floor = math.floor(math.ceil(h / 32.0) * 32.0) - - ref = F.interpolate(input=ref, size=(h_floor, w_floor), mode='bilinear', align_corners=False) - supp = F.interpolate(input=supp, size=(h_floor, w_floor), mode='bilinear', align_corners=False) - - flow_list = self.process(ref, supp, w, h, w_floor, h_floor) - - return flow_list[0] if len(flow_list) == 1 else flow_list - - -def window_partition(x, window_size): - """ Partition the input into windows. Attention will be conducted within the windows. 
- - Args: - x: (B, D, H, W, C) - window_size (tuple[int]): window size - - Returns: - windows: (B*num_windows, window_size*window_size, C) - """ - B, D, H, W, C = x.shape - x = x.view(B, D // window_size[0], window_size[0], H // window_size[1], window_size[1], W // window_size[2], - window_size[2], C) - windows = x.permute(0, 1, 3, 5, 2, 4, 6, 7).contiguous().view(-1, reduce(mul, window_size), C) - - return windows - - -def window_reverse(windows, window_size, B, D, H, W): - """ Reverse windows back to the original input. Attention was conducted within the windows. - - Args: - windows: (B*num_windows, window_size, window_size, C) - window_size (tuple[int]): Window size - H (int): Height of image - W (int): Width of image - - Returns: - x: (B, D, H, W, C) - """ - x = windows.view(B, D // window_size[0], H // window_size[1], W // window_size[2], window_size[0], window_size[1], - window_size[2], -1) - x = x.permute(0, 1, 4, 2, 5, 3, 6, 7).contiguous().view(B, D, H, W, -1) - - return x - - -def get_window_size(x_size, window_size, shift_size=None): - """ Get the window size and the shift size """ - - use_window_size = list(window_size) - if shift_size is not None: - use_shift_size = list(shift_size) - for i in range(len(x_size)): - if x_size[i] <= window_size[i]: - use_window_size[i] = x_size[i] - if shift_size is not None: - use_shift_size[i] = 0 - - if shift_size is None: - return tuple(use_window_size) - else: - return tuple(use_window_size), tuple(use_shift_size) - - -@lru_cache() -def compute_mask(D, H, W, window_size, shift_size, device): - """ Compute attnetion mask for input of size (D, H, W). @lru_cache caches each stage results. """ - - img_mask = torch.zeros((1, D, H, W, 1), device=device) # 1 Dp Hp Wp 1 - cnt = 0 - for d in slice(-window_size[0]), slice(-window_size[0], -shift_size[0]), slice(-shift_size[0], None): - for h in slice(-window_size[1]), slice(-window_size[1], -shift_size[1]), slice(-shift_size[1], None): - for w in slice(-window_size[2]), slice(-window_size[2], -shift_size[2]), slice(-shift_size[2], None): - img_mask[:, d, h, w, :] = cnt - cnt += 1 - mask_windows = window_partition(img_mask, window_size) # nW, ws[0]*ws[1]*ws[2], 1 - mask_windows = mask_windows.squeeze(-1) # nW, ws[0]*ws[1]*ws[2] - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - return attn_mask - - -class Upsample(nn.Sequential): - """Upsample module for video SR. - - Args: - scale (int): Scale factor. Supported scales: 2^n and 3. - num_feat (int): Channel number of intermediate features. - """ - - def __init__(self, scale, num_feat): - assert LooseVersion(torch.__version__) >= LooseVersion('1.8.1'), \ - 'PyTorch version >= 1.8.1 to support 5D PixelShuffle.' 
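A quick shape check for window_partition / window_reverse defined above, using toy sizes that divide evenly into the window; the import path follows this repository's layout (KAIR/models/network_vrt.py) and is an assumption:

```python
import torch

from models.network_vrt import window_partition, window_reverse  # assumed import path

B, D, H, W, C = 1, 4, 16, 16, 32
window_size = (2, 8, 8)                     # must divide (D, H, W) evenly in this sketch

x = torch.randn(B, D, H, W, C)
windows = window_partition(x, window_size)  # (B*num_windows, 2*8*8, C) == torch.Size([8, 128, 32])
y = window_reverse(windows, window_size, B, D, H, W)
assert torch.equal(x, y)                    # partitioning is exactly invertible
```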
- - class Transpose_Dim12(nn.Module): - """ Transpose Dim1 and Dim2 of a tensor.""" - - def __init__(self): - super().__init__() - - def forward(self, x): - return x.transpose(1, 2) - - m = [] - if (scale & (scale - 1)) == 0: # scale = 2^n - for _ in range(int(math.log(scale, 2))): - m.append(nn.Conv3d(num_feat, 4 * num_feat, kernel_size=(1, 3, 3), padding=(0, 1, 1))) - m.append(Transpose_Dim12()) - m.append(nn.PixelShuffle(2)) - m.append(Transpose_Dim12()) - m.append(nn.LeakyReLU(negative_slope=0.1, inplace=True)) - m.append(nn.Conv3d(num_feat, num_feat, kernel_size=(1, 3, 3), padding=(0, 1, 1))) - elif scale == 3: - m.append(nn.Conv3d(num_feat, 9 * num_feat, kernel_size=(1, 3, 3), padding=(0, 1, 1))) - m.append(Transpose_Dim12()) - m.append(nn.PixelShuffle(3)) - m.append(Transpose_Dim12()) - m.append(nn.LeakyReLU(negative_slope=0.1, inplace=True)) - m.append(nn.Conv3d(num_feat, num_feat, kernel_size=(1, 3, 3), padding=(0, 1, 1))) - else: - raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.') - super(Upsample, self).__init__(*m) - - -class Mlp_GEGLU(nn.Module): - """ Multilayer perceptron with gated linear unit (GEGLU). Ref. "GLU Variants Improve Transformer". - - Args: - x: (B, D, H, W, C) - - Returns: - x: (B, D, H, W, C) - """ - - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - - self.fc11 = nn.Linear(in_features, hidden_features) - self.fc12 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.act(self.fc11(x)) * self.fc12(x) - x = self.drop(x) - x = self.fc2(x) - - return x - - -class WindowAttention(nn.Module): - """ Window based multi-head mutual attention and self attention. - - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The temporal length, height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - mut_attn (bool): If True, add mutual attention to the module. 
Default: True - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=False, qk_scale=None, mut_attn=True): - super().__init__() - self.dim = dim - self.window_size = window_size - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - self.mut_attn = mut_attn - - # self attention with relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1) * (2 * window_size[2] - 1), - num_heads)) # 2*Wd-1 * 2*Wh-1 * 2*Ww-1, nH - self.register_buffer("relative_position_index", self.get_position_index(window_size)) - self.qkv_self = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.proj = nn.Linear(dim, dim) - - # mutual attention with sine position encoding - if self.mut_attn: - self.register_buffer("position_bias", - self.get_sine_position_encoding(window_size[1:], dim // 2, normalize=True)) - self.qkv_mut = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.proj = nn.Linear(2 * dim, dim) - - self.softmax = nn.Softmax(dim=-1) - trunc_normal_(self.relative_position_bias_table, std=.02) - - def forward(self, x, mask=None): - """ Forward function. - - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, N, N) or None - """ - - # self attention - B_, N, C = x.shape - qkv = self.qkv_self(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # B_, nH, N, C - x_out = self.attention(q, k, v, mask, (B_, N, C), relative_position_encoding=True) - - # mutual attention - if self.mut_attn: - qkv = self.qkv_mut(x + self.position_bias.repeat(1, 2, 1)).reshape(B_, N, 3, self.num_heads, - C // self.num_heads).permute(2, 0, 3, 1, - 4) - (q1, q2), (k1, k2), (v1, v2) = torch.chunk(qkv[0], 2, dim=2), torch.chunk(qkv[1], 2, dim=2), torch.chunk( - qkv[2], 2, dim=2) # B_, nH, N/2, C - x1_aligned = self.attention(q2, k1, v1, mask, (B_, N // 2, C), relative_position_encoding=False) - x2_aligned = self.attention(q1, k2, v2, mask, (B_, N // 2, C), relative_position_encoding=False) - x_out = torch.cat([torch.cat([x1_aligned, x2_aligned], 1), x_out], 2) - - # projection - x = self.proj(x_out) - - return x - - def attention(self, q, k, v, mask, x_shape, relative_position_encoding=True): - B_, N, C = x_shape - attn = (q * self.scale) @ k.transpose(-2, -1) - - if relative_position_encoding: - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index[:N, :N].reshape(-1)].reshape(N, N, -1) # Wd*Wh*Ww, Wd*Wh*Ww,nH - attn = attn + relative_position_bias.permute(2, 0, 1).unsqueeze(0) # B_, nH, N, N - - if mask is None: - attn = self.softmax(attn) - else: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask[:, :N, :N].unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - - return x - - def get_position_index(self, window_size): - ''' Get pair-wise relative position index for each token inside the window. 
''' - - coords_d = torch.arange(window_size[0]) - coords_h = torch.arange(window_size[1]) - coords_w = torch.arange(window_size[2]) - coords = torch.stack(torch.meshgrid(coords_d, coords_h, coords_w)) # 3, Wd, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 3, Wd*Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 3, Wd*Wh*Ww, Wd*Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wd*Wh*Ww, Wd*Wh*Ww, 3 - relative_coords[:, :, 0] += window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += window_size[1] - 1 - relative_coords[:, :, 2] += window_size[2] - 1 - - relative_coords[:, :, 0] *= (2 * window_size[1] - 1) * (2 * window_size[2] - 1) - relative_coords[:, :, 1] *= (2 * window_size[2] - 1) - relative_position_index = relative_coords.sum(-1) # Wd*Wh*Ww, Wd*Wh*Ww - - return relative_position_index - - def get_sine_position_encoding(self, HW, num_pos_feats=64, temperature=10000, normalize=False, scale=None): - """ Get sine position encoding """ - - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - - if scale is None: - scale = 2 * math.pi - - not_mask = torch.ones([1, HW[0], HW[1]]) - y_embed = not_mask.cumsum(1, dtype=torch.float32) - x_embed = not_mask.cumsum(2, dtype=torch.float32) - if normalize: - eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * scale - - dim_t = torch.arange(num_pos_feats, dtype=torch.float32) - dim_t = temperature ** (2 * (dim_t // 2) / num_pos_feats) - - # BxCxHxW - pos_x = x_embed[:, :, :, None] / dim_t - pos_y = y_embed[:, :, :, None] / dim_t - pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3) - pos_y = torch.stack((pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3) - pos_embed = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - - return pos_embed.flatten(2).permute(0, 2, 1).contiguous() - - -class TMSA(nn.Module): - """ Temporal Mutual Self Attention (TMSA). - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - num_heads (int): Number of attention heads. - window_size (tuple[int]): Window size. - shift_size (tuple[int]): Shift size for mutual and self attention. - mut_attn (bool): If True, use mutual and self attention. Default: True. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True. - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop_path (float, optional): Stochastic depth rate. Default: 0.0. - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm. - use_checkpoint_attn (bool): If True, use torch.checkpoint for attention modules. Default: False. - use_checkpoint_ffn (bool): If True, use torch.checkpoint for feed-forward modules. Default: False. 
- """ - - def __init__(self, - dim, - input_resolution, - num_heads, - window_size=(6, 8, 8), - shift_size=(0, 0, 0), - mut_attn=True, - mlp_ratio=2., - qkv_bias=True, - qk_scale=None, - drop_path=0., - act_layer=nn.GELU, - norm_layer=nn.LayerNorm, - use_checkpoint_attn=False, - use_checkpoint_ffn=False - ): - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.use_checkpoint_attn = use_checkpoint_attn - self.use_checkpoint_ffn = use_checkpoint_ffn - - assert 0 <= self.shift_size[0] < self.window_size[0], "shift_size must in 0-window_size" - assert 0 <= self.shift_size[1] < self.window_size[1], "shift_size must in 0-window_size" - assert 0 <= self.shift_size[2] < self.window_size[2], "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention(dim, window_size=self.window_size, num_heads=num_heads, qkv_bias=qkv_bias, - qk_scale=qk_scale, mut_attn=mut_attn) - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - self.mlp = Mlp_GEGLU(in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer) - - def forward_part1(self, x, mask_matrix): - B, D, H, W, C = x.shape - window_size, shift_size = get_window_size((D, H, W), self.window_size, self.shift_size) - - x = self.norm1(x) - - # pad feature maps to multiples of window size - pad_l = pad_t = pad_d0 = 0 - pad_d1 = (window_size[0] - D % window_size[0]) % window_size[0] - pad_b = (window_size[1] - H % window_size[1]) % window_size[1] - pad_r = (window_size[2] - W % window_size[2]) % window_size[2] - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b, pad_d0, pad_d1), mode='constant') - - _, Dp, Hp, Wp, _ = x.shape - # cyclic shift - if any(i > 0 for i in shift_size): - shifted_x = torch.roll(x, shifts=(-shift_size[0], -shift_size[1], -shift_size[2]), dims=(1, 2, 3)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition(shifted_x, window_size) # B*nW, Wd*Wh*Ww, C - - # attention / shifted attention - attn_windows = self.attn(x_windows, mask=attn_mask) # B*nW, Wd*Wh*Ww, C - - # merge windows - attn_windows = attn_windows.view(-1, *(window_size + (C,))) - shifted_x = window_reverse(attn_windows, window_size, B, Dp, Hp, Wp) # B D' H' W' C - - # reverse cyclic shift - if any(i > 0 for i in shift_size): - x = torch.roll(shifted_x, shifts=(shift_size[0], shift_size[1], shift_size[2]), dims=(1, 2, 3)) - else: - x = shifted_x - - if pad_d1 > 0 or pad_r > 0 or pad_b > 0: - x = x[:, :D, :H, :W, :] - - x = self.drop_path(x) - - return x - - def forward_part2(self, x): - return self.drop_path(self.mlp(self.norm2(x))) - - def forward(self, x, mask_matrix): - """ Forward function. - - Args: - x: Input feature, tensor size (B, D, H, W, C). - mask_matrix: Attention mask for cyclic shift. - """ - - # attention - if self.use_checkpoint_attn: - x = x + checkpoint.checkpoint(self.forward_part1, x, mask_matrix) - else: - x = x + self.forward_part1(x, mask_matrix) - - # feed-forward - if self.use_checkpoint_ffn: - x = x + checkpoint.checkpoint(self.forward_part2, x) - else: - x = x + self.forward_part2(x) - - return x - - -class TMSAG(nn.Module): - """ Temporal Mutual Self Attention Group (TMSAG). - - Args: - dim (int): Number of feature channels - input_resolution (tuple[int]): Input resolution. - depth (int): Depths of this stage. 
- num_heads (int): Number of attention head. - window_size (tuple[int]): Local window size. Default: (6,8,8). - shift_size (tuple[int]): Shift size for mutual and self attention. Default: None. - mut_attn (bool): If True, use mutual and self attention. Default: True. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 2. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - use_checkpoint_attn (bool): If True, use torch.checkpoint for attention modules. Default: False. - use_checkpoint_ffn (bool): If True, use torch.checkpoint for feed-forward modules. Default: False. - """ - - def __init__(self, - dim, - input_resolution, - depth, - num_heads, - window_size=[6, 8, 8], - shift_size=None, - mut_attn=True, - mlp_ratio=2., - qkv_bias=False, - qk_scale=None, - drop_path=0., - norm_layer=nn.LayerNorm, - use_checkpoint_attn=False, - use_checkpoint_ffn=False - ): - super().__init__() - self.input_resolution = input_resolution - self.window_size = window_size - self.shift_size = list(i // 2 for i in window_size) if shift_size is None else shift_size - - # build blocks - self.blocks = nn.ModuleList([ - TMSA( - dim=dim, - input_resolution=input_resolution, - num_heads=num_heads, - window_size=window_size, - shift_size=[0, 0, 0] if i % 2 == 0 else self.shift_size, - mut_attn=mut_attn, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer, - use_checkpoint_attn=use_checkpoint_attn, - use_checkpoint_ffn=use_checkpoint_ffn - ) - for i in range(depth)]) - - def forward(self, x): - """ Forward function. - - Args: - x: Input feature, tensor size (B, C, D, H, W). - """ - # calculate attention mask for attention - B, C, D, H, W = x.shape - window_size, shift_size = get_window_size((D, H, W), self.window_size, self.shift_size) - x = rearrange(x, 'b c d h w -> b d h w c') - Dp = int(np.ceil(D / window_size[0])) * window_size[0] - Hp = int(np.ceil(H / window_size[1])) * window_size[1] - Wp = int(np.ceil(W / window_size[2])) * window_size[2] - attn_mask = compute_mask(Dp, Hp, Wp, window_size, shift_size, x.device) - - for blk in self.blocks: - x = blk(x, attn_mask) - - x = x.view(B, D, H, W, -1) - x = rearrange(x, 'b d h w c -> b c d h w') - - return x - - -class RTMSA(nn.Module): - """ Residual Temporal Mutual Self Attention (RTMSA). Only used in stage 8. - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - num_heads (int): Number of attention heads. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True. - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm. - use_checkpoint_attn (bool): If True, use torch.checkpoint for attention modules. Default: False. - use_checkpoint_ffn (bool): If True, use torch.checkpoint for feed-forward modules. Default: False. 
- """ - - def __init__(self, - dim, - input_resolution, - depth, - num_heads, - window_size, - mlp_ratio=2., - qkv_bias=True, - qk_scale=None, - drop_path=0., - norm_layer=nn.LayerNorm, - use_checkpoint_attn=False, - use_checkpoint_ffn=None - ): - super(RTMSA, self).__init__() - self.dim = dim - self.input_resolution = input_resolution - - self.residual_group = TMSAG(dim=dim, - input_resolution=input_resolution, - depth=depth, - num_heads=num_heads, - window_size=window_size, - mut_attn=False, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, qk_scale=qk_scale, - drop_path=drop_path, - norm_layer=norm_layer, - use_checkpoint_attn=use_checkpoint_attn, - use_checkpoint_ffn=use_checkpoint_ffn - ) - - self.linear = nn.Linear(dim, dim) - - def forward(self, x): - return x + self.linear(self.residual_group(x).transpose(1, 4)).transpose(1, 4) - - -class Stage(nn.Module): - """Residual Temporal Mutual Self Attention Group and Parallel Warping. - - Args: - in_dim (int): Number of input channels. - dim (int): Number of channels. - input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - num_heads (int): Number of attention heads. - mul_attn_ratio (float): Ratio of mutual attention layers. Default: 0.75. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - pa_frames (float): Number of warpped frames. Default: 2. - deformable_groups (float): Number of deformable groups. Default: 16. - reshape (str): Downscale (down), upscale (up) or keep the size (none). - max_residue_magnitude (float): Maximum magnitude of the residual of optical flow. - use_checkpoint_attn (bool): If True, use torch.checkpoint for attention modules. Default: False. - use_checkpoint_ffn (bool): If True, use torch.checkpoint for feed-forward modules. Default: False. 
- """ - - def __init__(self, - in_dim, - dim, - input_resolution, - depth, - num_heads, - window_size, - mul_attn_ratio=0.75, - mlp_ratio=2., - qkv_bias=True, - qk_scale=None, - drop_path=0., - norm_layer=nn.LayerNorm, - pa_frames=2, - deformable_groups=16, - reshape=None, - max_residue_magnitude=10, - use_checkpoint_attn=False, - use_checkpoint_ffn=False - ): - super(Stage, self).__init__() - self.pa_frames = pa_frames - - # reshape the tensor - if reshape == 'none': - self.reshape = nn.Sequential(Rearrange('n c d h w -> n d h w c'), - nn.LayerNorm(dim), - Rearrange('n d h w c -> n c d h w')) - elif reshape == 'down': - self.reshape = nn.Sequential(Rearrange('n c d (h neih) (w neiw) -> n d h w (neiw neih c)', neih=2, neiw=2), - nn.LayerNorm(4 * in_dim), nn.Linear(4 * in_dim, dim), - Rearrange('n d h w c -> n c d h w')) - elif reshape == 'up': - self.reshape = nn.Sequential(Rearrange('n (neiw neih c) d h w -> n d (h neih) (w neiw) c', neih=2, neiw=2), - nn.LayerNorm(in_dim // 4), nn.Linear(in_dim // 4, dim), - Rearrange('n d h w c -> n c d h w')) - - # mutual and self attention - self.residual_group1 = TMSAG(dim=dim, - input_resolution=input_resolution, - depth=int(depth * mul_attn_ratio), - num_heads=num_heads, - window_size=(2, window_size[1], window_size[2]), - mut_attn=True, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop_path=drop_path, - norm_layer=norm_layer, - use_checkpoint_attn=use_checkpoint_attn, - use_checkpoint_ffn=use_checkpoint_ffn - ) - self.linear1 = nn.Linear(dim, dim) - - # only self attention - self.residual_group2 = TMSAG(dim=dim, - input_resolution=input_resolution, - depth=depth - int(depth * mul_attn_ratio), - num_heads=num_heads, - window_size=window_size, - mut_attn=False, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, qk_scale=qk_scale, - drop_path=drop_path, - norm_layer=norm_layer, - use_checkpoint_attn=True, - use_checkpoint_ffn=use_checkpoint_ffn - ) - self.linear2 = nn.Linear(dim, dim) - - # parallel warping - self.pa_deform = DCNv2PackFlowGuided(dim, dim, 3, padding=1, deformable_groups=deformable_groups, - max_residue_magnitude=max_residue_magnitude, pa_frames=pa_frames) - self.pa_fuse = Mlp_GEGLU(dim * (1 + 2), dim * (1 + 2), dim) - - def forward(self, x, flows_backward, flows_forward): - x = self.reshape(x) - x = self.linear1(self.residual_group1(x).transpose(1, 4)).transpose(1, 4) + x - x = self.linear2(self.residual_group2(x).transpose(1, 4)).transpose(1, 4) + x - x = x.transpose(1, 2) - - x_backward, x_forward = getattr(self, f'get_aligned_feature_{self.pa_frames}frames')(x, flows_backward, flows_forward) - x = self.pa_fuse(torch.cat([x, x_backward, x_forward], 2).permute(0, 1, 3, 4, 2)).permute(0, 4, 1, 2, 3) - - return x - - def get_aligned_feature_2frames(self, x, flows_backward, flows_forward): - '''Parallel feature warping for 2 frames.''' - - # backward - n = x.size(1) - x_backward = [torch.zeros_like(x[:, -1, ...])] - for i in range(n - 1, 0, -1): - x_i = x[:, i, ...] - flow = flows_backward[0][:, i - 1, ...] - x_i_warped = flow_warp(x_i, flow.permute(0, 2, 3, 1), 'bilinear') # frame i+1 aligned towards i - x_backward.insert(0, self.pa_deform(x_i, [x_i_warped], x[:, i - 1, ...], [flow])) - - # forward - x_forward = [torch.zeros_like(x[:, 0, ...])] - for i in range(0, n - 1): - x_i = x[:, i, ...] - flow = flows_forward[0][:, i, ...] 
- x_i_warped = flow_warp(x_i, flow.permute(0, 2, 3, 1), 'bilinear') # frame i-1 aligned towards i - x_forward.append(self.pa_deform(x_i, [x_i_warped], x[:, i + 1, ...], [flow])) - - return [torch.stack(x_backward, 1), torch.stack(x_forward, 1)] - - def get_aligned_feature_4frames(self, x, flows_backward, flows_forward): - '''Parallel feature warping for 4 frames.''' - - # backward - n = x.size(1) - x_backward = [torch.zeros_like(x[:, -1, ...])] - for i in range(n, 1, -1): - x_i = x[:, i - 1, ...] - flow1 = flows_backward[0][:, i - 2, ...] - if i == n: - x_ii = torch.zeros_like(x[:, n - 2, ...]) - flow2 = torch.zeros_like(flows_backward[1][:, n - 3, ...]) - else: - x_ii = x[:, i, ...] - flow2 = flows_backward[1][:, i - 2, ...] - - x_i_warped = flow_warp(x_i, flow1.permute(0, 2, 3, 1), 'bilinear') # frame i+1 aligned towards i - x_ii_warped = flow_warp(x_ii, flow2.permute(0, 2, 3, 1), 'bilinear') # frame i+2 aligned towards i - x_backward.insert(0, - self.pa_deform(torch.cat([x_i, x_ii], 1), [x_i_warped, x_ii_warped], x[:, i - 2, ...], [flow1, flow2])) - - # forward - x_forward = [torch.zeros_like(x[:, 0, ...])] - for i in range(-1, n - 2): - x_i = x[:, i + 1, ...] - flow1 = flows_forward[0][:, i + 1, ...] - if i == -1: - x_ii = torch.zeros_like(x[:, 1, ...]) - flow2 = torch.zeros_like(flows_forward[1][:, 0, ...]) - else: - x_ii = x[:, i, ...] - flow2 = flows_forward[1][:, i, ...] - - x_i_warped = flow_warp(x_i, flow1.permute(0, 2, 3, 1), 'bilinear') # frame i-1 aligned towards i - x_ii_warped = flow_warp(x_ii, flow2.permute(0, 2, 3, 1), 'bilinear') # frame i-2 aligned towards i - x_forward.append( - self.pa_deform(torch.cat([x_i, x_ii], 1), [x_i_warped, x_ii_warped], x[:, i + 2, ...], [flow1, flow2])) - - return [torch.stack(x_backward, 1), torch.stack(x_forward, 1)] - - def get_aligned_feature_6frames(self, x, flows_backward, flows_forward): - '''Parallel feature warping for 6 frames.''' - - # backward - n = x.size(1) - x_backward = [torch.zeros_like(x[:, -1, ...])] - for i in range(n + 1, 2, -1): - x_i = x[:, i - 2, ...] - flow1 = flows_backward[0][:, i - 3, ...] - if i == n + 1: - x_ii = torch.zeros_like(x[:, -1, ...]) - flow2 = torch.zeros_like(flows_backward[1][:, -1, ...]) - x_iii = torch.zeros_like(x[:, -1, ...]) - flow3 = torch.zeros_like(flows_backward[2][:, -1, ...]) - elif i == n: - x_ii = x[:, i - 1, ...] - flow2 = flows_backward[1][:, i - 3, ...] - x_iii = torch.zeros_like(x[:, -1, ...]) - flow3 = torch.zeros_like(flows_backward[2][:, -1, ...]) - else: - x_ii = x[:, i - 1, ...] - flow2 = flows_backward[1][:, i - 3, ...] - x_iii = x[:, i, ...] - flow3 = flows_backward[2][:, i - 3, ...] - - x_i_warped = flow_warp(x_i, flow1.permute(0, 2, 3, 1), 'bilinear') # frame i+1 aligned towards i - x_ii_warped = flow_warp(x_ii, flow2.permute(0, 2, 3, 1), 'bilinear') # frame i+2 aligned towards i - x_iii_warped = flow_warp(x_iii, flow3.permute(0, 2, 3, 1), 'bilinear') # frame i+3 aligned towards i - x_backward.insert(0, - self.pa_deform(torch.cat([x_i, x_ii, x_iii], 1), [x_i_warped, x_ii_warped, x_iii_warped], - x[:, i - 3, ...], [flow1, flow2, flow3])) - - # forward - x_forward = [torch.zeros_like(x[:, 0, ...])] - for i in range(0, n - 1): - x_i = x[:, i, ...] - flow1 = flows_forward[0][:, i, ...] - if i == 0: - x_ii = torch.zeros_like(x[:, 0, ...]) - flow2 = torch.zeros_like(flows_forward[1][:, 0, ...]) - x_iii = torch.zeros_like(x[:, 0, ...]) - flow3 = torch.zeros_like(flows_forward[2][:, 0, ...]) - elif i == 1: - x_ii = x[:, i - 1, ...] - flow2 = flows_forward[1][:, i - 1, ...] 
- x_iii = torch.zeros_like(x[:, 0, ...]) - flow3 = torch.zeros_like(flows_forward[2][:, 0, ...]) - else: - x_ii = x[:, i - 1, ...] - flow2 = flows_forward[1][:, i - 1, ...] - x_iii = x[:, i - 2, ...] - flow3 = flows_forward[2][:, i - 2, ...] - - x_i_warped = flow_warp(x_i, flow1.permute(0, 2, 3, 1), 'bilinear') # frame i-1 aligned towards i - x_ii_warped = flow_warp(x_ii, flow2.permute(0, 2, 3, 1), 'bilinear') # frame i-2 aligned towards i - x_iii_warped = flow_warp(x_iii, flow3.permute(0, 2, 3, 1), 'bilinear') # frame i-3 aligned towards i - x_forward.append(self.pa_deform(torch.cat([x_i, x_ii, x_iii], 1), [x_i_warped, x_ii_warped, x_iii_warped], - x[:, i + 1, ...], [flow1, flow2, flow3])) - - return [torch.stack(x_backward, 1), torch.stack(x_forward, 1)] - - -class VRT(nn.Module): - """ Video Restoration Transformer (VRT). - A PyTorch impl of : `VRT: A Video Restoration Transformer` - - https://arxiv.org/pdf/2201.00000 - - Args: - upscale (int): Upscaling factor. Set as 1 for video deblurring, etc. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - img_size (int | tuple(int)): Size of input image. Default: [6, 64, 64]. - window_size (int | tuple(int)): Window size. Default: (6,8,8). - depths (list[int]): Depths of each Transformer stage. - indep_reconsts (list[int]): Layers that extract features of different frames independently. - embed_dims (list[int]): Number of linear projection output channels. - num_heads (list[int]): Number of attention head of each stage. - mul_attn_ratio (float): Ratio of mutual attention layers. Default: 0.75. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 2. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True. - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (obj): Normalization layer. Default: nn.LayerNorm. - spynet_path (str): Pretrained SpyNet model path. - pa_frames (float): Number of warpped frames. Default: 2. - deformable_groups (float): Number of deformable groups. Default: 16. - recal_all_flows (bool): If True, derive (t,t+2) and (t,t+3) flows from (t,t+1). Default: False. - nonblind_denoising (bool): If True, conduct experiments on non-blind denoising. Default: False. - use_checkpoint_attn (bool): If True, use torch.checkpoint for attention modules. Default: False. - use_checkpoint_ffn (bool): If True, use torch.checkpoint for feed-forward modules. Default: False. - no_checkpoint_attn_blocks (list[int]): Layers without torch.checkpoint for attention modules. - no_checkpoint_ffn_blocks (list[int]): Layers without torch.checkpoint for feed-forward modules. 
- """ - - def __init__(self, - upscale=4, - in_chans=3, - img_size=[6, 64, 64], - window_size=[6, 8, 8], - depths=[8, 8, 8, 8, 8, 8, 8, 4, 4, 4, 4, 4, 4], - indep_reconsts=[11, 12], - embed_dims=[120, 120, 120, 120, 120, 120, 120, 180, 180, 180, 180, 180, 180], - num_heads=[6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6], - mul_attn_ratio=0.75, - mlp_ratio=2., - qkv_bias=True, - qk_scale=None, - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - spynet_path=None, - pa_frames=2, - deformable_groups=16, - recal_all_flows=False, - nonblind_denoising=False, - use_checkpoint_attn=False, - use_checkpoint_ffn=False, - no_checkpoint_attn_blocks=[], - no_checkpoint_ffn_blocks=[], - ): - super().__init__() - self.in_chans = in_chans - self.upscale = upscale - self.pa_frames = pa_frames - self.recal_all_flows = recal_all_flows - self.nonblind_denoising = nonblind_denoising - - # conv_first - self.conv_first = nn.Conv3d(in_chans*(1+2*4)+1 if self.nonblind_denoising else in_chans*(1+2*4), - embed_dims[0], kernel_size=(1, 3, 3), padding=(0, 1, 1)) - - # main body - self.spynet = SpyNet(spynet_path, [2, 3, 4, 5]) - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - reshapes = ['none', 'down', 'down', 'down', 'up', 'up', 'up'] - scales = [1, 2, 4, 8, 4, 2, 1] - use_checkpoint_attns = [False if i in no_checkpoint_attn_blocks else use_checkpoint_attn for i in - range(len(depths))] - use_checkpoint_ffns = [False if i in no_checkpoint_ffn_blocks else use_checkpoint_ffn for i in - range(len(depths))] - - # stage 1- 7 - for i in range(7): - setattr(self, f'stage{i + 1}', - Stage( - in_dim=embed_dims[i - 1], - dim=embed_dims[i], - input_resolution=(img_size[0], img_size[1] // scales[i], img_size[2] // scales[i]), - depth=depths[i], - num_heads=num_heads[i], - mul_attn_ratio=mul_attn_ratio, - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop_path=dpr[sum(depths[:i]):sum(depths[:i + 1])], - norm_layer=norm_layer, - pa_frames=pa_frames, - deformable_groups=deformable_groups, - reshape=reshapes[i], - max_residue_magnitude=10 / scales[i], - use_checkpoint_attn=use_checkpoint_attns[i], - use_checkpoint_ffn=use_checkpoint_ffns[i], - ) - ) - - # stage 8 - self.stage8 = nn.ModuleList( - [nn.Sequential( - Rearrange('n c d h w -> n d h w c'), - nn.LayerNorm(embed_dims[6]), - nn.Linear(embed_dims[6], embed_dims[7]), - Rearrange('n d h w c -> n c d h w') - )] - ) - for i in range(7, len(depths)): - self.stage8.append( - RTMSA(dim=embed_dims[i], - input_resolution=img_size, - depth=depths[i], - num_heads=num_heads[i], - window_size=[1, window_size[1], window_size[2]] if i in indep_reconsts else window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, qk_scale=qk_scale, - drop_path=dpr[sum(depths[:i]):sum(depths[:i + 1])], - norm_layer=norm_layer, - use_checkpoint_attn=use_checkpoint_attns[i], - use_checkpoint_ffn=use_checkpoint_ffns[i] - ) - ) - - self.norm = norm_layer(embed_dims[-1]) - self.conv_after_body = nn.Linear(embed_dims[-1], embed_dims[0]) - - # reconstruction - num_feat = 64 - if self.upscale == 1: - # for video deblurring, etc. 
- self.conv_last = nn.Conv3d(embed_dims[0], in_chans, kernel_size=(1, 3, 3), padding=(0, 1, 1)) - else: - # for video sr - self.conv_before_upsample = nn.Sequential( - nn.Conv3d(embed_dims[0], num_feat, kernel_size=(1, 3, 3), padding=(0, 1, 1)), - nn.LeakyReLU(inplace=True)) - self.upsample = Upsample(upscale, num_feat) - self.conv_last = nn.Conv3d(num_feat, in_chans, kernel_size=(1, 3, 3), padding=(0, 1, 1)) - - def forward(self, x): - # x: (N, D, C, H, W) - - # obtain noise level map - if self.nonblind_denoising: - x, noise_level_map = x[:, :, :self.in_chans, :, :], x[:, :, self.in_chans:, :, :] - - x_lq = x.clone() - - # calculate flows - flows_backward, flows_forward = self.get_flows(x) - - # warp input - x_backward, x_forward = self.get_aligned_image_2frames(x, flows_backward[0], flows_forward[0]) - x = torch.cat([x, x_backward, x_forward], 2) - - # concatenate noise level map - if self.nonblind_denoising: - x = torch.cat([x, noise_level_map], 2) - - # main network - if self.upscale == 1: - # video deblurring, etc. - x = self.conv_first(x.transpose(1, 2)) - x = x + self.conv_after_body( - self.forward_features(x, flows_backward, flows_forward).transpose(1, 4)).transpose(1, 4) - x = self.conv_last(x).transpose(1, 2) - return x + x_lq - else: - # video sr - x = self.conv_first(x.transpose(1, 2)) - x = x + self.conv_after_body( - self.forward_features(x, flows_backward, flows_forward).transpose(1, 4)).transpose(1, 4) - x = self.conv_last(self.upsample(self.conv_before_upsample(x))).transpose(1, 2) - _, _, C, H, W = x.shape - return x + torch.nn.functional.interpolate(x_lq, size=(C, H, W), mode='trilinear', align_corners=False) - - def get_flows(self, x): - ''' Get flows for 2 frames, 4 frames or 6 frames.''' - - if self.pa_frames == 2: - flows_backward, flows_forward = self.get_flow_2frames(x) - elif self.pa_frames == 4: - flows_backward_2frames, flows_forward_2frames = self.get_flow_2frames(x) - flows_backward_4frames, flows_forward_4frames = self.get_flow_4frames(flows_forward_2frames, flows_backward_2frames) - flows_backward = flows_backward_2frames + flows_backward_4frames - flows_forward = flows_forward_2frames + flows_forward_4frames - elif self.pa_frames == 6: - flows_backward_2frames, flows_forward_2frames = self.get_flow_2frames(x) - flows_backward_4frames, flows_forward_4frames = self.get_flow_4frames(flows_forward_2frames, flows_backward_2frames) - flows_backward_6frames, flows_forward_6frames = self.get_flow_6frames(flows_forward_2frames, flows_backward_2frames, flows_forward_4frames, flows_backward_4frames) - flows_backward = flows_backward_2frames + flows_backward_4frames + flows_backward_6frames - flows_forward = flows_forward_2frames + flows_forward_4frames + flows_forward_6frames - - return flows_backward, flows_forward - - def get_flow_2frames(self, x): - '''Get flow between frames t and t+1 from x.''' - - b, n, c, h, w = x.size() - x_1 = x[:, :-1, :, :, :].reshape(-1, c, h, w) - x_2 = x[:, 1:, :, :, :].reshape(-1, c, h, w) - - # backward - flows_backward = self.spynet(x_1, x_2) - flows_backward = [flow.view(b, n-1, 2, h // (2 ** i), w // (2 ** i)) for flow, i in - zip(flows_backward, range(4))] - - # forward - flows_forward = self.spynet(x_2, x_1) - flows_forward = [flow.view(b, n-1, 2, h // (2 ** i), w // (2 ** i)) for flow, i in - zip(flows_forward, range(4))] - - return flows_backward, flows_forward - - def get_flow_4frames(self, flows_forward, flows_backward): - '''Get flow between t and t+2 from (t,t+1) and (t+1,t+2).''' - - # backward - d = 
flows_forward[0].shape[1] - flows_backward2 = [] - for flows in flows_backward: - flow_list = [] - for i in range(d - 1, 0, -1): - flow_n1 = flows[:, i - 1, :, :, :] # flow from i+1 to i - flow_n2 = flows[:, i, :, :, :] # flow from i+2 to i+1 - flow_list.insert(0, flow_n1 + flow_warp(flow_n2, flow_n1.permute(0, 2, 3, 1))) # flow from i+2 to i - flows_backward2.append(torch.stack(flow_list, 1)) - - # forward - flows_forward2 = [] - for flows in flows_forward: - flow_list = [] - for i in range(1, d): - flow_n1 = flows[:, i, :, :, :] # flow from i-1 to i - flow_n2 = flows[:, i - 1, :, :, :] # flow from i-2 to i-1 - flow_list.append(flow_n1 + flow_warp(flow_n2, flow_n1.permute(0, 2, 3, 1))) # flow from i-2 to i - flows_forward2.append(torch.stack(flow_list, 1)) - - return flows_backward2, flows_forward2 - - def get_flow_6frames(self, flows_forward, flows_backward, flows_forward2, flows_backward2): - '''Get flow between t and t+3 from (t,t+2) and (t+2,t+3).''' - - # backward - d = flows_forward2[0].shape[1] - flows_backward3 = [] - for flows, flows2 in zip(flows_backward, flows_backward2): - flow_list = [] - for i in range(d - 1, 0, -1): - flow_n1 = flows2[:, i - 1, :, :, :] # flow from i+2 to i - flow_n2 = flows[:, i + 1, :, :, :] # flow from i+3 to i+2 - flow_list.insert(0, flow_n1 + flow_warp(flow_n2, flow_n1.permute(0, 2, 3, 1))) # flow from i+3 to i - flows_backward3.append(torch.stack(flow_list, 1)) - - # forward - flows_forward3 = [] - for flows, flows2 in zip(flows_forward, flows_forward2): - flow_list = [] - for i in range(2, d + 1): - flow_n1 = flows2[:, i - 1, :, :, :] # flow from i-2 to i - flow_n2 = flows[:, i - 2, :, :, :] # flow from i-3 to i-2 - flow_list.append(flow_n1 + flow_warp(flow_n2, flow_n1.permute(0, 2, 3, 1))) # flow from i-3 to i - flows_forward3.append(torch.stack(flow_list, 1)) - - return flows_backward3, flows_forward3 - - def get_aligned_image_2frames(self, x, flows_backward, flows_forward): - '''Parallel feature warping for 2 frames.''' - - # backward - n = x.size(1) - x_backward = [torch.zeros_like(x[:, -1, ...]).repeat(1, 4, 1, 1)] - for i in range(n - 1, 0, -1): - x_i = x[:, i, ...] - flow = flows_backward[:, i - 1, ...] - x_backward.insert(0, flow_warp(x_i, flow.permute(0, 2, 3, 1), 'nearest4')) # frame i+1 aligned towards i - - # forward - x_forward = [torch.zeros_like(x[:, 0, ...]).repeat(1, 4, 1, 1)] - for i in range(0, n - 1): - x_i = x[:, i, ...] - flow = flows_forward[:, i, ...] 
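            # Image-level warping: flow_warp in 'nearest4' mode returns four
            # nearest-neighbour warped copies of the frame (4x the channels), which is
            # why the zero placeholders above use .repeat(1, 4, 1, 1) and conv_first is
            # built for in_chans * (1 + 2 * 4) input channels.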
- x_forward.append(flow_warp(x_i, flow.permute(0, 2, 3, 1), 'nearest4')) # frame i-1 aligned towards i - - return [torch.stack(x_backward, 1), torch.stack(x_forward, 1)] - - def forward_features(self, x, flows_backward, flows_forward): - '''Main network for feature extraction.''' - - x1 = self.stage1(x, flows_backward[0::4], flows_forward[0::4]) - x2 = self.stage2(x1, flows_backward[1::4], flows_forward[1::4]) - x3 = self.stage3(x2, flows_backward[2::4], flows_forward[2::4]) - x4 = self.stage4(x3, flows_backward[3::4], flows_forward[3::4]) - x = self.stage5(x4, flows_backward[2::4], flows_forward[2::4]) - x = self.stage6(x + x3, flows_backward[1::4], flows_forward[1::4]) - x = self.stage7(x + x2, flows_backward[0::4], flows_forward[0::4]) - x = x + x1 - - for layer in self.stage8: - x = layer(x) - - x = rearrange(x, 'n c d h w -> n d h w c') - x = self.norm(x) - x = rearrange(x, 'n d h w c -> n c d h w') - - return x - - -if __name__ == '__main__': - device = torch.device('cpu') - upscale = 4 - window_size = 8 - height = (256 // upscale // window_size) * window_size - width = (256 // upscale // window_size) * window_size - - model = VRT(upscale=4, - img_size=[6, 64, 64], - window_size=[6, 8, 8], - depths=[8, 8, 8, 8, 8, 8, 8, 4, 4, 4, 4, 4, 4], - indep_reconsts=[11, 12], - embed_dims=[120, 120, 120, 120, 120, 120, 120, 180, 180, 180, 180, 180, 180], - num_heads=[6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6], - spynet_path=None, - pa_frames=2, - deformable_groups=12 - ).to(device) - print(model) - print('{:>16s} : {:<.4f} [M]'.format('#Params', sum(map(lambda x: x.numel(), model.parameters())) / 10 ** 6)) - - x = torch.randn((2, 12, 3, height, width)).to(device) - x = model(x) - print(x.shape) diff --git a/spaces/lc202301/ChuanhuChatGPT/app.py b/spaces/lc202301/ChuanhuChatGPT/app.py deleted file mode 100644 index 7da62a5dabf7ad74e60400f4b8a4d4401c7f79ec..0000000000000000000000000000000000000000 --- a/spaces/lc202301/ChuanhuChatGPT/app.py +++ /dev/null @@ -1,455 +0,0 @@ -# -*- coding:utf-8 -*- -import os -import logging -import sys - -import gradio as gr - -from utils import * -from presets import * -from overwrites import * -from chat_func import * - -logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -my_api_key = "" # 在这里输入你的 API 密钥 - -# if we are running in Docker -if os.environ.get("dockerrun") == "yes": - dockerflag = True -else: - dockerflag = False - -authflag = False - -if dockerflag: - my_api_key = os.environ.get("my_api_key") - if my_api_key == "empty": - logging.error("Please give a api key!") - sys.exit(1) - # auth - username = os.environ.get("USERNAME") - password = os.environ.get("PASSWORD") - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - authflag = True -else: - if ( - not my_api_key - and os.path.exists("api_key.txt") - and os.path.getsize("api_key.txt") - ): - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip() - if os.path.exists("auth.json"): - with open("auth.json", "r") as f: - auth = json.load(f) - username = auth["username"] - password = auth["password"] - if username != "" and password != "": - authflag = True - -gr.Chatbot.postprocess = postprocess -PromptHelper.compact_text_chunks = compact_text_chunks - -with open("custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -with gr.Blocks( - css=customCSS, - theme=gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#02C160", - c100="rgba(2, 193, 96, 0.2)", - c200="#02C160", - 
c300="rgba(2, 193, 96, 0.32)", - c400="rgba(2, 193, 96, 0.32)", - c500="rgba(2, 193, 96, 1.0)", - c600="rgba(2, 193, 96, 1.0)", - c700="rgba(2, 193, 96, 0.32)", - c800="rgba(2, 193, 96, 0.32)", - c900="#02C160", - c950="#02C160", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f9fafb", - c100="#f3f4f6", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - c900="#272727", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - button_primary_background_fill="#06AE56", - button_primary_background_fill_dark="#06AE56", - button_primary_background_fill_hover="#07C863", - button_primary_border_color="#06AE56", - button_primary_border_color_dark="#06AE56", - button_primary_text_color="#FFFFFF", - button_primary_text_color_dark="#FFFFFF", - button_secondary_background_fill="#F2F2F2", - button_secondary_background_fill_dark="#2B2B2B", - button_secondary_text_color="#393939", - button_secondary_text_color_dark="#FFFFFF", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - block_title_text_color="*primary_500", - block_title_background_fill="*primary_100", - input_background_fill="#F6F6F6", - ), -) as demo: - history = gr.State([]) - token_count = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - user_api_key = gr.State(my_api_key) - TRUECOMSTANT = gr.State(True) - FALSECONSTANT = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - gr.HTML(title) - status_display = gr.Markdown(get_geoip(), elem_id="status_display") - - with gr.Row(scale=1).style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(scale=1): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(scale=1): - with gr.Column(scale=12): - user_input = gr.Textbox( - show_label=False, placeholder="在这里输入" - ).style(container=False) - with gr.Column(min_width=70, scale=1): - submitBtn = gr.Button("发送", variant="primary") - with gr.Row(scale=1): - emptyBtn = gr.Button( - "🧹 新的对话", - ) - retryBtn = gr.Button("🔄 重新生成") - delLastBtn = gr.Button("🗑️ 删除一条对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label="ChatGPT"): - keyTxt = gr.Textbox( - show_label=True, - placeholder=f"OpenAI API-key...", - value=hide_middle_chars(my_api_key), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - model_select_dropdown = gr.Dropdown( - label="选择模型", choices=MODELS, multiselect=False, value=MODELS[0] - ) - use_streaming_checkbox = gr.Checkbox( - label="实时传输回答", value=True, visible=enable_streaming_option - ) - use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False) - index_files = gr.Files(label="上传索引文件", type="file", multiple=True) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入System Prompt...", - label="System prompt", - value=initial_prompt, - lines=10, - ).style(container=False) - with gr.Accordion(label="加载Prompt模板", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label="选择Prompt模板集合文件", - choices=get_template_names(plain=True), - multiselect=False, - 
value=get_template_names(plain=True)[0], - ).style(container=False) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label="从Prompt模板中加载", - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - value=load_template( - get_template_names(plain=True)[0], mode=1 - )[0], - ).style(container=False) - - with gr.Tab(label="保存/加载"): - with gr.Accordion(label="保存/加载对话历史记录", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label="从列表中加载对话", - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - ) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=f"设置文件名: 默认为.json,可选为.md", - label="设置保存文件名", - value="对话历史记录", - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - exportMarkdownBtn = gr.Button("📝 导出为Markdown") - gr.Markdown("默认保存于history文件夹") - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label="高级"): - default_btn = gr.Button("🔙 恢复默认设置") - gr.Markdown("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置") - - with gr.Accordion("参数", open=False): - top_p = gr.Slider( - minimum=-0, - maximum=1.0, - value=1.0, - step=0.05, - interactive=True, - label="Top-p", - ) - temperature = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="Temperature", - ) - - apiurlTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入API地址...", - label="API地址", - value="https://api.openai.com/v1/chat/completions", - lines=2, - ) - changeAPIURLBtn = gr.Button("🔄 切换API地址") - proxyTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入代理地址...", - label="代理地址(示例:http://127.0.0.1:10809)", - value="", - lines=2, - ) - changeProxyBtn = gr.Button("🔄 设置代理地址") - - gr.Markdown(description) - - keyTxt.submit(submit_key, keyTxt, [user_api_key, status_display]) - keyTxt.change(submit_key, keyTxt, [user_api_key, status_display]) - # Chatbot - user_input.submit( - predict, - [ - user_api_key, - systemPromptTxt, - history, - user_input, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - user_input.submit(reset_textbox, [], [user_input]) - - submitBtn.click( - predict, - [ - user_api_key, - systemPromptTxt, - history, - user_input, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - submitBtn.click(reset_textbox, [], [user_input]) - - emptyBtn.click( - reset_state, - outputs=[chatbot, history, token_count, status_display], - show_progress=True, - ) - - retryBtn.click( - retry, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - - delLastBtn.click( - delete_last_conversation, - [chatbot, history, token_count], - [chatbot, history, token_count, status_display], - show_progress=True, - ) - - reduceTokenBtn.click( - reduce_token_size, 
- [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - gr.State(0), - model_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - - # Template - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - historyFileSelectDropdown.change( - load_chat_history, - [historyFileSelectDropdown, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - show_progress=True, - ) - downloadFile.change( - load_chat_history, - [downloadFile, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - ) - - # Advanced - default_btn.click( - reset_default, [], [apiurlTxt, proxyTxt, status_display], show_progress=True - ) - changeAPIURLBtn.click( - change_api_url, - [apiurlTxt], - [status_display], - show_progress=True, - ) - changeProxyBtn.click( - change_proxy, - [proxyTxt], - [status_display], - show_progress=True, - ) - -logging.info( - colorama.Back.GREEN - + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" - + colorama.Style.RESET_ALL -) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "川虎ChatGPT 🚀" - -if __name__ == "__main__": - reload_javascript() - # if running in Docker - if dockerflag: - if authflag: - demo.queue().launch( - server_name="0.0.0.0", server_port=7860, auth=(username, password), - favicon_path="./assets/favicon.png" - ) - else: - demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False, favicon_path="./assets/favicon.png") - # if not running in Docker - else: - if authflag: - demo.queue().launch(share=False, auth=(username, password), favicon_path="./assets/favicon.png", inbrowser=True) - else: - demo.queue(concurrency_count=1000).launch(share=False, favicon_path="./assets/favicon.ico", inbrowser=True) # 改为 share=True 可以创建公开分享链接 - # demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - # demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - # demo.queue().launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/lewiswu1209/MockingBird/ppg_extractor/nets_utils.py b/spaces/lewiswu1209/MockingBird/ppg_extractor/nets_utils.py deleted file mode 100644 index 6db064b7a829ad7c45dd17e9f5a4fc92c95a72f4..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/ppg_extractor/nets_utils.py +++ /dev/null @@ -1,465 +0,0 @@ -# -*- coding: utf-8 -*- - -"""Network related utility tools.""" - -import logging -from typing import Dict - -import numpy as np -import torch - - -def to_device(m, x): - """Send tensor into the device of the module. - - Args: - m (torch.nn.Module): Torch module. - x (Tensor): Torch tensor. 
- - Returns: - Tensor: Torch tensor located in the same place as torch module. - - """ - assert isinstance(m, torch.nn.Module) - device = next(m.parameters()).device - return x.to(device) - - -def pad_list(xs, pad_value): - """Perform padding for the list of tensors. - - Args: - xs (List): List of Tensors [(T_1, `*`), (T_2, `*`), ..., (T_B, `*`)]. - pad_value (float): Value for padding. - - Returns: - Tensor: Padded tensor (B, Tmax, `*`). - - Examples: - >>> x = [torch.ones(4), torch.ones(2), torch.ones(1)] - >>> x - [tensor([1., 1., 1., 1.]), tensor([1., 1.]), tensor([1.])] - >>> pad_list(x, 0) - tensor([[1., 1., 1., 1.], - [1., 1., 0., 0.], - [1., 0., 0., 0.]]) - - """ - n_batch = len(xs) - max_len = max(x.size(0) for x in xs) - pad = xs[0].new(n_batch, max_len, *xs[0].size()[1:]).fill_(pad_value) - - for i in range(n_batch): - pad[i, :xs[i].size(0)] = xs[i] - - return pad - - -def make_pad_mask(lengths, xs=None, length_dim=-1): - """Make mask tensor containing indices of padded part. - - Args: - lengths (LongTensor or List): Batch of lengths (B,). - xs (Tensor, optional): The reference tensor. If set, masks will be the same shape as this tensor. - length_dim (int, optional): Dimension indicator of the above tensor. See the example. - - Returns: - Tensor: Mask tensor containing indices of padded part. - dtype=torch.uint8 in PyTorch 1.2- - dtype=torch.bool in PyTorch 1.2+ (including 1.2) - - Examples: - With only lengths. - - >>> lengths = [5, 3, 2] - >>> make_non_pad_mask(lengths) - masks = [[0, 0, 0, 0 ,0], - [0, 0, 0, 1, 1], - [0, 0, 1, 1, 1]] - - With the reference tensor. - - >>> xs = torch.zeros((3, 2, 4)) - >>> make_pad_mask(lengths, xs) - tensor([[[0, 0, 0, 0], - [0, 0, 0, 0]], - [[0, 0, 0, 1], - [0, 0, 0, 1]], - [[0, 0, 1, 1], - [0, 0, 1, 1]]], dtype=torch.uint8) - >>> xs = torch.zeros((3, 2, 6)) - >>> make_pad_mask(lengths, xs) - tensor([[[0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1]], - [[0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1]], - [[0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8) - - With the reference tensor and dimension indicator. 
- - >>> xs = torch.zeros((3, 6, 6)) - >>> make_pad_mask(lengths, xs, 1) - tensor([[[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1]], - [[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1]], - [[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1]]], dtype=torch.uint8) - >>> make_pad_mask(lengths, xs, 2) - tensor([[[0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1]], - [[0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1]], - [[0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8) - - """ - if length_dim == 0: - raise ValueError('length_dim cannot be 0: {}'.format(length_dim)) - - if not isinstance(lengths, list): - lengths = lengths.tolist() - bs = int(len(lengths)) - if xs is None: - maxlen = int(max(lengths)) - else: - maxlen = xs.size(length_dim) - - seq_range = torch.arange(0, maxlen, dtype=torch.int64) - seq_range_expand = seq_range.unsqueeze(0).expand(bs, maxlen) - seq_length_expand = seq_range_expand.new(lengths).unsqueeze(-1) - mask = seq_range_expand >= seq_length_expand - - if xs is not None: - assert xs.size(0) == bs, (xs.size(0), bs) - - if length_dim < 0: - length_dim = xs.dim() + length_dim - # ind = (:, None, ..., None, :, , None, ..., None) - ind = tuple(slice(None) if i in (0, length_dim) else None - for i in range(xs.dim())) - mask = mask[ind].expand_as(xs).to(xs.device) - return mask - - -def make_non_pad_mask(lengths, xs=None, length_dim=-1): - """Make mask tensor containing indices of non-padded part. - - Args: - lengths (LongTensor or List): Batch of lengths (B,). - xs (Tensor, optional): The reference tensor. If set, masks will be the same shape as this tensor. - length_dim (int, optional): Dimension indicator of the above tensor. See the example. - - Returns: - ByteTensor: mask tensor containing indices of padded part. - dtype=torch.uint8 in PyTorch 1.2- - dtype=torch.bool in PyTorch 1.2+ (including 1.2) - - Examples: - With only lengths. - - >>> lengths = [5, 3, 2] - >>> make_non_pad_mask(lengths) - masks = [[1, 1, 1, 1 ,1], - [1, 1, 1, 0, 0], - [1, 1, 0, 0, 0]] - - With the reference tensor. - - >>> xs = torch.zeros((3, 2, 4)) - >>> make_non_pad_mask(lengths, xs) - tensor([[[1, 1, 1, 1], - [1, 1, 1, 1]], - [[1, 1, 1, 0], - [1, 1, 1, 0]], - [[1, 1, 0, 0], - [1, 1, 0, 0]]], dtype=torch.uint8) - >>> xs = torch.zeros((3, 2, 6)) - >>> make_non_pad_mask(lengths, xs) - tensor([[[1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0]], - [[1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0]], - [[1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8) - - With the reference tensor and dimension indicator. 
- - >>> xs = torch.zeros((3, 6, 6)) - >>> make_non_pad_mask(lengths, xs, 1) - tensor([[[1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0]], - [[1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0]], - [[1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0]]], dtype=torch.uint8) - >>> make_non_pad_mask(lengths, xs, 2) - tensor([[[1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0]], - [[1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0]], - [[1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8) - - """ - return ~make_pad_mask(lengths, xs, length_dim) - - -def mask_by_length(xs, lengths, fill=0): - """Mask tensor according to length. - - Args: - xs (Tensor): Batch of input tensor (B, `*`). - lengths (LongTensor or List): Batch of lengths (B,). - fill (int or float): Value to fill masked part. - - Returns: - Tensor: Batch of masked input tensor (B, `*`). - - Examples: - >>> x = torch.arange(5).repeat(3, 1) + 1 - >>> x - tensor([[1, 2, 3, 4, 5], - [1, 2, 3, 4, 5], - [1, 2, 3, 4, 5]]) - >>> lengths = [5, 3, 2] - >>> mask_by_length(x, lengths) - tensor([[1, 2, 3, 4, 5], - [1, 2, 3, 0, 0], - [1, 2, 0, 0, 0]]) - - """ - assert xs.size(0) == len(lengths) - ret = xs.data.new(*xs.size()).fill_(fill) - for i, l in enumerate(lengths): - ret[i, :l] = xs[i, :l] - return ret - - -def th_accuracy(pad_outputs, pad_targets, ignore_label): - """Calculate accuracy. - - Args: - pad_outputs (Tensor): Prediction tensors (B * Lmax, D). - pad_targets (LongTensor): Target label tensors (B, Lmax, D). - ignore_label (int): Ignore label id. - - Returns: - float: Accuracy value (0.0 - 1.0). - - """ - pad_pred = pad_outputs.view( - pad_targets.size(0), - pad_targets.size(1), - pad_outputs.size(1)).argmax(2) - mask = pad_targets != ignore_label - numerator = torch.sum(pad_pred.masked_select(mask) == pad_targets.masked_select(mask)) - denominator = torch.sum(mask) - return float(numerator) / float(denominator) - - -def to_torch_tensor(x): - """Change to torch.Tensor or ComplexTensor from numpy.ndarray. - - Args: - x: Inputs. It should be one of numpy.ndarray, Tensor, ComplexTensor, and dict. - - Returns: - Tensor or ComplexTensor: Type converted inputs. 
- - Examples: - >>> xs = np.ones(3, dtype=np.float32) - >>> xs = to_torch_tensor(xs) - tensor([1., 1., 1.]) - >>> xs = torch.ones(3, 4, 5) - >>> assert to_torch_tensor(xs) is xs - >>> xs = {'real': xs, 'imag': xs} - >>> to_torch_tensor(xs) - ComplexTensor( - Real: - tensor([1., 1., 1.]) - Imag; - tensor([1., 1., 1.]) - ) - - """ - # If numpy, change to torch tensor - if isinstance(x, np.ndarray): - if x.dtype.kind == 'c': - # Dynamically importing because torch_complex requires python3 - from torch_complex.tensor import ComplexTensor - return ComplexTensor(x) - else: - return torch.from_numpy(x) - - # If {'real': ..., 'imag': ...}, convert to ComplexTensor - elif isinstance(x, dict): - # Dynamically importing because torch_complex requires python3 - from torch_complex.tensor import ComplexTensor - - if 'real' not in x or 'imag' not in x: - raise ValueError("has 'real' and 'imag' keys: {}".format(list(x))) - # Relative importing because of using python3 syntax - return ComplexTensor(x['real'], x['imag']) - - # If torch.Tensor, as it is - elif isinstance(x, torch.Tensor): - return x - - else: - error = ("x must be numpy.ndarray, torch.Tensor or a dict like " - "{{'real': torch.Tensor, 'imag': torch.Tensor}}, " - "but got {}".format(type(x))) - try: - from torch_complex.tensor import ComplexTensor - except Exception: - # If PY2 - raise ValueError(error) - else: - # If PY3 - if isinstance(x, ComplexTensor): - return x - else: - raise ValueError(error) - - -def get_subsample(train_args, mode, arch): - """Parse the subsampling factors from the training args for the specified `mode` and `arch`. - - Args: - train_args: argument Namespace containing options. - mode: one of ('asr', 'mt', 'st') - arch: one of ('rnn', 'rnn-t', 'rnn_mix', 'rnn_mulenc', 'transformer') - - Returns: - np.ndarray / List[np.ndarray]: subsampling factors. - """ - if arch == 'transformer': - return np.array([1]) - - elif mode == 'mt' and arch == 'rnn': - # +1 means input (+1) and layers outputs (train_args.elayer) - subsample = np.ones(train_args.elayers + 1, dtype=np.int) - logging.warning('Subsampling is not performed for machine translation.') - logging.info('subsample: ' + ' '.join([str(x) for x in subsample])) - return subsample - - elif (mode == 'asr' and arch in ('rnn', 'rnn-t')) or \ - (mode == 'mt' and arch == 'rnn') or \ - (mode == 'st' and arch == 'rnn'): - subsample = np.ones(train_args.elayers + 1, dtype=np.int) - if train_args.etype.endswith("p") and not train_args.etype.startswith("vgg"): - ss = train_args.subsample.split("_") - for j in range(min(train_args.elayers + 1, len(ss))): - subsample[j] = int(ss[j]) - else: - logging.warning( - 'Subsampling is not performed for vgg*. It is performed in max pooling layers at CNN.') - logging.info('subsample: ' + ' '.join([str(x) for x in subsample])) - return subsample - - elif mode == 'asr' and arch == 'rnn_mix': - subsample = np.ones(train_args.elayers_sd + train_args.elayers + 1, dtype=np.int) - if train_args.etype.endswith("p") and not train_args.etype.startswith("vgg"): - ss = train_args.subsample.split("_") - for j in range(min(train_args.elayers_sd + train_args.elayers + 1, len(ss))): - subsample[j] = int(ss[j]) - else: - logging.warning( - 'Subsampling is not performed for vgg*. 
It is performed in max pooling layers at CNN.') - logging.info('subsample: ' + ' '.join([str(x) for x in subsample])) - return subsample - - elif mode == 'asr' and arch == 'rnn_mulenc': - subsample_list = [] - for idx in range(train_args.num_encs): - subsample = np.ones(train_args.elayers[idx] + 1, dtype=np.int) - if train_args.etype[idx].endswith("p") and not train_args.etype[idx].startswith("vgg"): - ss = train_args.subsample[idx].split("_") - for j in range(min(train_args.elayers[idx] + 1, len(ss))): - subsample[j] = int(ss[j]) - else: - logging.warning( - 'Encoder %d: Subsampling is not performed for vgg*. ' - 'It is performed in max pooling layers at CNN.', idx + 1) - logging.info('subsample: ' + ' '.join([str(x) for x in subsample])) - subsample_list.append(subsample) - return subsample_list - - else: - raise ValueError('Invalid options: mode={}, arch={}'.format(mode, arch)) - - -def rename_state_dict(old_prefix: str, new_prefix: str, state_dict: Dict[str, torch.Tensor]): - """Replace keys of old prefix with new prefix in state dict.""" - # need this list not to break the dict iterator - old_keys = [k for k in state_dict if k.startswith(old_prefix)] - if len(old_keys) > 0: - logging.warning(f'Rename: {old_prefix} -> {new_prefix}') - for k in old_keys: - v = state_dict.pop(k) - new_k = k.replace(old_prefix, new_prefix) - state_dict[new_k] = v - -def get_activation(act): - """Return activation function.""" - # Lazy load to avoid unused import - from .encoder.swish import Swish - - activation_funcs = { - "hardtanh": torch.nn.Hardtanh, - "relu": torch.nn.ReLU, - "selu": torch.nn.SELU, - "swish": Swish, - } - - return activation_funcs[act]() diff --git a/spaces/lfoppiano/document-qa/Dockerfile b/spaces/lfoppiano/document-qa/Dockerfile deleted file mode 100644 index 69da66e6a7c3c689e08c92b89fbee7313494f4a2..0000000000000000000000000000000000000000 --- a/spaces/lfoppiano/document-qa/Dockerfile +++ /dev/null @@ -1,33 +0,0 @@ -FROM python:3.9-slim - -WORKDIR /app - -RUN apt-get update && apt-get install -y \ - build-essential \ - curl \ - software-properties-common \ - git \ - && rm -rf /var/lib/apt/lists/* - -COPY requirements.txt . - -RUN pip3 install -r requirements.txt - -COPY .streamlit ./.streamlit -COPY document_qa ./document_qa -COPY grobid_client_generic.py . -COPY client.py . -COPY streamlit_app.py . - -# extract version -COPY .git ./.git -RUN git rev-parse --short HEAD > revision.txt -RUN rm -rf ./.git - -EXPOSE 8501 - -HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health - -ENV PYTHONPATH "${PYTHONPATH}:." 
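# The ENTRYPOINT below serves the Streamlit app on 0.0.0.0:8501, matching the EXPOSE and
# HEALTHCHECK directives above; PYTHONPATH is extended with "." so the copied local
# packages (document_qa, the grobid client modules) resolve at runtime.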
- -ENTRYPOINT ["streamlit", "run", "streamlit_app.py", "--server.port=8501", "--server.address=0.0.0.0"] diff --git a/spaces/liimefruit/RVCollection/infer_pack/models_onnx.py b/spaces/liimefruit/RVCollection/infer_pack/models_onnx.py deleted file mode 100644 index 2cb5f674d79356cb5ea820d4afb4440331296b44..0000000000000000000000000000000000000000 --- a/spaces/liimefruit/RVCollection/infer_pack/models_onnx.py +++ /dev/null @@ -1,819 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, 
self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if 
g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - 
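# A minimal illustrative sketch of the idea behind SineGen above (assumed helper, not part
# of the original file): integrate the instantaneous frequency f0 / sampling_rate into a
# phase with a cumulative sum, read a sine off that phase, and replace unvoiced samples
# (f0 == 0) with low-amplitude noise. It keeps only the fundamental and skips the harmonic
# overtones, random initial phase and upsampling handled by the real class.
def _toy_sine_source(f0, sampling_rate, sine_amp=0.1, noise_std=0.003):
    # f0: (batch, length) fundamental frequency in Hz, already at the audio sample rate
    uv = (f0 > 0).float()                                  # voiced/unvoiced mask
    phase = torch.cumsum(f0 / sampling_rate, dim=1) % 1.0  # accumulated phase, in cycles
    sine = sine_amp * torch.sin(2 * math.pi * phase)       # fundamental component only
    noise_amp = uv * noise_std + (1 - uv) * sine_amp / 3   # same noise schedule as SineGen
    return sine * uv + noise_amp * torch.randn_like(sine)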
-class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: 
- x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, 
-1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 
1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/!!EXCLUSIVE!! Download Product Activation Key For Omsi Bus Simulator 2011 Offline.md b/spaces/lincquiQcaudo/Top-20-Diffusion/!!EXCLUSIVE!! Download Product Activation Key For Omsi Bus Simulator 2011 Offline.md deleted file mode 100644 index 7564a867d1ee8f19aa5b749187c6f2833fa3f343..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/!!EXCLUSIVE!! Download Product Activation Key For Omsi Bus Simulator 2011 Offline.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Download Product Activation Key For Omsi Bus Simulator 2011 Offline


            Download Zip ->->->-> https://bytlly.com/2uGyrc



-
            -
            -
            -

            diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Cubeiq40full [PORTABLE]crack.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Cubeiq40full [PORTABLE]crack.md deleted file mode 100644 index 8e7acf0252ab82c89cdadcade197aaf9f5aee966..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Cubeiq40full [PORTABLE]crack.md +++ /dev/null @@ -1,20 +0,0 @@ -
            -

            Cube-IQ: A Powerful Load Planning Software

            -

Cube-IQ is software that helps you optimize the loading of containers, trucks, rail cars, pallets, crates and boxes. It can handle complex loading scenarios, such as different container sizes, weight limits, center of gravity, rolls and multiple destinations. Cube-IQ can also generate loading diagrams, reports and labels for easy packing and tracking.

            -

            If you are looking for a reliable and efficient load planning solution, Cube-IQ is the software for you. You can download a free version of Cube-IQ from the official website[^1^] and try it for yourself. You can also contact the developer MagicLogic Optimization Inc. for more information and support.

            -

            cubeiq40fullcrack


            Download Filehttps://bytlly.com/2uGwTO



            -

Cube-IQ is not only a piece of software but also a community of users who share their experiences and feedback. You can join the Cube-IQ blog[^2^] to learn more about the features and benefits of Cube-IQ. You can also ask questions, share tips and tricks, and get updates on the latest versions of Cube-IQ.

            -

            Don't miss this opportunity to improve your load planning process with Cube-IQ. Download it today and see the difference!

            Cube-IQ: A Powerful Load Planning Software (continued)

            -

            In this article, we will explore some of the features and benefits of Cube-IQ that make it the ultimate load planning software for logistics professionals.

            -

            High-Speed Optimal Load Plans

            -

            Cube-IQ can generate optimal load plans for any type of container, pallet, crate or box in seconds. It can handle multiple containers of different sizes and shapes, as well as multiple destinations and priorities. Cube-IQ can also optimize the loading of rolls, cylinders, drums and other irregular items. Cube-IQ uses a proprietary algorithm that is fully developed in-house and constantly improved by ongoing research and development. Cube-IQ can achieve the best possible volume and weight utilization, as well as minimize the number of containers needed for a load.
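To make the utilization figures mentioned above concrete, here is an illustrative sketch. It is not Cube-IQ's proprietary optimizer and it ignores real 3D placement; it is a naive first-fit packing by volume with made-up container limits, shown only to clarify what volume and weight utilization per container mean.

```python
# Illustrative only: not Cube-IQ's algorithm. A naive first-fit packing by
# volume that reports per-container volume and weight utilization.
from dataclasses import dataclass
from typing import List

@dataclass
class Box:
    volume: float  # m^3
    weight: float  # kg

@dataclass
class Container:
    max_volume: float
    max_weight: float
    used_volume: float = 0.0
    used_weight: float = 0.0

    def fits(self, box: Box) -> bool:
        return (self.used_volume + box.volume <= self.max_volume
                and self.used_weight + box.weight <= self.max_weight)

def pack(boxes: List[Box], max_volume: float = 33.0, max_weight: float = 20000.0) -> List[Container]:
    containers: List[Container] = []
    for box in sorted(boxes, key=lambda b: b.volume, reverse=True):
        target = next((c for c in containers if c.fits(box)), None)
        if target is None:
            target = Container(max_volume, max_weight)
            containers.append(target)
        target.used_volume += box.volume
        target.used_weight += box.weight
    return containers

loads = pack([Box(1.2, 300), Box(0.8, 150), Box(2.5, 900)] * 10)
for i, c in enumerate(loads):
    print(f"container {i}: volume {c.used_volume / c.max_volume:.0%}, "
          f"weight {c.used_weight / c.max_weight:.0%}")
```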

            -

            Graphical User Interface

            -

            Cube-IQ has a complete graphical user interface that allows you to easily create and edit loading jobs with point-and-click and drag-and-drop actions. You can view the load plans in 3D, rotate them, zoom in and out, and change the colors and labels. You can also drag and drop items to modify the load plan manually, and Cube-IQ will update the load plan accordingly. Cube-IQ can also generate loading diagrams, reports and labels that you can print or export in various formats.

            -

            -

            Built-In Database Engine

            -

            Cube-IQ has a built-in database engine that stores all your data securely and efficiently. You can import and export data in CSV, Excel or XML formats, and share the database across multiple users with ODBC compliance. You can also customize the database fields and tables to suit your specific needs. Cube-IQ can also integrate with your existing ERP, WMS or TMS systems for full automation and data exchange.

            -

            Flexible Loading and Stacking Rules

            -

            Cube-IQ offers full flexibility in loading and stacking rules, which you can define for each orientation of a box. You can specify the maximum weight, height, length and width of each item, as well as the minimum and maximum number of items per layer or per container. You can also set rules for overhangs, gaps, orientation restrictions, stability factors, center of gravity, axle weight limits and more. Cube-IQ can also work in multiple units that can be switched on the fly or even mixed.
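As a rough illustration of the kind of per-orientation rule set described above, a configuration might look like the following sketch; the class and field names are hypothetical and do not reflect Cube-IQ's actual data model.

```python
# Hypothetical data structures for per-orientation loading rules; the names
# and limits are illustrative, not Cube-IQ's real configuration schema.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class OrientationRule:
    allowed: bool = True
    max_stack_weight_kg: float = float("inf")  # weight this face may carry on top
    max_layers: Optional[int] = None           # None = no layer limit
    overhang_mm: float = 0.0                   # permitted overhang per side

@dataclass
class ItemRules:
    length_mm: float
    width_mm: float
    height_mm: float
    weight_kg: float
    orientations: Dict[str, OrientationRule] = field(default_factory=dict)

pallet = ItemRules(1200, 800, 1500, 450, orientations={
    "upright": OrientationRule(max_stack_weight_kg=900.0, max_layers=2),
    "on_side": OrientationRule(allowed=False),
})
print(pallet.orientations["upright"].max_stack_weight_kg)  # 900.0
```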

            -

            These are just some of the features and benefits of Cube-IQ that make it a powerful load planning software. If you want to learn more about Cube-IQ or request a demo, please visit the official website[^1^] or contact MagicLogic Optimization Inc.

            -
            -
            \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Janj Tur Pai Wajeyan Naal Audio Free Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Janj Tur Pai Wajeyan Naal Audio Free Download.md deleted file mode 100644 index c22e7d5648f74a41f20562200acee8b0a3a14513..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Janj Tur Pai Wajeyan Naal Audio Free Download.md +++ /dev/null @@ -1,34 +0,0 @@ - -

            How to Download Janj Tur Pai Wajeyan Naal Audio for Free

            -

            If you are looking for a way to download Janj Tur Pai Wajeyan Naal audio for free, you have come to the right place. Janj Tur Pai Wajeyan Naal is a popular Punjabi song by Naseebo Lal and Arif Lohar, which was released in 2016. The song is a duet that expresses the love and longing of two lovers who are separated by distance and circumstances.

            -

            Janj Tur Pai Wajeyan Naal Audio Free Download


            Download File ––– https://bytlly.com/2uGxWl



            -

            In this article, we will show you how to download Janj Tur Pai Wajeyan Naal audio for free from various sources, such as YouTube, SoundCloud, and other websites. We will also provide you with some tips on how to optimize your download speed and quality, as well as how to avoid any legal issues or malware risks.

            -

            How to Download Janj Tur Pai Wajeyan Naal Audio from YouTube

            -

            One of the easiest ways to download Janj Tur Pai Wajeyan Naal audio for free is to use a YouTube downloader tool. There are many online tools and software that can help you download any YouTube video or audio in various formats and qualities. Here are the steps to follow:

            -
              -
1. Go to YouTube and search for Janj Tur Pai Wajeyan Naal. You will find several videos of the song, but we recommend choosing the official one from the Coke Studio channel.
2. Copy the URL of the video from the address bar of your browser.
3. Go to a YouTube downloader website, such as y2mate.com, mp3juices.cc, or ytmp3.cc.
4. Paste the URL of the video into the search box and click on the download button.
5. Select the format and quality of the audio that you want to download. We suggest choosing MP3 as the format and 320 kbps as the quality for the best sound experience.
6. Click on the download button again and wait for the file to be processed and saved on your device.
            -

            Congratulations! You have successfully downloaded Janj Tur Pai Wajeyan Naal audio for free from YouTube. You can now enjoy listening to it offline or share it with your friends.

            -

            How to Download Janj Tur Pai Wajeyan Naal Audio from SoundCloud

            -

            Another option to download Janj Tur Pai Wajeyan Naal audio for free is to use a SoundCloud downloader tool. SoundCloud is a popular platform for streaming and sharing music, podcasts, and other audio content. You can find many versions of Janj Tur Pai Wajeyan Naal on SoundCloud, but not all of them are available for download. Here are the steps to follow:

            -
              -
1. Go to SoundCloud and search for Janj Tur Pai Wajeyan Naal. You will find many results, but we recommend choosing the one from the Coke Studio Pakistan account.
2. Copy the URL of the track from the address bar of your browser.
3. Go to a SoundCloud downloader website, such as scdownloader.net, klickaud.net, or soundcloudtomp3.co.
4. Paste the URL of the track into the search box and click on the download button.
5. Select the format and quality of the audio that you want to download. We suggest choosing MP3 as the format and 320 kbps as the quality for the best sound experience.
6. Click on the download button again and wait for the file to be processed and saved on your device.
            -

            Congratulations! You have successfully downloaded Janj Tur Pai Wajeyan Naal audio for free from SoundCloud. You can now enjoy listening to it offline or share it with your friends.

            -

            -

            How to Download Janj Tur Pai Wajeyan Naal Audio from Other Websites

            -

            A third option to download Janj Tur Pai Wajeyan Naal audio for free is to use a general downloader tool. There are many websites that offer free downloads of music and other audio content from various sources, such as Dailymotion, Vimeo, Facebook, Instagram, etc. Here are the steps to follow:

            -
              -
            1. Go to any website that has Janj Tur Pai Wajeyan Naal

              -
              -
              \ No newline at end of file diff --git a/spaces/luisoala/glide-test/glide_text2im/gaussian_diffusion.py b/spaces/luisoala/glide-test/glide_text2im/gaussian_diffusion.py deleted file mode 100644 index 1c0f97783e7a336390324516f2ba8e89d1dcfaf1..0000000000000000000000000000000000000000 --- a/spaces/luisoala/glide-test/glide_text2im/gaussian_diffusion.py +++ /dev/null @@ -1,639 +0,0 @@ -""" -Simplified from https://github.com/openai/guided-diffusion/blob/main/guided_diffusion/gaussian_diffusion.py. -""" - -import math - -import numpy as np -import torch as th - - -def _warmup_beta(beta_start, beta_end, num_diffusion_timesteps, warmup_frac): - betas = beta_end * np.ones(num_diffusion_timesteps, dtype=np.float64) - warmup_time = int(num_diffusion_timesteps * warmup_frac) - betas[:warmup_time] = np.linspace(beta_start, beta_end, warmup_time, dtype=np.float64) - return betas - - -def get_beta_schedule(beta_schedule, *, beta_start, beta_end, num_diffusion_timesteps): - """ - This is the deprecated API for creating beta schedules. - - See get_named_beta_schedule() for the new library of schedules. - """ - if beta_schedule == "quad": - betas = ( - np.linspace( - beta_start ** 0.5, - beta_end ** 0.5, - num_diffusion_timesteps, - dtype=np.float64, - ) - ** 2 - ) - elif beta_schedule == "linear": - betas = np.linspace(beta_start, beta_end, num_diffusion_timesteps, dtype=np.float64) - elif beta_schedule == "warmup10": - betas = _warmup_beta(beta_start, beta_end, num_diffusion_timesteps, 0.1) - elif beta_schedule == "warmup50": - betas = _warmup_beta(beta_start, beta_end, num_diffusion_timesteps, 0.5) - elif beta_schedule == "const": - betas = beta_end * np.ones(num_diffusion_timesteps, dtype=np.float64) - elif beta_schedule == "jsd": # 1/T, 1/(T-1), 1/(T-2), ..., 1 - betas = 1.0 / np.linspace( - num_diffusion_timesteps, 1, num_diffusion_timesteps, dtype=np.float64 - ) - else: - raise NotImplementedError(beta_schedule) - assert betas.shape == (num_diffusion_timesteps,) - return betas - - -def get_named_beta_schedule(schedule_name, num_diffusion_timesteps): - """ - Get a pre-defined beta schedule for the given name. - - The beta schedule library consists of beta schedules which remain similar - in the limit of num_diffusion_timesteps. - Beta schedules may be added, but should not be removed or changed once - they are committed to maintain backwards compatibility. - """ - if schedule_name == "linear": - # Linear schedule from Ho et al, extended to work for any number of - # diffusion steps. - scale = 1000 / num_diffusion_timesteps - return get_beta_schedule( - "linear", - beta_start=scale * 0.0001, - beta_end=scale * 0.02, - num_diffusion_timesteps=num_diffusion_timesteps, - ) - elif schedule_name == "squaredcos_cap_v2": - return betas_for_alpha_bar( - num_diffusion_timesteps, - lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2, - ) - else: - raise NotImplementedError(f"unknown beta schedule: {schedule_name}") - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. - - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. 
- """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -class GaussianDiffusion: - """ - Utilities for training and sampling diffusion models. - - Original ported from this codebase: - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py#L42 - - :param betas: a 1-D numpy array of betas for each diffusion timestep, - starting at T and going to 1. - """ - - def __init__( - self, - *, - betas, - ): - # Use float64 for accuracy. - betas = np.array(betas, dtype=np.float64) - self.betas = betas - assert len(betas.shape) == 1, "betas must be 1-D" - assert (betas > 0).all() and (betas <= 1).all() - - self.num_timesteps = int(betas.shape[0]) - - alphas = 1.0 - betas - self.alphas_cumprod = np.cumprod(alphas, axis=0) - self.alphas_cumprod_prev = np.append(1.0, self.alphas_cumprod[:-1]) - self.alphas_cumprod_next = np.append(self.alphas_cumprod[1:], 0.0) - assert self.alphas_cumprod_prev.shape == (self.num_timesteps,) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.sqrt_alphas_cumprod = np.sqrt(self.alphas_cumprod) - self.sqrt_one_minus_alphas_cumprod = np.sqrt(1.0 - self.alphas_cumprod) - self.log_one_minus_alphas_cumprod = np.log(1.0 - self.alphas_cumprod) - self.sqrt_recip_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod) - self.sqrt_recipm1_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod - 1) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - self.posterior_variance = ( - betas * (1.0 - self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod) - ) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.posterior_log_variance_clipped = np.log( - np.append(self.posterior_variance[1], self.posterior_variance[1:]) - ) - self.posterior_mean_coef1 = ( - betas * np.sqrt(self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod) - ) - self.posterior_mean_coef2 = ( - (1.0 - self.alphas_cumprod_prev) * np.sqrt(alphas) / (1.0 - self.alphas_cumprod) - ) - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. - """ - mean = _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - variance = _extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = _extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def q_sample(self, x_start, t, noise=None): - """ - Diffuse the data for a given number of diffusion steps. - - In other words, sample from q(x_t | x_0). - - :param x_start: the initial data batch. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :param noise: if specified, the split-out normal noise. - :return: A noisy version of x_start. 
- """ - if noise is None: - noise = th.randn_like(x_start) - assert noise.shape == x_start.shape - return ( - _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - + _extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise - ) - - def q_posterior_mean_variance(self, x_start, x_t, t): - """ - Compute the mean and variance of the diffusion posterior: - - q(x_{t-1} | x_t, x_0) - - """ - assert x_start.shape == x_t.shape - posterior_mean = ( - _extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start - + _extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = _extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = _extract_into_tensor( - self.posterior_log_variance_clipped, t, x_t.shape - ) - assert ( - posterior_mean.shape[0] - == posterior_variance.shape[0] - == posterior_log_variance_clipped.shape[0] - == x_start.shape[0] - ) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, model, x, t, clip_denoised=True, denoised_fn=None, model_kwargs=None): - """ - Apply the model to get p(x_{t-1} | x_t), as well as a prediction of - the initial x, x_0. - - :param model: the model, which takes a signal and a batch of timesteps - as input. - :param x: the [N x C x ...] tensor at time t. - :param t: a 1-D Tensor of timesteps. - :param clip_denoised: if True, clip the denoised signal into [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. Applies before - clip_denoised. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :return: a dict with the following keys: - - 'mean': the model mean output. - - 'variance': the model variance output. - - 'log_variance': the log of 'variance'. - - 'pred_xstart': the prediction for x_0. - """ - if model_kwargs is None: - model_kwargs = {} - - B, C = x.shape[:2] - assert t.shape == (B,) - model_output = model(x, t, **model_kwargs) - if isinstance(model_output, tuple): - model_output, extra = model_output - else: - extra = None - - assert model_output.shape == (B, C * 2, *x.shape[2:]) - model_output, model_var_values = th.split(model_output, C, dim=1) - min_log = _extract_into_tensor(self.posterior_log_variance_clipped, t, x.shape) - max_log = _extract_into_tensor(np.log(self.betas), t, x.shape) - # The model_var_values is [-1, 1] for [min_var, max_var]. 
- frac = (model_var_values + 1) / 2 - model_log_variance = frac * max_log + (1 - frac) * min_log - model_variance = th.exp(model_log_variance) - - def process_xstart(x): - if denoised_fn is not None: - x = denoised_fn(x) - if clip_denoised: - return x.clamp(-1, 1) - return x - - pred_xstart = process_xstart(self._predict_xstart_from_eps(x_t=x, t=t, eps=model_output)) - model_mean, _, _ = self.q_posterior_mean_variance(x_start=pred_xstart, x_t=x, t=t) - - assert model_mean.shape == model_log_variance.shape == pred_xstart.shape == x.shape - return { - "mean": model_mean, - "variance": model_variance, - "log_variance": model_log_variance, - "pred_xstart": pred_xstart, - "extra": extra, - } - - def _predict_xstart_from_eps(self, x_t, t, eps): - assert x_t.shape == eps.shape - return ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * eps - ) - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart - ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def condition_mean(self, cond_fn, p_mean_var, x, t, model_kwargs=None): - """ - Compute the mean for the previous step, given a function cond_fn that - computes the gradient of a conditional log probability with respect to - x. In particular, cond_fn computes grad(log(p(y|x))), and we want to - condition on y. - - This uses the conditioning strategy from Sohl-Dickstein et al. (2015). - """ - gradient = cond_fn(x, t, **model_kwargs) - new_mean = p_mean_var["mean"].float() + p_mean_var["variance"] * gradient.float() - return new_mean - - def condition_score(self, cond_fn, p_mean_var, x, t, model_kwargs=None): - """ - Compute what the p_mean_variance output would have been, should the - model's score function be conditioned by cond_fn. - - See condition_mean() for details on cond_fn. - - Unlike condition_mean(), this instead uses the conditioning strategy - from Song et al (2020). - """ - alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape) - - eps = self._predict_eps_from_xstart(x, t, p_mean_var["pred_xstart"]) - eps = eps - (1 - alpha_bar).sqrt() * cond_fn(x, t, **model_kwargs) - - out = p_mean_var.copy() - out["pred_xstart"] = self._predict_xstart_from_eps(x, t, eps) - out["mean"], _, _ = self.q_posterior_mean_variance(x_start=out["pred_xstart"], x_t=x, t=t) - return out - - def p_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - ): - """ - Sample x_{t-1} from the model at the given timestep. - - :param model: the model to sample from. - :param x: the current tensor at x_{t-1}. - :param t: the value of t, starting at 0 for the first diffusion step. - :param clip_denoised: if True, clip the x_start prediction to [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. - :param cond_fn: if not None, this is a gradient function that acts - similarly to the model. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :return: a dict containing the following keys: - - 'sample': a random sample from the model. - - 'pred_xstart': a prediction of x_0. 
- """ - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - noise = th.randn_like(x) - nonzero_mask = ( - (t != 0).float().view(-1, *([1] * (len(x.shape) - 1))) - ) # no noise when t == 0 - if cond_fn is not None: - out["mean"] = self.condition_mean(cond_fn, out, x, t, model_kwargs=model_kwargs) - sample = out["mean"] + nonzero_mask * th.exp(0.5 * out["log_variance"]) * noise - return {"sample": sample, "pred_xstart": out["pred_xstart"]} - - def p_sample_loop( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - ): - """ - Generate samples from the model. - - :param model: the model module. - :param shape: the shape of the samples, (N, C, H, W). - :param noise: if specified, the noise from the encoder to sample. - Should be of the same shape as `shape`. - :param clip_denoised: if True, clip x_start predictions to [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. - :param cond_fn: if not None, this is a gradient function that acts - similarly to the model. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :param device: if specified, the device to create the samples on. - If not specified, use a model parameter's device. - :param progress: if True, show a tqdm progress bar. - :return: a non-differentiable batch of samples. - """ - final = None - for sample in self.p_sample_loop_progressive( - model, - shape, - noise=noise, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - device=device, - progress=progress, - ): - final = sample - return final["sample"] - - def p_sample_loop_progressive( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - ): - """ - Generate samples from the model and yield intermediate samples from - each timestep of diffusion. - - Arguments are the same as p_sample_loop(). - Returns a generator over dicts, where each dict is the return value of - p_sample(). - """ - if device is None: - device = next(model.parameters()).device - assert isinstance(shape, (tuple, list)) - if noise is not None: - img = noise - else: - img = th.randn(*shape, device=device) - indices = list(range(self.num_timesteps))[::-1] - - if progress: - # Lazy import so that we don't depend on tqdm. - from tqdm.auto import tqdm - - indices = tqdm(indices) - - for i in indices: - t = th.tensor([i] * shape[0], device=device) - with th.no_grad(): - out = self.p_sample( - model, - img, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - ) - yield out - img = out["sample"] - - def ddim_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - eta=0.0, - ): - """ - Sample x_{t-1} from the model using DDIM. - - Same usage as p_sample(). - """ - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - if cond_fn is not None: - out = self.condition_score(cond_fn, out, x, t, model_kwargs=model_kwargs) - - # Usually our model outputs epsilon, but we re-derive it - # in case we used x_start or x_prev prediction. 
- eps = self._predict_eps_from_xstart(x, t, out["pred_xstart"]) - - alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape) - alpha_bar_prev = _extract_into_tensor(self.alphas_cumprod_prev, t, x.shape) - sigma = ( - eta - * th.sqrt((1 - alpha_bar_prev) / (1 - alpha_bar)) - * th.sqrt(1 - alpha_bar / alpha_bar_prev) - ) - # Equation 12. - noise = th.randn_like(x) - mean_pred = ( - out["pred_xstart"] * th.sqrt(alpha_bar_prev) - + th.sqrt(1 - alpha_bar_prev - sigma ** 2) * eps - ) - nonzero_mask = ( - (t != 0).float().view(-1, *([1] * (len(x.shape) - 1))) - ) # no noise when t == 0 - sample = mean_pred + nonzero_mask * sigma * noise - return {"sample": sample, "pred_xstart": out["pred_xstart"]} - - def ddim_reverse_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - eta=0.0, - ): - """ - Sample x_{t+1} from the model using DDIM reverse ODE. - """ - assert eta == 0.0, "Reverse ODE only for deterministic path" - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - if cond_fn is not None: - out = self.condition_score(cond_fn, out, x, t, model_kwargs=model_kwargs) - # Usually our model outputs epsilon, but we re-derive it - # in case we used x_start or x_prev prediction. - eps = ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x.shape) * x - - out["pred_xstart"] - ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x.shape) - alpha_bar_next = _extract_into_tensor(self.alphas_cumprod_next, t, x.shape) - - # Equation 12. reversed - mean_pred = out["pred_xstart"] * th.sqrt(alpha_bar_next) + th.sqrt(1 - alpha_bar_next) * eps - - return {"sample": mean_pred, "pred_xstart": out["pred_xstart"]} - - def ddim_sample_loop( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - eta=0.0, - ): - """ - Generate samples from the model using DDIM. - - Same usage as p_sample_loop(). - """ - final = None - for sample in self.ddim_sample_loop_progressive( - model, - shape, - noise=noise, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - device=device, - progress=progress, - eta=eta, - ): - final = sample - return final["sample"] - - def ddim_sample_loop_progressive( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - eta=0.0, - ): - """ - Use DDIM to sample from the model and yield intermediate samples from - each timestep of DDIM. - - Same usage as p_sample_loop_progressive(). - """ - if device is None: - device = next(model.parameters()).device - assert isinstance(shape, (tuple, list)) - if noise is not None: - img = noise - else: - img = th.randn(*shape, device=device) - indices = list(range(self.num_timesteps))[::-1] - - if progress: - # Lazy import so that we don't depend on tqdm. - from tqdm.auto import tqdm - - indices = tqdm(indices) - - for i in indices: - t = th.tensor([i] * shape[0], device=device) - with th.no_grad(): - out = self.ddim_sample( - model, - img, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - eta=eta, - ) - yield out - img = out["sample"] - - -def _extract_into_tensor(arr, timesteps, broadcast_shape): - """ - Extract values from a 1-D numpy array for a batch of indices. 
- - :param arr: the 1-D numpy array. - :param timesteps: a tensor of indices into the array to extract. - :param broadcast_shape: a larger shape of K dimensions with the batch - dimension equal to the length of timesteps. - :return: a tensor of shape [batch_size, 1, ...] where the shape has K dims. - """ - res = th.from_numpy(arr).to(device=timesteps.device)[timesteps].float() - while len(res.shape) < len(broadcast_shape): - res = res[..., None] - return res + th.zeros(broadcast_shape, device=timesteps.device) diff --git a/spaces/luoshang/Real-CUGAN/upcunet_v3.py b/spaces/luoshang/Real-CUGAN/upcunet_v3.py deleted file mode 100644 index f7919a6cc9efe3b8af73a73e30825a4c7d7d76da..0000000000000000000000000000000000000000 --- a/spaces/luoshang/Real-CUGAN/upcunet_v3.py +++ /dev/null @@ -1,714 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F -import os, sys -import numpy as np - -root_path = os.path.abspath('.') -sys.path.append(root_path) - - -class SEBlock(nn.Module): - def __init__(self, in_channels, reduction=8, bias=False): - super(SEBlock, self).__init__() - self.conv1 = nn.Conv2d(in_channels, in_channels // reduction, 1, 1, 0, bias=bias) - self.conv2 = nn.Conv2d(in_channels // reduction, in_channels, 1, 1, 0, bias=bias) - - def forward(self, x): - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - x0 = torch.mean(x.float(), dim=(2, 3), keepdim=True).half() - else: - x0 = torch.mean(x, dim=(2, 3), keepdim=True) - x0 = self.conv1(x0) - x0 = F.relu(x0, inplace=True) - x0 = self.conv2(x0) - x0 = torch.sigmoid(x0) - x = torch.mul(x, x0) - return x - - def forward_mean(self, x, x0): - x0 = self.conv1(x0) - x0 = F.relu(x0, inplace=True) - x0 = self.conv2(x0) - x0 = torch.sigmoid(x0) - x = torch.mul(x, x0) - return x - - -class UNetConv(nn.Module): - def __init__(self, in_channels, mid_channels, out_channels, se): - super(UNetConv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d(in_channels, mid_channels, 3, 1, 0), - nn.LeakyReLU(0.1, inplace=True), - nn.Conv2d(mid_channels, out_channels, 3, 1, 0), - nn.LeakyReLU(0.1, inplace=True), - ) - if se: - self.seblock = SEBlock(out_channels, reduction=8, bias=True) - else: - self.seblock = None - - def forward(self, x): - z = self.conv(x) - if self.seblock is not None: - z = self.seblock(z) - return z - - -class UNet1(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet1, self).__init__() - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 128, 64, se=True) - self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv3 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - def forward_a(self, x): - x1 = self.conv1(x) - x2 = 
self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x1, x2): - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - -class UNet1x3(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet1x3, self).__init__() - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 128, 64, se=True) - self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv3 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 5, 3, 2) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - def forward_a(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x1, x2): - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - -class UNet2(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet2, self).__init__() - - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 64, 128, se=True) - self.conv2_down = nn.Conv2d(128, 128, 2, 2, 0) - self.conv3 = UNetConv(128, 256, 128, se=True) - self.conv3_up = nn.ConvTranspose2d(128, 128, 2, 2, 0) - self.conv4 = UNetConv(128, 64, 64, se=True) - self.conv4_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv5 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - - x3 = self.conv2_down(x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - x3 = self.conv3(x3) - x3 = self.conv3_up(x3) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - - x2 = F.pad(x2, (-4, -4, -4, -4)) - x4 = self.conv4(x2 + x3) - x4 = self.conv4_up(x4) - x4 = F.leaky_relu(x4, 0.1, inplace=True) - - x1 = F.pad(x1, (-16, -16, -16, -16)) - x5 = self.conv5(x1 + x4) - x5 = F.leaky_relu(x5, 0.1, inplace=True) - - z = self.conv_bottom(x5) - return z - - def forward_a(self, x): # conv234结尾有se - x1 = self.conv1(x) - x2 = 
self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x2): # conv234结尾有se - x3 = self.conv2_down(x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - x3 = self.conv3.conv(x3) - return x3 - - def forward_c(self, x2, x3): # conv234结尾有se - x3 = self.conv3_up(x3) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - - x2 = F.pad(x2, (-4, -4, -4, -4)) - x4 = self.conv4.conv(x2 + x3) - return x4 - - def forward_d(self, x1, x4): # conv234结尾有se - x4 = self.conv4_up(x4) - x4 = F.leaky_relu(x4, 0.1, inplace=True) - - x1 = F.pad(x1, (-16, -16, -16, -16)) - x5 = self.conv5(x1 + x4) - x5 = F.leaky_relu(x5, 0.1, inplace=True) - - z = self.conv_bottom(x5) - return z - - -class UpCunet2x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet2x, self).__init__() - self.unet1 = UNet1(in_channels, out_channels, deconv=True) - self.unet2 = UNet2(in_channels, out_channels, deconv=False) - - def forward(self, x, tile_mode): # 1.7G - n, c, h0, w0 = x.shape - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 2 + 1) * 2 - pw = ((w0 - 1) // 2 + 1) * 2 - x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 2, :w0 * 2] - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_h = (h0 - 1) // 2 * 2 + 2 # 能被2整除 - else: - crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_w = (w0 - 1) // 2 * 2 + 2 # 能被2整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.2G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 36, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 36, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 36, j:j + crop_size[1] + 36] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - 
else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 36, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - opt_res_dict[i][j] = x_crop - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 2 - 72, w * 2 - 72)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - res[:, :, i * 2:i * 2 + h1 * 2 - 72, j * 2:j * 2 + w1 * 2 - 72] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 2, :w0 * 2] - return res # - - -class UpCunet3x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet3x, self).__init__() - self.unet1 = UNet1x3(in_channels, out_channels, deconv=True) - self.unet2 = UNet2(in_channels, out_channels, deconv=False) - - def forward(self, x, tile_mode): # 1.7G - n, c, h0, w0 = x.shape - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 4 + 1) * 4 - pw = ((w0 - 1) // 4 + 1) * 4 - x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 3, :w0 * 3] - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 8 * 8 + 8) // 2 # 减半后能被4整除,所以要先被8整除 - crop_size_h = (h0 - 1) // 4 * 4 + 4 # 能被4整除 - else: - crop_size_h = ((h0 - 1) // 8 * 8 + 8) // 2 # 减半后能被4整除,所以要先被8整除 - crop_size_w = (w0 - 1) // 4 * 4 + 4 # 能被4整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 2, ((w0 - 1) // 8 * 8 + 8) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) 
// 12 * 12 + 12) // 3, ((w0 - 1) // 12 * 12 + 12) // 3) # 4.2G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 16 * 16 + 16) // 4, ((w0 - 1) // 16 * 16 + 16) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 28, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 28, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 28, j:j + crop_size[1] + 28] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 28, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - 
opt_res_dict[i][j] = x_crop # - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 3 - 84, w * 3 - 84)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - res[:, :, i * 3:i * 3 + h1 * 3 - 84, j * 3:j * 3 + w1 * 3 - 84] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 3, :w0 * 3] - return res - - -class UpCunet4x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet4x, self).__init__() - self.unet1 = UNet1(in_channels, 64, deconv=True) - self.unet2 = UNet2(64, 64, deconv=False) - self.ps = nn.PixelShuffle(2) - self.conv_final = nn.Conv2d(64, 12, 3, 1, padding=0, bias=True) - - def forward(self, x, tile_mode): - n, c, h0, w0 = x.shape - x00 = x - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 2 + 1) * 2 - pw = ((w0 - 1) // 2 + 1) * 2 - x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - x = self.conv_final(x) - x = F.pad(x, (-1, -1, -1, -1)) - x = self.ps(x) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 4, :w0 * 4] - x += F.interpolate(x00, scale_factor=4, mode='nearest') - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_h = (h0 - 1) // 2 * 2 + 2 # 能被2整除 - else: - crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_w = (w0 - 1) // 2 * 2 + 2 # 能被2整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.1G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 38, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 38, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 38, j:j + crop_size[1] + 38] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, 
dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 38, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - x_crop = self.conv_final(x_crop) - x_crop = F.pad(x_crop, (-1, -1, -1, -1)) - x_crop = self.ps(x_crop) - opt_res_dict[i][j] = x_crop - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 4 - 152, w * 4 - 152)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - # print(opt_res_dict[i][j].shape,res[:, :, i * 4:i * 4 + h1 * 4 - 144, j * 4:j * 4 + w1 * 4 - 144].shape) - res[:, :, i * 4:i * 4 + h1 * 4 - 152, j * 4:j * 4 + w1 * 4 - 152] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 4, :w0 * 4] - res += F.interpolate(x00, scale_factor=4, mode='nearest') - return res # - - -class RealWaifuUpScaler(object): - def __init__(self, scale, weight_path, half, device): - weight = torch.load(weight_path, map_location="cpu") - self.model = eval("UpCunet%sx" % scale)() - if (half == True): - self.model = self.model.half().to(device) - else: - self.model = self.model.to(device) - self.model.load_state_dict(weight, strict=True) - self.model.eval() - self.half = half - self.device = device - - def np2tensor(self, np_frame): - if (self.half == False): - return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).float() / 255 - else: - return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).half() / 255 - - def tensor2np(self, tensor): - if (self.half == False): - return ( - np.transpose((tensor.data.squeeze() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), (1, 2, 0))) - else: - return (np.transpose((tensor.data.squeeze().float() * 
255.0).round().clamp_(0, 255).byte().cpu().numpy(), - (1, 2, 0))) - - def __call__(self, frame, tile_mode): - with torch.no_grad(): - tensor = self.np2tensor(frame) - result = self.tensor2np(self.model(tensor, tile_mode)) - return result - - -if __name__ == "__main__": - ###########inference_img - import time, cv2, sys - from time import time as ttime - - for weight_path, scale in [("weights_v3/up2x-latest-denoise3x.pth", 2), ("weights_v3/up3x-latest-denoise3x.pth", 3), - ("weights_v3/up4x-latest-denoise3x.pth", 4)]: - for tile_mode in [0, 1, 2, 3, 4]: - upscaler2x = RealWaifuUpScaler(scale, weight_path, half=True, device="cuda:0") - input_dir = "%s/input_dir1" % root_path - output_dir = "%s/opt-dir-all-test" % root_path - os.makedirs(output_dir, exist_ok=True) - for name in os.listdir(input_dir): - print(name) - tmp = name.split(".") - inp_path = os.path.join(input_dir, name) - suffix = tmp[-1] - prefix = ".".join(tmp[:-1]) - tmp_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix)) - print(inp_path, tmp_path) - # 支持中文路径 - # os.link(inp_path, tmp_path)#win用硬链接 - os.symlink(inp_path, tmp_path) # linux用软链接 - frame = cv2.imread(tmp_path)[:, :, [2, 1, 0]] - t0 = ttime() - result = upscaler2x(frame, tile_mode=tile_mode)[:, :, ::-1] - t1 = ttime() - print(prefix, "done", t1 - t0) - tmp_opt_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix)) - cv2.imwrite(tmp_opt_path, result) - n = 0 - while (1): - if (n == 0): - suffix = "_%sx_tile%s.png" % (scale, tile_mode) - else: - suffix = "_%sx_tile%s_%s.png" % (scale, tile_mode, n) # - if (os.path.exists(os.path.join(output_dir, prefix + suffix)) == False): - break - else: - n += 1 - final_opt_path = os.path.join(output_dir, prefix + suffix) - os.rename(tmp_opt_path, final_opt_path) - os.remove(tmp_path) diff --git a/spaces/ma-xu/LIVE/thrust/CODE_OF_CONDUCT.md b/spaces/ma-xu/LIVE/thrust/CODE_OF_CONDUCT.md deleted file mode 100644 index 25140337afb95175f2082389a4f91161cdff779b..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,59 +0,0 @@ -# Contributor Covenant Code of Conduct - -## Overview - -Define the code of conduct followed and enforced for Thrust - -### Intended audience - -* Community -* Developers -* Project Leads - -## Our Pledge - -In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. 
- -## Our Standards - -Examples of behavior that contributes to creating a positive environment include: - -- Using welcoming and inclusive language -- Being respectful of differing viewpoints and experiences -- Gracefully accepting constructive criticism -- Focusing on what is best for the community -- Showing empathy towards other community members - -Examples of unacceptable behavior by participants include: - -- The use of sexualized language or imagery and unwelcome sexual attention or advances -- Trolling, insulting/derogatory comments, and personal or political attacks -- Public or private harassment -- Publishing others’ private information, such as a physical or electronic address, without explicit permission -- Other conduct which could reasonably be considered inappropriate in a professional setting - -## Our Responsibilities - -Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. - -Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. - -## Scope - -This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [cpp-conduct@nvidia.com](mailto:cpp-conduct@nvidia.com) All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. - -Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project’s leadership. 
- -## Attribution - -This Code of Conduct was taken from the [NVIDIA RAPIDS](https://docs.rapids.ai/resources/conduct/) project, which was adapted from the [Contributor Covenant](https://www.contributor-covenant.org/), version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html - -For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq - -## Contact - -If you need to contact the Thrust team, please reach out to cpp-conduct@nvidia.com diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/scatter.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/scatter.h deleted file mode 100644 index 3ba0a4b743b3a4def4e17639cb3dcc263bddb788..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/scatter.h +++ /dev/null @@ -1,106 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- * - ******************************************************************************/ -#pragma once - - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include -#include - -namespace thrust -{ -namespace cuda_cub { - -template -void __host__ __device__ -scatter(execution_policy& policy, - ItemsIt first, - ItemsIt last, - MapIt map, - ResultIt result) -{ - cuda_cub::transform(policy, - first, - last, - thrust::make_permutation_iterator(result, map), - identity()); -} - -template -void __host__ __device__ -scatter_if(execution_policy& policy, - ItemsIt first, - ItemsIt last, - MapIt map, - StencilIt stencil, - ResultIt result, - Predicate predicate) -{ - cuda_cub::transform_if(policy, - first, - last, - stencil, - thrust::make_permutation_iterator(result, map), - identity(), - predicate); -} - -template -void __host__ __device__ -scatter_if(execution_policy& policy, - ItemsIt first, - ItemsIt last, - MapIt map, - StencilIt stencil, - ResultIt result) -{ - cuda_cub::scatter_if(policy, - first, - last, - map, - stencil, - result, - identity()); -} - - -} // namespace cuda_cub -} // end namespace thrust -#endif diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/adjacent_difference.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/adjacent_difference.h deleted file mode 100644 index c6f6c72820a2fe158d7389d03d49334e1675438b..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/adjacent_difference.h +++ /dev/null @@ -1,44 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// the purpose of this header is to #include the adjacent_difference.h header -// of the sequential, host, and device systems. It should be #included in any -// code which uses adl to dispatch adjacent_difference - -#include - -// SCons can't see through the #defines below to figure out what this header -// includes, so we fake it out by specifying all possible files we might end up -// including inside an #if 0. 
-#if 0 -#include -#include -#include -#include -#endif - -#define __THRUST_HOST_SYSTEM_ADJACENT_DIFFERENCE_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/adjacent_difference.h> -#include __THRUST_HOST_SYSTEM_ADJACENT_DIFFERENCE_HEADER -#undef __THRUST_HOST_SYSTEM_ADJACENT_DIFFERENCE_HEADER - -#define __THRUST_DEVICE_SYSTEM_ADJACENT_DIFFERENCE_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/adjacent_difference.h> -#include __THRUST_DEVICE_SYSTEM_ADJACENT_DIFFERENCE_HEADER -#undef __THRUST_DEVICE_SYSTEM_ADJACENT_DIFFERENCE_HEADER - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/assign_value.h b/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/assign_value.h deleted file mode 100644 index cf244a02193211b9b4e4f07a6bc9b975d50e5388..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/assign_value.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits assign_value -#include - diff --git a/spaces/matthoffner/starchat-ui/prettier.config.js b/spaces/matthoffner/starchat-ui/prettier.config.js deleted file mode 100644 index daf4139177fd80181d50b1542647a69cd76fcac4..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/starchat-ui/prettier.config.js +++ /dev/null @@ -1,25 +0,0 @@ -module.exports = { - trailingComma: 'all', - singleQuote: true, - plugins: [ - 'prettier-plugin-tailwindcss', - '@trivago/prettier-plugin-sort-imports', - ], - importOrder: [ - 'react', // React - '^react-.*$', // React-related imports - '^next', // Next-related imports - '^next-.*$', // Next-related imports - '^next/.*$', // Next-related imports - '^.*/hooks/.*$', // Hooks - '^.*/services/.*$', // Services - '^.*/utils/.*$', // Utils - '^.*/types/.*$', // Types - '^.*/pages/.*$', // Components - '^.*/components/.*$', // Components - '^[./]', // Other imports - '.*', // Any uncaught imports - ], - importOrderSeparation: true, - importOrderSortSpecifiers: true, -}; diff --git a/spaces/merve/GPT-2-story-gen/app.py b/spaces/merve/GPT-2-story-gen/app.py deleted file mode 100644 index bbefb7041869a3331a5b4727976b3eb46fd4dae3..0000000000000000000000000000000000000000 --- a/spaces/merve/GPT-2-story-gen/app.py +++ /dev/null @@ -1,12 +0,0 @@ -import gradio as gr -from gradio import inputs -description = "Story generation with GPT-2" -interface = gr.Interface.load("huggingface/pranavpsv/gpt2-genre-story-generator", - title = "Story Generation with GPT-2", - inputs = [ - gr.inputs.Textbox(lines=7, label="Story"), - ], - description=description, - examples=[["Adventurer is approached by a mysterious stranger in the tavern for a new quest"]] -) -interface.launch() \ No newline at end of file diff --git a/spaces/merve/data-leak/source/private-and-fair/top-bot-digits.js b/spaces/merve/data-leak/source/private-and-fair/top-bot-digits.js deleted file mode 100644 index 
bc2f85ec8cb3b5544245f159aa62ff2fbffbcbb5..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/source/private-and-fair/top-bot-digits.js +++ /dev/null @@ -1,66 +0,0 @@ - -!(async function(){ - await util.getFile(`cns-cache/mnist_train_raw_3.npy`) - var digitMetadata = await util.getFile('mnist_train.csv') - var {byLabel} = util.decorateDigitMetadata(digitMetadata) - - var sel = d3.select('.top-bot-digits').html('') - .at({role: 'graphics-document', 'aria-label': `The twenty-five MNIST 3 digits most and least senstive to higher and lower privacy. The digits most sensitive to higher privacy are much more poorly drawn than the onces least sensitive to higher privacy.`}) - - var digitSel = sel.append('div') - var buttonSel = sel.append('div.digit-button-container') - .appendMany('div.button', d3.range(10)) - .text(d => d) - .on('click', d => drawClass(byLabel[d])) - - drawClass(byLabel[3]) - - async function drawClass(digitClass){ - buttonSel.classed('active', d => d == digitClass.key) - await util.getFile(`cns-cache/mnist_train_raw_${digitClass.key}.npy`) - - var nRows = 5 - var nCols = 5 - - var bot = _.sortBy(digitClass, d => +d.priv_order).slice(0, nRows*nCols) - var top = _.sortBy(digitClass, d => -d.priv_order).slice(0, nRows*nCols) - - digitSel.html('').append('div') - .st({maxWidth: 640, margin: '0 auto'}) - .appendMany('div', [bot, top]) - .st({display: 'inline-block'}) - .each(drawDigitBlock) - - - function drawDigitBlock(digits, isBot){ - var s = 2 - - var sel = d3.select(this).append('div') - - var c = d3.conventions({ - sel, - width: s*29*nCols, - height: s*29*nRows, - layers: 'cs', - margin: {top: 30, bottom: 10, right: 10, left: 10} - }) - - var ctx = c.layers[0] - - digits.forEach((d, i) => { - util.drawDigit( - ctx, - +d.i, - s, - (i % nCols)*s*29, - Math.floor(i/nCols)*s*29 - ) - }) - - c.svg.append('text') - .text(isBot ? 'Least sensitive to higher privacy' : 'Most sensitive to higher privacy') - .at({dy: '-.4em', textAnchor: 'middle', x: c.width/2, fontWeight: 600, fontSize: 14}) - } - } - -})() \ No newline at end of file diff --git a/spaces/merve/dataset-worldviews/public/dataset-worldviews/interface-images.js b/spaces/merve/dataset-worldviews/public/dataset-worldviews/interface-images.js deleted file mode 100644 index 5e7040a3a979423e2c88cdbf8c4e5e840a5b35d0..0000000000000000000000000000000000000000 --- a/spaces/merve/dataset-worldviews/public/dataset-worldviews/interface-images.js +++ /dev/null @@ -1,8 +0,0 @@ -function createInterfaceImage(divName){ - - var c = d3.conventions({ - sel: d3.select('.' + divName).html('') - }) - - -} \ No newline at end of file diff --git a/spaces/merve/fill-in-the-blank/public/anonymization/make-sel.js b/spaces/merve/fill-in-the-blank/public/anonymization/make-sel.js deleted file mode 100644 index 3b35b931008be7afe990694afdf232d05d5f4ee2..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/public/anonymization/make-sel.js +++ /dev/null @@ -1,78 +0,0 @@ -window.makeSel = function(){ - function ttFmt(d){ - var ttSel = d3.select('.tooltip').html('') - - var ageStr = d.age + ' year old' - if (slides.curSlide.index == 4){ - ageStr = ageStr + ' born in the ' + ['spring', 'summer', 'fall', 'winter'][d.season] - } - ttSel.append('div').html(` - ${ageStr} from ${d.state} who - ${d.plagerized ? 
- 'plagiarized' : - 'never plagiarized'} - `) - - if (slides.curSlide.index < 6) return - - var isHeads = d.coinVals[estimates.active.index] < sliders.headsProb - ttSel.append('div').html(` - They flipped - ${isHeads ? 'heads' : 'tails'} - and said they had - ${d.plagerized || isHeads ? - 'plagiarized' : - 'never plagiarized'} - `) - .st({marginTop: 10}) - } - - var rectAt = {} - var rs = (axii.bw - 10)*2 - rectAt.ageState = {width: rs, height: rs, x: -rs/2, y: -rs/2} - var uniqueBox = c.svg.appendMany('rect.unique.init-hidden', students.byAgeState.filter(d => d.length == 1)) - .translate(d => d.pos) - .at(rectAt.ageState) - - var rs = axii.bw/4 + 5.5 - rectAt.ageStateSeason = {width: rs, height: rs, x: Math.round(-rs/2), y: 4} - var uniqueSeasonBox = c.svg.appendMany( - 'rect.unique.init-hidden', - students.byAgeStateSeason.filter(d => d.length == 1 && d[0].group.ageState.length > 1)) - .translate(d => d.pos) - .at(rectAt.ageStateSeason) - - // number of uniquely id'd students - // console.log(uniqueSeasonBox.size()) - - var studentGroup = c.svg.append('g') - .at({width: 500, height: 500}) - - var student = studentGroup.appendMany('g.student', students.all) - .call(d3.attachTooltip) - .on('mouseover', ttFmt) - .translate(d => d.isAdditionalStudent ? [0,0]: d.pos.grid) - .classed('inactive', d => d.isAdditionalStudent) - - var rs = 16 - var flipCircle = student.append('circle') - .at({transform: 'scale(.1)'}) - .at({r: 9, fill: '#fff'}) - .at({stroke: '#b0b' }) - - var circle = student.append('circle').at({ - r: 5, - fill: d => d.plagerized ? '#f0f' : '#ccc', - stroke: d => d.plagerized ? '#b0b' : '#aaa', - strokeWidth: 1, - }) - - - - addSwoop(c) - - return {student, studentGroup, circle, flipCircle, rectAt, uniqueBox, uniqueSeasonBox} -} - - -if (window.init) window.init() diff --git a/spaces/merve/fill-in-the-blank/source/_posts/2019-10-03-fairness.html b/spaces/merve/fill-in-the-blank/source/_posts/2019-10-03-fairness.html deleted file mode 100644 index e87b79e7fec2d286610661ddae8970bb7c9fe1dc..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/_posts/2019-10-03-fairness.html +++ /dev/null @@ -1,219 +0,0 @@ - ---- -permalink: /measuring-fairness/ -template: post.html - -title: Considering Model Fairness -title: Measuring Fairness -summary: There are multiple ways to measure accuracy. No matter how we build our model, accuracy across these measures will vary when applied to different groups of people. -summaryalt: There are multiple ways to assess machine learning models, such as its overall accuracy. Another important perspective to consider is the fairness of the model with respect to different groups of people or different contexts of use. -shareimg: https://pair.withgoogle.com/explorables/images/measuring-fairness.png -date: 2021-05-01 ---- - - - - - -
              -
              -
              - - -
              -

              Measuring Fairness

              - -

              How do you make sure a model works equally well for different groups of people? It turns out that in many situations, this is harder than you might think. - -

              The problem is that there are different ways to measure the accuracy of a model, and often it's mathematically impossible for them all to be equal across groups. - -

              We'll illustrate how this happens by creating a (fake) medical model to screen these people for a disease. -

              - - -
              -

              Ground Truth

              - -

              About half of these people actually have the disease a; half of them don't b. -

              - - -
              -

              Model Predictions

              - -

              In a perfect world, only sick people would test positive for the disease and only healthy people would test negative. -

              - - -
              -

              Model Mistakes

              - -

              But models and tests aren't perfect. - -

              The model might make a mistake and mark a sick person as healthy c. - -

              Or the opposite: marking a healthy person as sick f. -

              - - -

              Never Miss the Disease...

              - -

              If there's a simple follow-up test, we could have the model aggressively call close cases so it rarely misses the disease. - -

              We can quantify this by measuring the percentage of sick people a who test positive g. - -

              -
              - - -
              -

              ...Or Avoid Overcalling?

              - -

              On the other hand, if there isn't a secondary test, or the treatment uses a drug with a limited supply, we might care more about the percentage of people with positive tests who are actually sick g . - -

              - -

              These issues and trade-offs in model optimization aren't new, but they're brought into focus when we have the ability to fine-tune exactly how aggressively disease is diagnosed. - -
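The two percentages described above are what practitioners call recall and precision, and the trade-off between them is easy to check in a few lines of code. The sketch below uses made-up scores rather than the data behind these figures, so treat it purely as an illustration: calling cases more aggressively (a lower threshold) catches more sick people but makes more of the positive calls wrong.

```python
# Hypothetical scores for ten people; nothing here comes from the explorable's data.
sick_scores    = [0.9, 0.8, 0.7, 0.6, 0.4]   # people who really have the disease
healthy_scores = [0.5, 0.3, 0.3, 0.2, 0.1]   # people who don't

def recall_and_precision(threshold):
    tp = sum(s >= threshold for s in sick_scores)     # sick people flagged positive
    fn = len(sick_scores) - tp                        # sick people the model misses
    fp = sum(s >= threshold for s in healthy_scores)  # healthy people flagged positive
    recall = tp / (tp + fn)                           # % of sick people who test positive
    precision = tp / (tp + fp) if (tp + fp) else float("nan")  # % of positives who are sick
    return recall, precision

# A lower threshold means calling cases more aggressively.
for threshold in (0.3, 0.5, 0.7):
    r, p = recall_and_precision(threshold)
    print(f"threshold={threshold:.1f}  recall={r:.2f}  precision={p:.2f}")
```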

              - - Try adjusting how aggressive the model is in diagnosing the disease -
              - - -
              -

              Subgroup Analysis

              - -

              Things get even more complicated when we check if the model treats different groups fairly.¹ - -

              Whatever we decide on in terms of trade-offs between these metrics, we'd probably like them to be roughly even across different groups of people. - -

              If we're trying to evenly allocate resources, having the model miss more cases in children than adults would be bad! ² -

              - - -
              -

              Base Rates

              - -

              If you look carefully, you'll see that the disease is more prevalent in children. That is, the "base rate" of the disease is different across groups. - -

              The fact that the base rates are different makes the situation surprisingly tricky. For one thing, even though the test catches the same percentage of sick adults and sick children, an adult who tests positive is less likely to have the disease than a child who tests positive. -
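This effect is just Bayes' rule at work, and you can verify it numerically. The following sketch uses hypothetical prevalences and test accuracy, not the numbers behind the people shown here: both groups get a test with identical sensitivity and specificity, yet a positive result carries a different probability of disease in each group.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(actually sick | tested positive), by Bayes' rule."""
    true_positives  = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

sensitivity, specificity = 0.9, 0.9          # the same (hypothetical) test for everyone
for group, prevalence in [("children", 0.5), ("adults", 0.2)]:  # made-up base rates
    ppv = positive_predictive_value(sensitivity, specificity, prevalence)
    print(f"{group}: P(sick | positive) = {ppv:.2f}")
# children: 0.90, adults: 0.69 -- identical test, different meaning of a positive result
```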

              - - -
              -

              Imbalanced Metrics

              - -

              Why is there a disparity in diagnosing between children and adults? There is a higher proportion of well adults, so mistakes in the test will cause more well adults to be marked "positive" than well children (and similarly with mistaken negatives). - -


              -
              - -

              To fix this, we could have the model take age into account. - -

              -
              -
              - -
              -

Try adjusting the slider to make the model diagnose adults less aggressively than children. -
              - -
              -

              This allows us to align one metric. But now adults who have the disease are less likely to be diagnosed with it! - -

              -
              -
              - -

              No matter how you move the sliders, you won't be able to make both metrics fair at once. It turns out this is inevitable any time the base rates are different, and the test isn't perfect. - -

              There are multiple ways to define fairness mathematically. It usually isn't possible to satisfy all of them.³ -

              -
              - - -
              -
              -
              - - -

              Conclusion

              - -

              Thankfully, the notion of fairness you choose to satisfy will depend on the context of your model, so while it may not be possible to satisfy every definition of fairness, you can focus on the notions of fairness that make sense for your use case. - -

              Even if fairness along every dimension isn't possible, we shouldn't stop checking for bias. The Hidden Bias explorable outlines different ways human bias can feed into an ML model. - -

              More Reading

              - -

              In some contexts, setting different thresholds for different populations might not be acceptable. Can you make AI fairer than a judge? explores an algorithm that can send people to jail. - -

              There are lots of different metrics you might use to determine if an algorithm is fair. Attacking discrimination with smarter machine learning shows how several of them work. Using Fairness Indicators in conjunction with the What-If Tool and other fairness tools, you can test your own model against commonly used fairness metrics. - -

Machine learning practitioners use words like “recall” to describe the percentage of sick people who test positive. Check out the PAIR Guidebook Glossary to learn how to talk to the people building the models. - -

              Appendix

              - -

              ¹ This essay uses very academic, mathematical standards for fairness that don't encompass everything we might include in the colloquial meaning of fairness. There's a gap between the technical descriptions of algorithms here and the social context that they're deployed in. - -

              ² Sometimes we might care more about different error modes in different populations. If treatment is riskier for children, we'd probably want the model to be less aggressive in diagnosing. - -

³ The above example assumes the model sorts and scores people based on how likely it is that they are sick. With complete control over the model's exact rate of under- and over-diagnosing in both groups, it's actually possible to align both of the metrics we've discussed so far. Try tweaking the model below to get both of them to line up. - -

              Adding a third metric, the percentage of well people a who test negative e, makes perfect fairness impossible. Can you see why all three metrics won't align unless the base rate of the disease is the same in both populations? - -
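A short sketch, again with hypothetical numbers, shows why this third metric cannot also be aligned: once recall and the share of correct positive calls are pinned to the same values for both groups, the specificity each group would need is determined by its base rate, and those required specificities differ whenever the base rates differ.

```python
def required_false_positive_rate(recall, ppv, prevalence):
    # Solve ppv = recall*p / (recall*p + fpr*(1 - p)) for fpr, where fpr = 1 - specificity.
    return recall * prevalence * (1 - ppv) / (ppv * (1 - prevalence))

recall_target, ppv_target = 0.8, 0.8          # the two metrics we insist on equalizing
for group, prevalence in [("children", 0.5), ("adults", 0.2)]:  # made-up base rates
    fpr = required_false_positive_rate(recall_target, ppv_target, prevalence)
    print(f"{group}: specificity would have to be {1 - fpr:.2f}")
# children: 0.80, adults: 0.95 -- so the share of well people who test negative can't match too
```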

              - -
              Drag ⁠— to adjust model accuracy and ⁠| to adjust the occurrence of disease
              -
              - -

              Credits

              - -

              Adam Pearce // May 2020 - -

              Thanks to Carey Radebaugh, Dan Nanas, David Weinberger, Emily Denton, Emily Reif, Fernanda Viégas, Hal Abelson, James Wexler, Kristen Olson, Lucas Dixon, Mahima Pushkarna, Martin Wattenberg, Michael Terry, Rebecca Salois, Timnit Gebru, Tulsee Doshi, Yannick Assogba, Yoni Halpern, Zan Armstrong, and my other colleagues at Google for their help with this piece. - -

              Silhouettes from ProPublica's Wee People. - -

              More Explorables

              - -

              - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/spaces/mrm8488/speech-to-diffusion/app.py b/spaces/mrm8488/speech-to-diffusion/app.py deleted file mode 100644 index 87d454daa3590364e59617a2acea46f369dbf7e8..0000000000000000000000000000000000000000 --- a/spaces/mrm8488/speech-to-diffusion/app.py +++ /dev/null @@ -1,123 +0,0 @@ -import gradio as gr -import os -import torch -from torch import autocast -from diffusers import StableDiffusionPipeline -from PIL import Image -from styles import css, header_html, footer_html -from examples import examples -from transformers import pipeline - -ars_model = pipeline("automatic-speech-recognition") - - -model_id = "CompVis/stable-diffusion-v1-4" -device = "cuda" if torch.cuda.is_available() else "cpu" - -# If you are running this code locally, you need to either do a 'huggingface-cli login` or paste your User Access Token from here https://huggingface.co/settings/tokens into the use_auth_token field below. -pipe = StableDiffusionPipeline.from_pretrained( - model_id, use_auth_token=os.environ.get('auth_token'), revision="fp16", torch_dtype=torch.float16) -pipe = pipe.to(device) - - -def transcribe(audio): - text = ars_model(audio)["text"] - return text - - -def infer(audio, samples, steps, scale, seed): - - prompt = transcribe(audio) - generator = torch.Generator(device=device).manual_seed(seed) - - # If you are running locally with CPU, you can remove the `with autocast("cuda")` - if device == "cuda": - with autocast("cuda"): - images_list = pipe( - [prompt] * samples, - num_inference_steps=steps, - guidance_scale=scale, - generator=generator, - ) - else: - images_list = pipe( - [prompt] * samples, - num_inference_steps=steps, - guidance_scale=scale, - generator=generator, - ) - images = [] - safe_image = Image.open(r"unsafe.png") - for i, image in enumerate(images_list["sample"]): - if(images_list["nsfw_content_detected"][i]): - images.append(safe_image) - else: - images.append(image) - return images - - -block = gr.Blocks(css=css) - - -with block: - gr.HTML(header_html) - with gr.Group(): - with gr.Box(): - with gr.Row().style(mobile_collapse=False, equal_height=True): - audio = gr.Audio( - label="Describe a prompt", - source="microphone", - type="filepath" - # ).style( - # border=(True, False, True, True), - # rounded=(True, False, False, True), - # container=False, - ) - btn = gr.Button("Generate image").style( - margin=False, - rounded=(False, True, True, False), - ) - - gallery = gr.Gallery( - label="Generated images", show_label=False, elem_id="gallery" - ).style(grid=[2], height="auto") - - advanced_button = gr.Button("Advanced options", elem_id="advanced-btn") - - with gr.Row(elem_id="advanced-options"): - samples = gr.Slider(label="Images", minimum=1, - maximum=4, value=4, step=1) - steps = gr.Slider(label="Steps", minimum=1, - maximum=50, value=45, step=1) - scale = gr.Slider( - label="Guidance Scale", minimum=0, maximum=50, value=7.5, step=0.1 - ) - seed = gr.Slider( - label="Seed", - minimum=0, - maximum=2147483647, - step=1, - randomize=True, - ) - - # ex = gr.Examples(fn=infer, inputs=[ - # audio, samples, steps, scale, seed], outputs=gallery) - # ex.dataset.headers = [""] - - # audio.submit(infer, inputs=[audio, samples, - # steps, scale, seed], outputs=gallery) - btn.click(infer, inputs=[audio, samples, steps, - scale, seed], outputs=gallery) - advanced_button.click( - None, - [], - audio, - _js=""" - () => { - const options = document.querySelector("body > gradio-app").querySelector("#advanced-options"); - 
options.style.display = ["none", ""].includes(options.style.display) ? "flex" : "none"; - }""", - ) - gr.HTML(footer_html) - -block.queue(max_size=25).launch() \ No newline at end of file diff --git a/spaces/mrtimmydontplay/120/README.md b/spaces/mrtimmydontplay/120/README.md deleted file mode 100644 index f71fe4ef8129b9a1dc21b3a66f5c8defccfa7e6c..0000000000000000000000000000000000000000 --- a/spaces/mrtimmydontplay/120/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Shiny for Python template -emoji: 🌍 -colorFrom: yellow -colorTo: indigo -sdk: docker -pinned: false -license: other -duplicated_from: posit/shiny-for-python-template ---- - -This is a templated Space for [Shiny for Python](https://shiny.rstudio.com/py/). - -To get started with a new app do the following: - -1) Install Shiny with `pip install shiny` -2) Create a new app with `shiny create .` -3) Then run the app with `shiny run --reload` - -To learn more about this framework please see the [Documentation](https://shiny.rstudio.com/py/docs/overview.html). diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/nat/nonautoregressive_transformer.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/models/nat/nonautoregressive_transformer.py deleted file mode 100644 index d114202d25fbd1dca66c7abebb0b0a8bffbe094d..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/nat/nonautoregressive_transformer.py +++ /dev/null @@ -1,456 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.iterative_refinement_generator import DecoderOut -from fairseq.models import register_model, register_model_architecture -from fairseq.models.nat import FairseqNATDecoder, FairseqNATModel, ensemble_decoder -from fairseq.models.transformer import Embedding -from fairseq.modules.transformer_sentence_encoder import init_bert_params - - -def _mean_pooling(enc_feats, src_masks): - # enc_feats: T x B x C - # src_masks: B x T or None - if src_masks is None: - enc_feats = enc_feats.mean(0) - else: - src_masks = (~src_masks).transpose(0, 1).type_as(enc_feats) - enc_feats = ( - (enc_feats / src_masks.sum(0)[None, :, None]) * src_masks[:, :, None] - ).sum(0) - return enc_feats - - -def _argmax(x, dim): - return (x == x.max(dim, keepdim=True)[0]).type_as(x) - - -def _uniform_assignment(src_lens, trg_lens): - max_trg_len = trg_lens.max() - steps = (src_lens.float() - 1) / (trg_lens.float() - 1) # step-size - # max_trg_len - index_t = utils.new_arange(trg_lens, max_trg_len).float() - index_t = steps[:, None] * index_t[None, :] # batch_size X max_trg_len - index_t = torch.round(index_t).long().detach() - return index_t - - -@register_model("nonautoregressive_transformer") -class NATransformerModel(FairseqNATModel): - @property - def allow_length_beam(self): - return True - - @staticmethod - def add_args(parser): - FairseqNATModel.add_args(parser) - - # length prediction - parser.add_argument( - "--src-embedding-copy", - action="store_true", - help="copy encoder word embeddings as the initial input of the decoder", - ) - parser.add_argument( - "--pred-length-offset", - action="store_true", - help="predicting the length difference between the target and source sentences", - ) - parser.add_argument( - "--sg-length-pred", - action="store_true", - help="stop the gradients back-propagated from the length predictor", - ) - 
parser.add_argument( - "--length-loss-factor", - type=float, - help="weights on the length prediction loss", - ) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - decoder = NATransformerDecoder(args, tgt_dict, embed_tokens) - if getattr(args, "apply_bert_init", False): - decoder.apply(init_bert_params) - return decoder - - def forward( - self, src_tokens, src_lengths, prev_output_tokens, tgt_tokens, **kwargs - ): - # encoding - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - - # length prediction - length_out = self.decoder.forward_length( - normalize=False, encoder_out=encoder_out - ) - length_tgt = self.decoder.forward_length_prediction( - length_out, encoder_out, tgt_tokens - ) - - # decoding - word_ins_out = self.decoder( - normalize=False, - prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - ) - - return { - "word_ins": { - "out": word_ins_out, - "tgt": tgt_tokens, - "mask": tgt_tokens.ne(self.pad), - "ls": self.args.label_smoothing, - "nll_loss": True, - }, - "length": { - "out": length_out, - "tgt": length_tgt, - "factor": self.decoder.length_loss_factor, - }, - } - - def forward_decoder(self, decoder_out, encoder_out, decoding_format=None, **kwargs): - step = decoder_out.step - output_tokens = decoder_out.output_tokens - output_scores = decoder_out.output_scores - history = decoder_out.history - - # execute the decoder - output_masks = output_tokens.ne(self.pad) - _scores, _tokens = self.decoder( - normalize=True, - prev_output_tokens=output_tokens, - encoder_out=encoder_out, - step=step, - ).max(-1) - - output_tokens.masked_scatter_(output_masks, _tokens[output_masks]) - output_scores.masked_scatter_(output_masks, _scores[output_masks]) - if history is not None: - history.append(output_tokens.clone()) - - return decoder_out._replace( - output_tokens=output_tokens, - output_scores=output_scores, - attn=None, - history=history, - ) - - def initialize_output_tokens(self, encoder_out, src_tokens): - # length prediction - length_tgt = self.decoder.forward_length_prediction( - self.decoder.forward_length(normalize=True, encoder_out=encoder_out), - encoder_out=encoder_out, - ) - - max_length = length_tgt.clamp_(min=2).max() - idx_length = utils.new_arange(src_tokens, max_length) - - initial_output_tokens = src_tokens.new_zeros( - src_tokens.size(0), max_length - ).fill_(self.pad) - initial_output_tokens.masked_fill_( - idx_length[None, :] < length_tgt[:, None], self.unk - ) - initial_output_tokens[:, 0] = self.bos - initial_output_tokens.scatter_(1, length_tgt[:, None] - 1, self.eos) - - initial_output_scores = initial_output_tokens.new_zeros( - *initial_output_tokens.size() - ).type_as(encoder_out["encoder_out"][0]) - - return DecoderOut( - output_tokens=initial_output_tokens, - output_scores=initial_output_scores, - attn=None, - step=0, - max_step=0, - history=None, - ) - - def regenerate_length_beam(self, decoder_out, beam_size): - output_tokens = decoder_out.output_tokens - length_tgt = output_tokens.ne(self.pad).sum(1) - length_tgt = ( - length_tgt[:, None] - + utils.new_arange(length_tgt, 1, beam_size) - - beam_size // 2 - ) - length_tgt = length_tgt.view(-1).clamp_(min=2) - max_length = length_tgt.max() - idx_length = utils.new_arange(length_tgt, max_length) - - initial_output_tokens = output_tokens.new_zeros( - length_tgt.size(0), max_length - ).fill_(self.pad) - initial_output_tokens.masked_fill_( - idx_length[None, :] < length_tgt[:, None], self.unk - ) - initial_output_tokens[:, 0] = self.bos - 
initial_output_tokens.scatter_(1, length_tgt[:, None] - 1, self.eos) - - initial_output_scores = initial_output_tokens.new_zeros( - *initial_output_tokens.size() - ).type_as(decoder_out.output_scores) - - return decoder_out._replace( - output_tokens=initial_output_tokens, output_scores=initial_output_scores - ) - - -class NATransformerDecoder(FairseqNATDecoder): - def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False): - super().__init__( - args, dictionary, embed_tokens, no_encoder_attn=no_encoder_attn - ) - self.dictionary = dictionary - self.bos = dictionary.bos() - self.unk = dictionary.unk() - self.eos = dictionary.eos() - - self.encoder_embed_dim = args.encoder_embed_dim - self.sg_length_pred = getattr(args, "sg_length_pred", False) - self.pred_length_offset = getattr(args, "pred_length_offset", False) - self.length_loss_factor = getattr(args, "length_loss_factor", 0.1) - self.src_embedding_copy = getattr(args, "src_embedding_copy", False) - self.embed_length = Embedding(256, self.encoder_embed_dim, None) - - @ensemble_decoder - def forward(self, normalize, encoder_out, prev_output_tokens, step=0, **unused): - features, _ = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - embedding_copy=(step == 0) & self.src_embedding_copy, - ) - decoder_out = self.output_layer(features) - return F.log_softmax(decoder_out, -1) if normalize else decoder_out - - @ensemble_decoder - def forward_length(self, normalize, encoder_out): - enc_feats = encoder_out["encoder_out"][0] # T x B x C - if len(encoder_out["encoder_padding_mask"]) > 0: - src_masks = encoder_out["encoder_padding_mask"][0] # B x T - else: - src_masks = None - enc_feats = _mean_pooling(enc_feats, src_masks) - if self.sg_length_pred: - enc_feats = enc_feats.detach() - length_out = F.linear(enc_feats, self.embed_length.weight) - return F.log_softmax(length_out, -1) if normalize else length_out - - def extract_features( - self, - prev_output_tokens, - encoder_out=None, - early_exit=None, - embedding_copy=False, - **unused - ): - """ - Similar to *forward* but only return features. - - Inputs: - prev_output_tokens: Tensor(B, T) - encoder_out: a dictionary of hidden states and masks - - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - the LevenshteinTransformer decoder has full-attention to all generated tokens - """ - # embedding - if embedding_copy: - src_embd = encoder_out["encoder_embedding"][0] - if len(encoder_out["encoder_padding_mask"]) > 0: - src_mask = encoder_out["encoder_padding_mask"][0] - else: - src_mask = None - src_mask = ( - ~src_mask - if src_mask is not None - else prev_output_tokens.new_ones(*src_embd.size()[:2]).bool() - ) - - x, decoder_padding_mask = self.forward_embedding( - prev_output_tokens, - self.forward_copying_source( - src_embd, src_mask, prev_output_tokens.ne(self.padding_idx) - ), - ) - - else: - - x, decoder_padding_mask = self.forward_embedding(prev_output_tokens) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - attn = None - inner_states = [x] - - # decoder layers - for i, layer in enumerate(self.layers): - - # early exit from the decoder. 
- if (early_exit is not None) and (i >= early_exit): - break - - x, attn, _ = layer( - x, - encoder_out["encoder_out"][0] - if (encoder_out is not None and len(encoder_out["encoder_out"]) > 0) - else None, - encoder_out["encoder_padding_mask"][0] - if ( - encoder_out is not None - and len(encoder_out["encoder_padding_mask"]) > 0 - ) - else None, - self_attn_mask=None, - self_attn_padding_mask=decoder_padding_mask, - ) - inner_states.append(x) - - if self.layer_norm: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - - return x, {"attn": attn, "inner_states": inner_states} - - def forward_embedding(self, prev_output_tokens, states=None): - # embed positions - positions = ( - self.embed_positions(prev_output_tokens) - if self.embed_positions is not None - else None - ) - - # embed tokens and positions - if states is None: - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - if self.project_in_dim is not None: - x = self.project_in_dim(x) - else: - x = states - - if positions is not None: - x += positions - x = self.dropout_module(x) - decoder_padding_mask = prev_output_tokens.eq(self.padding_idx) - return x, decoder_padding_mask - - def forward_copying_source(self, src_embeds, src_masks, tgt_masks): - length_sources = src_masks.sum(1) - length_targets = tgt_masks.sum(1) - mapped_inputs = _uniform_assignment(length_sources, length_targets).masked_fill( - ~tgt_masks, 0 - ) - copied_embedding = torch.gather( - src_embeds, - 1, - mapped_inputs.unsqueeze(-1).expand( - *mapped_inputs.size(), src_embeds.size(-1) - ), - ) - return copied_embedding - - def forward_length_prediction(self, length_out, encoder_out, tgt_tokens=None): - enc_feats = encoder_out["encoder_out"][0] # T x B x C - if len(encoder_out["encoder_padding_mask"]) > 0: - src_masks = encoder_out["encoder_padding_mask"][0] # B x T - else: - src_masks = None - if self.pred_length_offset: - if src_masks is None: - src_lengs = enc_feats.new_ones(enc_feats.size(1)).fill_( - enc_feats.size(0) - ) - else: - src_lengs = (~src_masks).transpose(0, 1).type_as(enc_feats).sum(0) - src_lengs = src_lengs.long() - - if tgt_tokens is not None: - # obtain the length target - tgt_lengs = tgt_tokens.ne(self.padding_idx).sum(1).long() - if self.pred_length_offset: - length_tgt = tgt_lengs - src_lengs + 128 - else: - length_tgt = tgt_lengs - length_tgt = length_tgt.clamp(min=0, max=255) - - else: - # predict the length target (greedy for now) - # TODO: implementing length-beam - pred_lengs = length_out.max(-1)[1] - if self.pred_length_offset: - length_tgt = pred_lengs - 128 + src_lengs - else: - length_tgt = pred_lengs - - return length_tgt - - -@register_model_architecture( - "nonautoregressive_transformer", "nonautoregressive_transformer" -) -def base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - 
args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.apply_bert_init = getattr(args, "apply_bert_init", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - # --- special arguments --- - args.sg_length_pred = getattr(args, "sg_length_pred", False) - args.pred_length_offset = getattr(args, "pred_length_offset", False) - args.length_loss_factor = getattr(args, "length_loss_factor", 0.1) - args.src_embedding_copy = getattr(args, "src_embedding_copy", False) - - -@register_model_architecture( - "nonautoregressive_transformer", "nonautoregressive_transformer_wmt_en_de" -) -def nonautoregressive_transformer_wmt_en_de(args): - base_architecture(args) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/scalar_bias.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/scalar_bias.py deleted file mode 100644 index c96247c75914fabb8a2b7ff731bb82b588f72690..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/scalar_bias.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-# - -import torch - - -class ScalarBias(torch.autograd.Function): - """ - Adds a vector of scalars, used in self-attention mechanism to allow - the model to optionally attend to this vector instead of the past - """ - - @staticmethod - def forward(ctx, input, dim, bias_init): - size = list(input.size()) - size[dim] += 1 - output = input.new(*size).fill_(bias_init) - output.narrow(dim, 1, size[dim] - 1).copy_(input) - ctx.dim = dim - return output - - @staticmethod - def backward(ctx, grad): - return grad.narrow(ctx.dim, 1, grad.size(ctx.dim) - 1), None, None - - -def scalar_bias(input, dim, bias_init=0): - return ScalarBias.apply(input, dim, bias_init) diff --git a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/refcocoplus/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initcaption.sh b/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/refcocoplus/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initcaption.sh deleted file mode 100644 index 61aa20d05c28621fef3750fe6ad4b5947856c16c..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/scaling_best/refcocoplus/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initcaption.sh +++ /dev/null @@ -1,29 +0,0 @@ -#!/bin/bash - -#SBATCH --job-name=refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initcaption -#SBATCH --nodes=1 -#SBATCH --ntasks=1 -#SBATCH --gpus=8 -#SBATCH --threads-per-core=2 -#SBATCH --gpu-bind=closest -#SBATCH -C MI250 -#SBATCH -A gda2204 -#SBATCH --time=5:00:00 -#SBATCH --mail-type=END,FAIL -#SBATCH --output=/lus/home/NAT/gda2204/mshukor/logs/slurm/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initcaption.out -#SBATCH --exclusive -#SBATCH --mail-user=mustafa.shukor@isir.upmc.fr - - -cd /lus/home/NAT/gda2204/mshukor/code/ofa_ours/run_scripts -source /lus/home/NAT/gda2204/mshukor/.bashrc - -conda activate main - - -rm core-python3* - - -srun -l -N 1 -n 1 -c 128 --gpus=8 bash averaging/ratatouille/scaling_best/refcocoplus/refcocoplus_ofaplus_base_pretrain_s2_hsep1_fix_lr5e5_bs8_4_shuf_initcaption.sh - - diff --git a/spaces/multimodalart/pix2pix-zero/README.md b/spaces/multimodalart/pix2pix-zero/README.md deleted file mode 100644 index ccd5b69509c645240e734929924bf9927e6b4be2..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/pix2pix-zero/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Pix2pix Zero -emoji: 🌖 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mxs2019/nba-player-classifer/README.md b/spaces/mxs2019/nba-player-classifer/README.md deleted file mode 100644 index be4fa4769e2988fe6941bc458fbb6fe8da69d246..0000000000000000000000000000000000000000 --- a/spaces/mxs2019/nba-player-classifer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Nba Player Classifer -emoji: 🚀 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Chicony Usb 2.0 Camera Driver Download Xp WORK.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Chicony Usb 2.0 Camera Driver Download Xp WORK.md deleted file mode 100644 index 
6e4e9e399042a61d480f8bda49b31a504d6d981f..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Chicony Usb 2.0 Camera Driver Download Xp WORK.md +++ /dev/null @@ -1,18 +0,0 @@ -

              How to Download and Install Chicony USB 2.0 Camera Driver for Windows XP

              -

If you have a Chicony USB 2.0 camera and want to use it on your Windows XP computer, you need to download and install the correct driver for it. A driver is software that allows your computer to communicate with your hardware devices. Without a driver, your camera may not work properly, or at all.

              -

              Chicony Usb 2.0 Camera Driver Download Xp


              DOWNLOAD ––– https://urlcod.com/2uIaqf



              -

              There are different ways to download and install the Chicony USB 2.0 camera driver for Windows XP, but here we will show you the easiest and most reliable method. Follow these steps:

              -
                -
              1. Go to this website and find the driver that matches your camera device and your operating system. You can use the hardware IDs of your camera device to identify it. To find the hardware IDs, go to Device Manager, right-click on your camera device, select Properties, and then click on the Details tab. Choose Hardware Ids from the drop-down menu and note down the values.
              2. -
              3. Click on the Download driver button next to the driver that you want to download. Save the file to a location that you can easily access.
              4. -
              5. Once the download is complete, locate the file and double-click on it to run it. Follow the on-screen instructions to install the driver on your computer.
              6. -
              7. Restart your computer after the installation is finished.
              8. -
              9. Connect your camera device to your computer and test if it works properly.
              10. -
              -

              Congratulations! You have successfully downloaded and installed the Chicony USB 2.0 camera driver for Windows XP. Enjoy using your camera device!


              -

              -
              -
              \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download Film Ishq Vs Luv Full Movie Hd !!EXCLUSIVE!!.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download Film Ishq Vs Luv Full Movie Hd !!EXCLUSIVE!!.md deleted file mode 100644 index 87d4ff6900d12bee3b54047fd3f09434cd809240..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download Film Ishq Vs Luv Full Movie Hd !!EXCLUSIVE!!.md +++ /dev/null @@ -1,29 +0,0 @@ -
              -

              How to Download Film Ishq Vs Luv Full Movie HD for Free

              -

              If you are looking for a romantic comedy film to watch with your loved one, you might want to check out Ishq Vs Luv. This film is a Hindi remake of the Telugu hit film Tholi Prema, starring Aditya Roy Kapur and Pooja Hegde in the lead roles. The film follows the love story of two childhood friends who grow apart and meet again after several years. Will they rekindle their old flame or move on with their lives?

              -

              download film Ishq Vs Luv full movie hd


              DOWNLOAD 🌟 https://urlcod.com/2uI9Ov



              -

              Ishq Vs Luv is a film that will make you laugh, cry and fall in love with the characters. The film has received positive reviews from critics and audiences alike, and has been praised for its fresh and engaging storyline, its catchy songs and its chemistry between the lead actors. The film has also been a box office success, grossing over 100 crore rupees worldwide.

              -

              But what if you missed the film in theatres or want to watch it again at home? Don't worry, we have got you covered. In this article, we will show you how to download film Ishq Vs Luv full movie HD for free from various online sources. You can watch the film on your laptop, smartphone or TV without any hassle.

              -

              Download Film Ishq Vs Luv Full Movie HD from Torrent Sites

              -

              One of the easiest ways to download film Ishq Vs Luv full movie HD for free is to use torrent sites. Torrent sites are platforms that allow users to share files over the internet using peer-to-peer technology. You can find almost any movie or TV show on torrent sites, including Ishq Vs Luv.

              -

              -

              To download film Ishq Vs Luv full movie HD from torrent sites, you will need a torrent client software such as BitTorrent or uTorrent. You will also need a VPN service to hide your IP address and avoid any legal issues. Once you have these tools ready, follow these steps:

              -
                -
              1. Go to a torrent site such as The Pirate Bay, 1337x or RARBG and search for "Ishq Vs Luv" in the search bar.
              2. -
              3. Choose a torrent file that has a high number of seeders and leechers. Seeders are users who have the complete file and are sharing it with others. Leechers are users who are downloading the file from seeders. A high number of seeders and leechers indicates that the file is popular and reliable.
              4. -
              5. Click on the torrent file and download it to your device.
              6. -
              7. Open the torrent file with your torrent client software and start downloading the film.
              8. -
              9. Once the download is complete, you can watch the film on your device using a media player such as VLC or Windows Media Player.
              10. -
              -

              Note: Downloading films from torrent sites may be illegal in some countries and may expose you to malware or viruses. We do not condone or encourage piracy and recommend that you use legal and safe methods to watch films online.

              -

              Download Film Ishq Vs Luv Full Movie HD from Streaming Sites

              -

              Another way to download film Ishq Vs Luv full movie HD for free is to use streaming sites. Streaming sites are platforms that allow users to watch films and TV shows online without downloading them. You can find many streaming sites that offer Ishq Vs Luv for free or for a small fee.

              -

              To download film Ishq Vs Luv full movie HD from streaming sites, you will need a web browser such as Chrome or Firefox. You will also need a VPN service to bypass any geo-restrictions or censorship that may prevent you from accessing some streaming sites. Once you have these tools ready, follow these steps:

              -
                -
              1. Go to a streaming site such as Netflix, Amazon Prime Video or Hotstar and search for "Ishq Vs Luv" in the search bar.
              2. -
              3. Choose the film from the results and click on it.
              4. -
              5. If the streaming site requires you to sign up or pay for a subscription, do so accordingly. Some streaming sites may offer free trials or discounts for new users.
              6. -
              7. If the streaming site allows you to download the film for offline viewing, look for a download button or icon on the screen and click on it.
              8. -
              9. Select the quality and format of the download and wait for it to finish. 81aa517590
                -
                -
                \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Illuminati Card Game 1995 All Cards Pdf Download.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Illuminati Card Game 1995 All Cards Pdf Download.md deleted file mode 100644 index 046346c3e2f75ddec0add0f5fbe46fd2e705232e..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Illuminati Card Game 1995 All Cards Pdf Download.md +++ /dev/null @@ -1,16 +0,0 @@ -

                Illuminati Card Game 1995 All Cards Pdf Download: A Conspiracy Theory or a Prophecy?

                -

                The Illuminati Card Game is a collectible card game that was released in 1995 by Steve Jackson Games. The game is based on the conspiracy theory that a secret society called the Illuminati is behind the events of the world and is plotting to achieve world domination. The game features cards that depict various aspects of the Illuminati's plan, such as assassinations, wars, disasters, scandals, and occult phenomena.

                -

                Some people believe that the Illuminati Card Game is more than just a game, but a prophecy that predicted many of the events that happened after its release. For example, some cards seem to resemble the 9/11 attacks, the Boston Marathon bombing, the Fukushima nuclear disaster, the coronavirus pandemic, and the rise of cryptocurrencies. Some even claim that the game's creator, Steve Jackson, was a member of the Illuminati or had access to insider information.

                -

                Illuminati Card Game 1995 All Cards Pdf Download


                Download Ziphttps://urlcod.com/2uIbQc



                -

                However, others argue that the Illuminati Card Game is just a coincidence and a product of creative imagination. They point out that the game is based on existing conspiracy theories and popular culture references, and that many of the cards are vague or generic enough to match any event. They also note that the game has hundreds of cards, and that only a few of them seem to match reality.

                -

                Whether you believe that the Illuminati Card Game is a conspiracy theory or a prophecy, you can download all of its cards in pdf format from this link: Illuminati Card Game 1995 All Cards Pdf Download. You can also check out this video that shows some of the most controversial cards and their possible meanings: Illuminati Card Game 1995 Predictions.


                The Illuminati Card Game is not the only card game that has been accused of predicting the future. Another example is the INWO (Illuminati: New World Order) Card Game, which is a spin-off of the original game that was released in 1994. The INWO Card Game features cards that depict the Illuminati's plan to create a new world order, such as the Antichrist, the Mark of the Beast, the World War III, and the New Age Movement. Some of the cards are similar to the ones in the Illuminati Card Game, but others are different or more specific.

                -

                Another example is the Montauk Project Card Game, which is a card game that was released in 1998 by Metagaming Concepts. The game is based on the conspiracy theory that the US government conducted secret experiments on time travel, mind control, and psychic warfare at a military base in Montauk, New York. The game features cards that depict various aspects of the Montauk Project, such as aliens, portals, monsters, and mind machines.

                -

                -

                These card games have attracted the attention of many conspiracy theorists and enthusiasts, who believe that they contain hidden messages and clues about the Illuminati's agenda and the future of humanity. Some of them have created websites, blogs, videos, and books to analyze and interpret the cards and their meanings. Others have used the cards as a source of inspiration or entertainment, or as a way to challenge their critical thinking skills.

                -
                -
                \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lights Out English Tamil Movie Free Download Utorrent REPACK.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lights Out English Tamil Movie Free Download Utorrent REPACK.md deleted file mode 100644 index b9f9a0e2d483fb284d8b09c212220334bd84c4f4..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lights Out English Tamil Movie Free Download Utorrent REPACK.md +++ /dev/null @@ -1,31 +0,0 @@ - -

                How to Watch Lights Out English Tamil Movie for Free Online

                -

                Lights Out is a 2016 horror movie directed by David F. Sandberg and starring Teresa Palmer, Gabriel Bateman, and Maria Bello. The movie follows a family that is haunted by a supernatural entity that only appears in the dark. Lights Out was a box office success, grossing over $148 million worldwide on a $4.9 million budget.

                -

                If you are a fan of horror movies and want to watch Lights Out in Tamil dubbed version, you might be wondering how to do that without paying any money. Well, you are in luck, because in this article, we will show you how to watch Lights Out English Tamil movie for free online using Utorrent.

                -

                Lights Out English Tamil Movie Free Download Utorrent


                DOWNLOADhttps://urlcod.com/2uIbPz



                -

                What is Utorrent?

                -

                Utorrent is a popular software that allows you to download and share files using the BitTorrent protocol. BitTorrent is a peer-to-peer network that distributes data among users without relying on a central server. This means that you can download files from other users who have the same file on their computers.

                -

                Utorrent is free to download and use, and it has many features that make it easy and convenient to download files. You can pause and resume downloads, set bandwidth limits, manage multiple downloads, and more. Utorrent also has a built-in search engine that lets you find torrents from various sources.

                -

                How to Download Lights Out English Tamil Movie Using Utorrent?

                -

                To download Lights Out English Tamil movie using Utorrent, you need to follow these steps:

                -
                  -
                1. Download and install Utorrent from its official website: https://www.utorrent.com/
                2. -
                3. Open Utorrent and click on the search icon in the top right corner.
                4. -
                5. Type "Lights Out English Tamil Movie" in the search box and press enter.
                6. -
                7. You will see a list of results from various sources. Choose the one that has the most seeds and peers. Seeds are users who have the complete file and are sharing it with others. Peers are users who are downloading the file and also sharing parts of it with others. The more seeds and peers a torrent has, the faster and more reliable the download will be.
                8. -
                9. Click on the download button next to the torrent you want to download. You will see a pop-up window that shows you the details of the torrent, such as the file size, name, quality, etc. You can also choose which files you want to download from the torrent if it contains multiple files.
                10. -
                11. Click on OK to start the download. You will see the progress of the download in Utorrent's main window. You can also see the speed, time remaining, and other information about the download.
                12. -
                13. Once the download is finished, you can find the downloaded file in your default download folder or in the folder you specified in Utorrent's settings.
                14. -
                15. You can now watch Lights Out English Tamil movie for free online using any media player that supports MP4 format.
                16. -
                -

                Is It Legal and Safe to Download Lights Out English Tamil Movie Using Utorrent?

                -

                The answer to this question depends on your location and the source of the torrent. In some countries, downloading copyrighted content without permission is illegal and can result in fines or even jail time. In other countries, downloading copyrighted content for personal use is legal or tolerated as long as you do not distribute or sell it to others.

                -

                -

                However, even if downloading Lights Out English Tamil movie using Utorrent is legal in your country, it may not be safe. Torrents can contain viruses, malware, spyware, or other harmful software that can damage your computer or steal your personal information. Torrents can also expose your IP address to other users who can track your online activity or launch cyberattacks against you.

                -

                Therefore, if you decide to download Lights Out English Tamil movie using Utorrent, you should take some precautions to protect yourself and your device. Here are some tips:

                -
                  -
                • Use a reputable antivirus software and scan your downloaded file before opening it.
                • -
                • Use a VPN (virtual private network) service to hide your IP address and encrypt your online traffic.
                • -
                • cec2833e83
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/TRegistration V2.1.12.286 TOP Full Source.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/TRegistration V2.1.12.286 TOP Full Source.md deleted file mode 100644 index 747589d515ce4f8226eb506fbbc68053e9c22ef1..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/TRegistration V2.1.12.286 TOP Full Source.md +++ /dev/null @@ -1,48 +0,0 @@ - -Title: TRegistration: A Delphi Component for Shareware Developers - -Article: -```html -

                  TRegistration: A Delphi Component for Shareware Developers

                  -

                  If you are a Delphi developer who creates shareware applications, you may be interested in TRegistration, a component that helps you handle the registration process and database of your software. TRegistration is designed for use in Win32 and Win64 (XP/Vista/7/8/10) software, and it comes with full source code.

                  -

                  TRegistration consists of two parts: a component that you can use in your applications to enter the registration details and query the state of the registration, and a Registration Tool that lets you manage the MySQL registration database and automatically process incoming order e-mails and send out registration details.

                  -

                  TRegistration v2.1.12.286 Full Source


                  Download Ziphttps://urlcod.com/2uI9FR



                  -

                  With TRegistration, you can manage multiple projects, register users in seconds or use the built-in automatic registration processing functionality for share-it order e-mails, customize global or per application reply e-mail templates, and more.

                  -

                  TRegistration requires Delphi 2009 or newer, Microsoft SQL Server 2008 R2 Express or newer, and 3delite's Helper functions (included). To install TRegistration, you need to unpack the package, create a new package in Delphi, add RegistrationDefs.pas to it, save it as PackageTRegistration.dpk, build it and install it. To configure the Registration Tool for MySQL server, you need to set up the database connection string in the options menu, create a new database file, and use the project manager to add your projects.

                  -

                  TRegistration is a complete solution for shareware developers who want to simplify and automate the registration process and database of their applications. You can download TRegistration v2.1.12.286 Full Source from here.

                  -

                  In this article, we will show you how to use TRegistration in your Delphi projects. We will assume that you have already installed and configured TRegistration as described above. We will also use a simple example project that consists of a main form with a button to enter the registration details and a label to display the registration status.

                  -

                  The first step is to add the TRegistration component to your project. You can do this by right-clicking on the component palette and selecting 'Install Packages...'. Then browse to the PackageTRegistration.dpk file and click 'Open'. You should see the TRegistration icon in the component palette under '3delite'. Drag and drop it on your main form.

                  -

                  The next step is to set up the properties of the TRegistration component. You can do this by clicking on the component and using the Object Inspector. The most important properties are:

                  -
                    -
                  • ApplicationID: This is a unique identifier for your application. You can use any string that you like, but it should match the one that you used in the Registration Tool when adding your project.
                  • -
                  • ApplicationName: This is the name of your application that will be displayed in the registration dialog and e-mails.
                  • -
                  • ApplicationVersion: This is the version number of your application that will be checked against the database to determine if an update is available.
                  • -
                  • ApplicationVersionCode: This is a numerical code that represents your application version. It should be incremented with each new release of your application.
                  • -
                  • DatabaseConnectionString: This is the connection string to access the MySQL registration database. It should match the one that you used in the Registration Tool when creating the database file.
                  • -
                  • EncryptionKey: This is a secret key that is used to encrypt and decrypt the registration details. It should match the one that you used in the Registration Tool when creating the database file.
                  • -
                  • Registered: This is a read-only property that indicates whether your application is registered or not. You can use it to enable or disable features based on the registration status.
                  • -
                  -

                  You can also customize other properties such as dialog captions, messages, colors, fonts, etc. according to your preferences.

                  -
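
If you prefer to configure the component in code instead of the Object Inspector, you can assign the same properties at runtime, for example in the form's OnCreate handler. The following is a minimal sketch that only uses the property names listed above; the values, and the string type assumed for ApplicationVersion, are placeholders that you should replace with your own settings and check against the component's documentation:

-
procedure TForm1.FormCreate(Sender: TObject);
-begin
-  // Identify this application; must match the project added in the Registration Tool
-  Registration.ApplicationID := 'MyExampleApp';
-  Registration.ApplicationName := 'My Example Application';
-  // Version information used for the update check (placeholder values)
-  Registration.ApplicationVersion := '1.0';
-  Registration.ApplicationVersionCode := 1;
-  // Database access; use the same connection string and encryption key as in the Registration Tool
-  Registration.DatabaseConnectionString := '...';
-  Registration.EncryptionKey := '...';
-  // Registered is read-only; use it to enable or disable features of your application
-  Label1.Caption := 'Registered: ' + BoolToStr(Registration.Registered, True);
-end;
-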

                  The final step is to write some code to handle the button click event on your main form. You can do this by double-clicking on the button and adding the following code:

                  -
                  procedure TForm1.Button1Click(Sender: TObject);
                  -begin
                  -  // Show the registration dialog
                  -  if Registration.ShowRegisterDialog then
                  -  begin
                  -    // Update the label with the registration status
-    Label1.Caption := 'Registered: ' + BoolToStr(Registration.Registered, True);
-    // Check for updates (CheckForUpdate is an assumed method name; consult the component's documentation for the exact call)
-    if Registration.CheckForUpdate then
-      Label1.Caption := Label1.Caption + ' (update available)';
-  end;
-end;

                  -
                  -
                  \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/solver/__init__.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/solver/__init__.py deleted file mode 100644 index 7e36c64f60f38f41d01dd2c9fb30364489a03841..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/solver/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .build import build_lr_scheduler, build_optimizer, get_default_optimizer_params -from .lr_scheduler import ( - LRMultiplier, - LRScheduler, - WarmupCosineLR, - WarmupMultiStepLR, - WarmupParamScheduler, -) - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/structures/boxes.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/structures/boxes.py deleted file mode 100644 index fd396f68645db1d6946056eed868ffcc02cd7a22..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/structures/boxes.py +++ /dev/null @@ -1,425 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import math -import numpy as np -from enum import IntEnum, unique -from typing import List, Tuple, Union -import torch -from torch import device - -_RawBoxType = Union[List[float], Tuple[float, ...], torch.Tensor, np.ndarray] - - -@unique -class BoxMode(IntEnum): - """ - Enum of different ways to represent a box. - """ - - XYXY_ABS = 0 - """ - (x0, y0, x1, y1) in absolute floating points coordinates. - The coordinates in range [0, width or height]. - """ - XYWH_ABS = 1 - """ - (x0, y0, w, h) in absolute floating points coordinates. - """ - XYXY_REL = 2 - """ - Not yet supported! - (x0, y0, x1, y1) in range [0, 1]. They are relative to the size of the image. - """ - XYWH_REL = 3 - """ - Not yet supported! - (x0, y0, w, h) in range [0, 1]. They are relative to the size of the image. - """ - XYWHA_ABS = 4 - """ - (xc, yc, w, h, a) in absolute floating points coordinates. - (xc, yc) is the center of the rotated box, and the angle a is in degrees ccw. - """ - - @staticmethod - def convert(box: _RawBoxType, from_mode: "BoxMode", to_mode: "BoxMode") -> _RawBoxType: - """ - Args: - box: can be a k-tuple, k-list or an Nxk array/tensor, where k = 4 or 5 - from_mode, to_mode (BoxMode) - - Returns: - The converted box of the same type. - """ - if from_mode == to_mode: - return box - - original_type = type(box) - is_numpy = isinstance(box, np.ndarray) - single_box = isinstance(box, (list, tuple)) - if single_box: - assert len(box) == 4 or len(box) == 5, ( - "BoxMode.convert takes either a k-tuple/list or an Nxk array/tensor," - " where k == 4 or 5" - ) - arr = torch.tensor(box)[None, :] - else: - # avoid modifying the input box - if is_numpy: - arr = torch.from_numpy(np.asarray(box)).clone() - else: - arr = box.clone() - - assert to_mode not in [BoxMode.XYXY_REL, BoxMode.XYWH_REL] and from_mode not in [ - BoxMode.XYXY_REL, - BoxMode.XYWH_REL, - ], "Relative mode not yet supported!" 
- - if from_mode == BoxMode.XYWHA_ABS and to_mode == BoxMode.XYXY_ABS: - assert ( - arr.shape[-1] == 5 - ), "The last dimension of input shape must be 5 for XYWHA format" - original_dtype = arr.dtype - arr = arr.double() - - w = arr[:, 2] - h = arr[:, 3] - a = arr[:, 4] - c = torch.abs(torch.cos(a * math.pi / 180.0)) - s = torch.abs(torch.sin(a * math.pi / 180.0)) - # This basically computes the horizontal bounding rectangle of the rotated box - new_w = c * w + s * h - new_h = c * h + s * w - - # convert center to top-left corner - arr[:, 0] -= new_w / 2.0 - arr[:, 1] -= new_h / 2.0 - # bottom-right corner - arr[:, 2] = arr[:, 0] + new_w - arr[:, 3] = arr[:, 1] + new_h - - arr = arr[:, :4].to(dtype=original_dtype) - elif from_mode == BoxMode.XYWH_ABS and to_mode == BoxMode.XYWHA_ABS: - original_dtype = arr.dtype - arr = arr.double() - arr[:, 0] += arr[:, 2] / 2.0 - arr[:, 1] += arr[:, 3] / 2.0 - angles = torch.zeros((arr.shape[0], 1), dtype=arr.dtype) - arr = torch.cat((arr, angles), axis=1).to(dtype=original_dtype) - else: - if to_mode == BoxMode.XYXY_ABS and from_mode == BoxMode.XYWH_ABS: - arr[:, 2] += arr[:, 0] - arr[:, 3] += arr[:, 1] - elif from_mode == BoxMode.XYXY_ABS and to_mode == BoxMode.XYWH_ABS: - arr[:, 2] -= arr[:, 0] - arr[:, 3] -= arr[:, 1] - else: - raise NotImplementedError( - "Conversion from BoxMode {} to {} is not supported yet".format( - from_mode, to_mode - ) - ) - - if single_box: - return original_type(arr.flatten().tolist()) - if is_numpy: - return arr.numpy() - else: - return arr - - -class Boxes: - """ - This structure stores a list of boxes as a Nx4 torch.Tensor. - It supports some common methods about boxes - (`area`, `clip`, `nonempty`, etc), - and also behaves like a Tensor - (support indexing, `to(device)`, `.device`, and iteration over all boxes) - - Attributes: - tensor (torch.Tensor): float matrix of Nx4. Each row is (x1, y1, x2, y2). - """ - - def __init__(self, tensor: torch.Tensor): - """ - Args: - tensor (Tensor[float]): a Nx4 matrix. Each row is (x1, y1, x2, y2). - """ - if not isinstance(tensor, torch.Tensor): - tensor = torch.as_tensor(tensor, dtype=torch.float32, device=torch.device("cpu")) - else: - tensor = tensor.to(torch.float32) - if tensor.numel() == 0: - # Use reshape, so we don't end up creating a new tensor that does not depend on - # the inputs (and consequently confuses jit) - tensor = tensor.reshape((-1, 4)).to(dtype=torch.float32) - assert tensor.dim() == 2 and tensor.size(-1) == 4, tensor.size() - - self.tensor = tensor - - def clone(self) -> "Boxes": - """ - Clone the Boxes. - - Returns: - Boxes - """ - return Boxes(self.tensor.clone()) - - def to(self, device: torch.device): - # Boxes are assumed float32 and does not support to(dtype) - return Boxes(self.tensor.to(device=device)) - - def area(self) -> torch.Tensor: - """ - Computes the area of all the boxes. - - Returns: - torch.Tensor: a vector with areas of each box. - """ - box = self.tensor - area = (box[:, 2] - box[:, 0]) * (box[:, 3] - box[:, 1]) - return area - - def clip(self, box_size: Tuple[int, int]) -> None: - """ - Clip (in place) the boxes by limiting x coordinates to the range [0, width] - and y coordinates to the range [0, height]. - - Args: - box_size (height, width): The clipping box's size. - """ - assert torch.isfinite(self.tensor).all(), "Box tensor contains infinite or NaN!" 
- h, w = box_size - x1 = self.tensor[:, 0].clamp(min=0, max=w) - y1 = self.tensor[:, 1].clamp(min=0, max=h) - x2 = self.tensor[:, 2].clamp(min=0, max=w) - y2 = self.tensor[:, 3].clamp(min=0, max=h) - self.tensor = torch.stack((x1, y1, x2, y2), dim=-1) - - def nonempty(self, threshold: float = 0.0) -> torch.Tensor: - """ - Find boxes that are non-empty. - A box is considered empty, if either of its side is no larger than threshold. - - Returns: - Tensor: - a binary vector which represents whether each box is empty - (False) or non-empty (True). - """ - box = self.tensor - widths = box[:, 2] - box[:, 0] - heights = box[:, 3] - box[:, 1] - keep = (widths > threshold) & (heights > threshold) - return keep - - def __getitem__(self, item) -> "Boxes": - """ - Args: - item: int, slice, or a BoolTensor - - Returns: - Boxes: Create a new :class:`Boxes` by indexing. - - The following usage are allowed: - - 1. `new_boxes = boxes[3]`: return a `Boxes` which contains only one box. - 2. `new_boxes = boxes[2:10]`: return a slice of boxes. - 3. `new_boxes = boxes[vector]`, where vector is a torch.BoolTensor - with `length = len(boxes)`. Nonzero elements in the vector will be selected. - - Note that the returned Boxes might share storage with this Boxes, - subject to Pytorch's indexing semantics. - """ - if isinstance(item, int): - return Boxes(self.tensor[item].view(1, -1)) - b = self.tensor[item] - assert b.dim() == 2, "Indexing on Boxes with {} failed to return a matrix!".format(item) - return Boxes(b) - - def __len__(self) -> int: - return self.tensor.shape[0] - - def __repr__(self) -> str: - return "Boxes(" + str(self.tensor) + ")" - - def inside_box(self, box_size: Tuple[int, int], boundary_threshold: int = 0) -> torch.Tensor: - """ - Args: - box_size (height, width): Size of the reference box. - boundary_threshold (int): Boxes that extend beyond the reference box - boundary by more than boundary_threshold are considered "outside". - - Returns: - a binary vector, indicating whether each box is inside the reference box. - """ - height, width = box_size - inds_inside = ( - (self.tensor[..., 0] >= -boundary_threshold) - & (self.tensor[..., 1] >= -boundary_threshold) - & (self.tensor[..., 2] < width + boundary_threshold) - & (self.tensor[..., 3] < height + boundary_threshold) - ) - return inds_inside - - def get_centers(self) -> torch.Tensor: - """ - Returns: - The box centers in a Nx2 array of (x, y). - """ - return (self.tensor[:, :2] + self.tensor[:, 2:]) / 2 - - def scale(self, scale_x: float, scale_y: float) -> None: - """ - Scale the box with horizontal and vertical scaling factors - """ - self.tensor[:, 0::2] *= scale_x - self.tensor[:, 1::2] *= scale_y - - @classmethod - def cat(cls, boxes_list: List["Boxes"]) -> "Boxes": - """ - Concatenates a list of Boxes into a single Boxes - - Arguments: - boxes_list (list[Boxes]) - - Returns: - Boxes: the concatenated Boxes - """ - assert isinstance(boxes_list, (list, tuple)) - if len(boxes_list) == 0: - return cls(torch.empty(0)) - assert all([isinstance(box, Boxes) for box in boxes_list]) - - # use torch.cat (v.s. 
layers.cat) so the returned boxes never share storage with input - cat_boxes = cls(torch.cat([b.tensor for b in boxes_list], dim=0)) - return cat_boxes - - @property - def device(self) -> device: - return self.tensor.device - - # type "Iterator[torch.Tensor]", yield, and iter() not supported by torchscript - # https://github.com/pytorch/pytorch/issues/18627 - @torch.jit.unused - def __iter__(self): - """ - Yield a box as a Tensor of shape (4,) at a time. - """ - yield from self.tensor - - -def pairwise_intersection(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor: - """ - Given two lists of boxes of size N and M, - compute the intersection area between __all__ N x M pairs of boxes. - The box order must be (xmin, ymin, xmax, ymax) - - Args: - boxes1,boxes2 (Boxes): two `Boxes`. Contains N & M boxes, respectively. - - Returns: - Tensor: intersection, sized [N,M]. - """ - boxes1, boxes2 = boxes1.tensor, boxes2.tensor - width_height = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) - torch.max( - boxes1[:, None, :2], boxes2[:, :2] - ) # [N,M,2] - - width_height.clamp_(min=0) # [N,M,2] - intersection = width_height.prod(dim=2) # [N,M] - return intersection - - -# implementation from https://github.com/kuangliu/torchcv/blob/master/torchcv/utils/box.py -# with slight modifications -def pairwise_iou(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor: - """ - Given two lists of boxes of size N and M, compute the IoU - (intersection over union) between **all** N x M pairs of boxes. - The box order must be (xmin, ymin, xmax, ymax). - - Args: - boxes1,boxes2 (Boxes): two `Boxes`. Contains N & M boxes, respectively. - - Returns: - Tensor: IoU, sized [N,M]. - """ - area1 = boxes1.area() # [N] - area2 = boxes2.area() # [M] - inter = pairwise_intersection(boxes1, boxes2) - - # handle empty boxes - iou = torch.where( - inter > 0, - inter / (area1[:, None] + area2 - inter), - torch.zeros(1, dtype=inter.dtype, device=inter.device), - ) - return iou - - -def pairwise_ioa(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor: - """ - Similar to :func:`pariwise_iou` but compute the IoA (intersection over boxes2 area). - - Args: - boxes1,boxes2 (Boxes): two `Boxes`. Contains N & M boxes, respectively. - - Returns: - Tensor: IoA, sized [N,M]. - """ - area2 = boxes2.area() # [M] - inter = pairwise_intersection(boxes1, boxes2) - - # handle empty boxes - ioa = torch.where( - inter > 0, inter / area2, torch.zeros(1, dtype=inter.dtype, device=inter.device) - ) - return ioa - - -def pairwise_point_box_distance(points: torch.Tensor, boxes: Boxes): - """ - Pairwise distance between N points and M boxes. The distance between a - point and a box is represented by the distance from the point to 4 edges - of the box. Distances are all positive when the point is inside the box. - - Args: - points: Nx2 coordinates. Each row is (x, y) - boxes: M boxes - - Returns: - Tensor: distances of size (N, M, 4). The 4 values are distances from - the point to the left, top, right, bottom of the box. - """ - x, y = points.unsqueeze(dim=2).unbind(dim=1) # (N, 1) - x0, y0, x1, y1 = boxes.tensor.unsqueeze(dim=0).unbind(dim=2) # (1, M) - return torch.stack([x - x0, y - y0, x1 - x, y1 - y], dim=2) - - -def matched_pairwise_iou(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor: - """ - Compute pairwise intersection over union (IOU) of two sets of matched - boxes that have the same number of boxes. - Similar to :func:`pairwise_iou`, but computes only diagonal elements of the matrix. - - Args: - boxes1 (Boxes): bounding boxes, sized [N,4]. 
- boxes2 (Boxes): same length as boxes1 - Returns: - Tensor: iou, sized [N]. - """ - assert len(boxes1) == len( - boxes2 - ), "boxlists should have the same" "number of entries, got {}, {}".format( - len(boxes1), len(boxes2) - ) - area1 = boxes1.area() # [N] - area2 = boxes2.area() # [N] - box1, box2 = boxes1.tensor, boxes2.tensor - lt = torch.max(box1[:, :2], box2[:, :2]) # [N,2] - rb = torch.min(box1[:, 2:], box2[:, 2:]) # [N,2] - wh = (rb - lt).clamp(min=0) # [N,2] - inter = wh[:, 0] * wh[:, 1] # [N] - iou = inter / (area1 + area2 - inter) # [N] - return iou diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tests/structures/test_rotated_boxes.py b/spaces/nikitaPDL2023/assignment4/detectron2/tests/structures/test_rotated_boxes.py deleted file mode 100644 index 478f034a4b8e1b48a1ace5c0a4823ecdf15c8536..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/tests/structures/test_rotated_boxes.py +++ /dev/null @@ -1,441 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from __future__ import absolute_import, division, print_function, unicode_literals -import logging -import math -import random -import unittest -import torch -from fvcore.common.benchmark import benchmark - -from detectron2.layers.rotated_boxes import pairwise_iou_rotated -from detectron2.structures.boxes import Boxes -from detectron2.structures.rotated_boxes import RotatedBoxes, pairwise_iou -from detectron2.utils.testing import reload_script_model - -logger = logging.getLogger(__name__) - - -class TestRotatedBoxesLayer(unittest.TestCase): - def test_iou_0_dim_cpu(self): - boxes1 = torch.rand(0, 5, dtype=torch.float32) - boxes2 = torch.rand(10, 5, dtype=torch.float32) - expected_ious = torch.zeros(0, 10, dtype=torch.float32) - ious = pairwise_iou_rotated(boxes1, boxes2) - self.assertTrue(torch.allclose(ious, expected_ious)) - - boxes1 = torch.rand(10, 5, dtype=torch.float32) - boxes2 = torch.rand(0, 5, dtype=torch.float32) - expected_ious = torch.zeros(10, 0, dtype=torch.float32) - ious = pairwise_iou_rotated(boxes1, boxes2) - self.assertTrue(torch.allclose(ious, expected_ious)) - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_iou_0_dim_cuda(self): - boxes1 = torch.rand(0, 5, dtype=torch.float32) - boxes2 = torch.rand(10, 5, dtype=torch.float32) - expected_ious = torch.zeros(0, 10, dtype=torch.float32) - ious_cuda = pairwise_iou_rotated(boxes1.cuda(), boxes2.cuda()) - self.assertTrue(torch.allclose(ious_cuda.cpu(), expected_ious)) - - boxes1 = torch.rand(10, 5, dtype=torch.float32) - boxes2 = torch.rand(0, 5, dtype=torch.float32) - expected_ious = torch.zeros(10, 0, dtype=torch.float32) - ious_cuda = pairwise_iou_rotated(boxes1.cuda(), boxes2.cuda()) - self.assertTrue(torch.allclose(ious_cuda.cpu(), expected_ious)) - - def test_iou_half_overlap_cpu(self): - boxes1 = torch.tensor([[0.5, 0.5, 1.0, 1.0, 0.0]], dtype=torch.float32) - boxes2 = torch.tensor([[0.25, 0.5, 0.5, 1.0, 0.0]], dtype=torch.float32) - expected_ious = torch.tensor([[0.5]], dtype=torch.float32) - ious = pairwise_iou_rotated(boxes1, boxes2) - self.assertTrue(torch.allclose(ious, expected_ious)) - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_iou_half_overlap_cuda(self): - boxes1 = torch.tensor([[0.5, 0.5, 1.0, 1.0, 0.0]], dtype=torch.float32) - boxes2 = torch.tensor([[0.25, 0.5, 0.5, 1.0, 0.0]], dtype=torch.float32) - expected_ious = torch.tensor([[0.5]], dtype=torch.float32) - ious_cuda = pairwise_iou_rotated(boxes1.cuda(), 
boxes2.cuda()) - self.assertTrue(torch.allclose(ious_cuda.cpu(), expected_ious)) - - def test_iou_precision(self): - for device in ["cpu"] + (["cuda"] if torch.cuda.is_available() else []): - boxes1 = torch.tensor([[565, 565, 10, 10.0, 0]], dtype=torch.float32, device=device) - boxes2 = torch.tensor([[565, 565, 10, 8.3, 0]], dtype=torch.float32, device=device) - iou = 8.3 / 10.0 - expected_ious = torch.tensor([[iou]], dtype=torch.float32) - ious = pairwise_iou_rotated(boxes1, boxes2) - self.assertTrue(torch.allclose(ious.cpu(), expected_ious)) - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_iou_too_many_boxes_cuda(self): - s1, s2 = 5, 1289035 - boxes1 = torch.zeros(s1, 5) - boxes2 = torch.zeros(s2, 5) - ious_cuda = pairwise_iou_rotated(boxes1.cuda(), boxes2.cuda()) - self.assertTupleEqual(tuple(ious_cuda.shape), (s1, s2)) - - def test_iou_extreme(self): - # Cause floating point issues in cuda kernels (#1266) - for device in ["cpu"] + (["cuda"] if torch.cuda.is_available() else []): - boxes1 = torch.tensor([[160.0, 153.0, 230.0, 23.0, -37.0]], device=device) - boxes2 = torch.tensor( - [ - [ - -1.117407639806935e17, - 1.3858420478349148e18, - 1000.0000610351562, - 1000.0000610351562, - 1612.0, - ] - ], - device=device, - ) - ious = pairwise_iou_rotated(boxes1, boxes2) - self.assertTrue(ious.min() >= 0, ious) - - def test_iou_issue_2154(self): - for device in ["cpu"] + (["cuda"] if torch.cuda.is_available() else []): - boxes1 = torch.tensor( - [ - [ - 296.6620178222656, - 458.73883056640625, - 23.515729904174805, - 47.677001953125, - 0.08795166015625, - ] - ], - device=device, - ) - boxes2 = torch.tensor( - [[296.66201, 458.73882000000003, 23.51573, 47.67702, 0.087951]], - device=device, - ) - ious = pairwise_iou_rotated(boxes1, boxes2) - expected_ious = torch.tensor([[1.0]], dtype=torch.float32) - self.assertTrue(torch.allclose(ious.cpu(), expected_ious)) - - def test_iou_issue_2167(self): - for device in ["cpu"] + (["cuda"] if torch.cuda.is_available() else []): - boxes1 = torch.tensor( - [ - [ - 2563.74462890625000000000, - 1436.79016113281250000000, - 2174.70336914062500000000, - 214.09500122070312500000, - 115.11834716796875000000, - ] - ], - device=device, - ) - boxes2 = torch.tensor( - [ - [ - 2563.74462890625000000000, - 1436.79028320312500000000, - 2174.70288085937500000000, - 214.09495544433593750000, - 115.11835479736328125000, - ] - ], - device=device, - ) - ious = pairwise_iou_rotated(boxes1, boxes2) - expected_ious = torch.tensor([[1.0]], dtype=torch.float32) - self.assertTrue(torch.allclose(ious.cpu(), expected_ious)) - - -class TestRotatedBoxesStructure(unittest.TestCase): - def test_clip_area_0_degree(self): - for _ in range(50): - num_boxes = 100 - boxes_5d = torch.zeros(num_boxes, 5) - boxes_5d[:, 0] = torch.FloatTensor(num_boxes).uniform_(-100, 500) - boxes_5d[:, 1] = torch.FloatTensor(num_boxes).uniform_(-100, 500) - boxes_5d[:, 2] = torch.FloatTensor(num_boxes).uniform_(0, 500) - boxes_5d[:, 3] = torch.FloatTensor(num_boxes).uniform_(0, 500) - # Convert from (x_ctr, y_ctr, w, h, 0) to (x1, y1, x2, y2) - boxes_4d = torch.zeros(num_boxes, 4) - boxes_4d[:, 0] = boxes_5d[:, 0] - boxes_5d[:, 2] / 2.0 - boxes_4d[:, 1] = boxes_5d[:, 1] - boxes_5d[:, 3] / 2.0 - boxes_4d[:, 2] = boxes_5d[:, 0] + boxes_5d[:, 2] / 2.0 - boxes_4d[:, 3] = boxes_5d[:, 1] + boxes_5d[:, 3] / 2.0 - - image_size = (500, 600) - test_boxes_4d = Boxes(boxes_4d) - test_boxes_5d = RotatedBoxes(boxes_5d) - # Before clip - areas_4d = test_boxes_4d.area() - areas_5d = 
test_boxes_5d.area() - self.assertTrue(torch.allclose(areas_4d, areas_5d, atol=1e-1, rtol=1e-5)) - # After clip - test_boxes_4d.clip(image_size) - test_boxes_5d.clip(image_size) - areas_4d = test_boxes_4d.area() - areas_5d = test_boxes_5d.area() - self.assertTrue(torch.allclose(areas_4d, areas_5d, atol=1e-1, rtol=1e-5)) - - def test_clip_area_arbitrary_angle(self): - num_boxes = 100 - boxes_5d = torch.zeros(num_boxes, 5) - boxes_5d[:, 0] = torch.FloatTensor(num_boxes).uniform_(-100, 500) - boxes_5d[:, 1] = torch.FloatTensor(num_boxes).uniform_(-100, 500) - boxes_5d[:, 2] = torch.FloatTensor(num_boxes).uniform_(0, 500) - boxes_5d[:, 3] = torch.FloatTensor(num_boxes).uniform_(0, 500) - boxes_5d[:, 4] = torch.FloatTensor(num_boxes).uniform_(-1800, 1800) - clip_angle_threshold = random.uniform(0, 180) - - image_size = (500, 600) - test_boxes_5d = RotatedBoxes(boxes_5d) - # Before clip - areas_before = test_boxes_5d.area() - # After clip - test_boxes_5d.clip(image_size, clip_angle_threshold) - areas_diff = test_boxes_5d.area() - areas_before - - # the areas should only decrease after clipping - self.assertTrue(torch.all(areas_diff <= 0)) - # whenever the box is clipped (thus the area shrinks), - # the angle for the box must be within the clip_angle_threshold - # Note that the clip function will normalize the angle range - # to be within (-180, 180] - - self.assertTrue( - torch.all( - torch.abs(test_boxes_5d.tensor[:, 4][torch.where(areas_diff < 0)]) - < clip_angle_threshold - ) - ) - - def test_normalize_angles(self): - # torch.manual_seed(0) - for _ in range(50): - num_boxes = 100 - boxes_5d = torch.zeros(num_boxes, 5) - boxes_5d[:, 0] = torch.FloatTensor(num_boxes).uniform_(-100, 500) - boxes_5d[:, 1] = torch.FloatTensor(num_boxes).uniform_(-100, 500) - boxes_5d[:, 2] = torch.FloatTensor(num_boxes).uniform_(0, 500) - boxes_5d[:, 3] = torch.FloatTensor(num_boxes).uniform_(0, 500) - boxes_5d[:, 4] = torch.FloatTensor(num_boxes).uniform_(-1800, 1800) - rotated_boxes = RotatedBoxes(boxes_5d) - normalized_boxes = rotated_boxes.clone() - normalized_boxes.normalize_angles() - self.assertTrue(torch.all(normalized_boxes.tensor[:, 4] >= -180)) - self.assertTrue(torch.all(normalized_boxes.tensor[:, 4] < 180)) - # x, y, w, h should not change - self.assertTrue(torch.allclose(boxes_5d[:, :4], normalized_boxes.tensor[:, :4])) - # the cos/sin values of the angles should stay the same - - self.assertTrue( - torch.allclose( - torch.cos(boxes_5d[:, 4] * math.pi / 180), - torch.cos(normalized_boxes.tensor[:, 4] * math.pi / 180), - atol=1e-5, - ) - ) - - self.assertTrue( - torch.allclose( - torch.sin(boxes_5d[:, 4] * math.pi / 180), - torch.sin(normalized_boxes.tensor[:, 4] * math.pi / 180), - atol=1e-5, - ) - ) - - def test_pairwise_iou_0_degree(self): - for device in ["cpu"] + (["cuda"] if torch.cuda.is_available() else []): - boxes1 = torch.tensor( - [[0.5, 0.5, 1.0, 1.0, 0.0], [0.5, 0.5, 1.0, 1.0, 0.0]], - dtype=torch.float32, - device=device, - ) - boxes2 = torch.tensor( - [ - [0.5, 0.5, 1.0, 1.0, 0.0], - [0.25, 0.5, 0.5, 1.0, 0.0], - [0.5, 0.25, 1.0, 0.5, 0.0], - [0.25, 0.25, 0.5, 0.5, 0.0], - [0.75, 0.75, 0.5, 0.5, 0.0], - [1.0, 1.0, 1.0, 1.0, 0.0], - ], - dtype=torch.float32, - device=device, - ) - expected_ious = torch.tensor( - [ - [1.0, 0.5, 0.5, 0.25, 0.25, 0.25 / (2 - 0.25)], - [1.0, 0.5, 0.5, 0.25, 0.25, 0.25 / (2 - 0.25)], - ], - dtype=torch.float32, - device=device, - ) - ious = pairwise_iou(RotatedBoxes(boxes1), RotatedBoxes(boxes2)) - self.assertTrue(torch.allclose(ious, expected_ious)) - - def 
test_pairwise_iou_45_degrees(self): - for device in ["cpu"] + (["cuda"] if torch.cuda.is_available() else []): - boxes1 = torch.tensor( - [ - [1, 1, math.sqrt(2), math.sqrt(2), 45], - [1, 1, 2 * math.sqrt(2), 2 * math.sqrt(2), -45], - ], - dtype=torch.float32, - device=device, - ) - boxes2 = torch.tensor([[1, 1, 2, 2, 0]], dtype=torch.float32, device=device) - expected_ious = torch.tensor([[0.5], [0.5]], dtype=torch.float32, device=device) - ious = pairwise_iou(RotatedBoxes(boxes1), RotatedBoxes(boxes2)) - self.assertTrue(torch.allclose(ious, expected_ious)) - - def test_pairwise_iou_orthogonal(self): - for device in ["cpu"] + (["cuda"] if torch.cuda.is_available() else []): - boxes1 = torch.tensor([[5, 5, 10, 6, 55]], dtype=torch.float32, device=device) - boxes2 = torch.tensor([[5, 5, 10, 6, -35]], dtype=torch.float32, device=device) - iou = (6.0 * 6.0) / (6.0 * 6.0 + 4.0 * 6.0 + 4.0 * 6.0) - expected_ious = torch.tensor([[iou]], dtype=torch.float32, device=device) - ious = pairwise_iou(RotatedBoxes(boxes1), RotatedBoxes(boxes2)) - self.assertTrue(torch.allclose(ious, expected_ious)) - - def test_pairwise_iou_large_close_boxes(self): - for device in ["cpu"] + (["cuda"] if torch.cuda.is_available() else []): - boxes1 = torch.tensor( - [[299.500000, 417.370422, 600.000000, 364.259186, 27.1828]], - dtype=torch.float32, - device=device, - ) - boxes2 = torch.tensor( - [[299.500000, 417.370422, 600.000000, 364.259155, 27.1828]], - dtype=torch.float32, - device=device, - ) - iou = 364.259155 / 364.259186 - expected_ious = torch.tensor([[iou]], dtype=torch.float32, device=device) - ious = pairwise_iou(RotatedBoxes(boxes1), RotatedBoxes(boxes2)) - self.assertTrue(torch.allclose(ious, expected_ious)) - - def test_pairwise_iou_many_boxes(self): - for device in ["cpu"] + (["cuda"] if torch.cuda.is_available() else []): - num_boxes1 = 100 - num_boxes2 = 200 - boxes1 = torch.stack( - [ - torch.tensor( - [5 + 20 * i, 5 + 20 * i, 10, 10, 0], - dtype=torch.float32, - device=device, - ) - for i in range(num_boxes1) - ] - ) - boxes2 = torch.stack( - [ - torch.tensor( - [5 + 20 * i, 5 + 20 * i, 10, 1 + 9 * i / num_boxes2, 0], - dtype=torch.float32, - device=device, - ) - for i in range(num_boxes2) - ] - ) - expected_ious = torch.zeros(num_boxes1, num_boxes2, dtype=torch.float32, device=device) - for i in range(min(num_boxes1, num_boxes2)): - expected_ious[i][i] = (1 + 9 * i / num_boxes2) / 10.0 - ious = pairwise_iou(RotatedBoxes(boxes1), RotatedBoxes(boxes2)) - self.assertTrue(torch.allclose(ious, expected_ious)) - - def test_pairwise_iou_issue1207_simplified(self): - for device in ["cpu"] + (["cuda"] if torch.cuda.is_available() else []): - # Simplified test case of D2-issue-1207 - boxes1 = torch.tensor([[3, 3, 8, 2, -45.0]], device=device) - boxes2 = torch.tensor([[6, 0, 8, 2, -45.0]], device=device) - iou = 0.0 - expected_ious = torch.tensor([[iou]], dtype=torch.float32, device=device) - - ious = pairwise_iou(RotatedBoxes(boxes1), RotatedBoxes(boxes2)) - self.assertTrue(torch.allclose(ious, expected_ious)) - - def test_pairwise_iou_issue1207(self): - for device in ["cpu"] + (["cuda"] if torch.cuda.is_available() else []): - # The original test case in D2-issue-1207 - boxes1 = torch.tensor([[160.0, 153.0, 230.0, 23.0, -37.0]], device=device) - boxes2 = torch.tensor([[190.0, 127.0, 80.0, 21.0, -46.0]], device=device) - - iou = 0.0 - expected_ious = torch.tensor([[iou]], dtype=torch.float32, device=device) - - ious = pairwise_iou(RotatedBoxes(boxes1), RotatedBoxes(boxes2)) - 
self.assertTrue(torch.allclose(ious, expected_ious)) - - def test_empty_cat(self): - x = RotatedBoxes.cat([]) - self.assertTrue(x.tensor.shape, (0, 5)) - - def test_scriptability(self): - def func(x): - boxes = RotatedBoxes(x) - test = boxes.to(torch.device("cpu")).tensor - return boxes.area(), test - - f = torch.jit.script(func) - f = reload_script_model(f) - f(torch.rand((3, 5))) - - data = torch.rand((3, 5)) - - def func_cat(x: torch.Tensor): - boxes1 = RotatedBoxes(x) - boxes2 = RotatedBoxes(x) - # this is not supported by torchscript for now. - # boxes3 = RotatedBoxes.cat([boxes1, boxes2]) - boxes3 = boxes1.cat([boxes1, boxes2]) - return boxes3 - - f = torch.jit.script(func_cat) - script_box = f(data) - self.assertTrue(torch.equal(torch.cat([data, data]), script_box.tensor)) - - -def benchmark_rotated_iou(): - num_boxes1 = 200 - num_boxes2 = 500 - boxes1 = torch.stack( - [ - torch.tensor([5 + 20 * i, 5 + 20 * i, 10, 10, 0], dtype=torch.float32) - for i in range(num_boxes1) - ] - ) - boxes2 = torch.stack( - [ - torch.tensor( - [5 + 20 * i, 5 + 20 * i, 10, 1 + 9 * i / num_boxes2, 0], - dtype=torch.float32, - ) - for i in range(num_boxes2) - ] - ) - - def func(dev, n=1): - b1 = boxes1.to(device=dev) - b2 = boxes2.to(device=dev) - - def bench(): - for _ in range(n): - pairwise_iou_rotated(b1, b2) - if dev.type == "cuda": - torch.cuda.synchronize() - - return bench - - # only run it once per timed loop, since it's slow - args = [{"dev": torch.device("cpu"), "n": 1}] - if torch.cuda.is_available(): - args.append({"dev": torch.device("cuda"), "n": 10}) - - benchmark(func, "rotated_iou", args, warmup_iters=3) - - -if __name__ == "__main__": - unittest.main() - benchmark_rotated_iou() diff --git a/spaces/nyanko7/sd-diffusers-webui/modules/prompt_parser.py b/spaces/nyanko7/sd-diffusers-webui/modules/prompt_parser.py deleted file mode 100644 index 42cbbb3038612a44571765905e8526553f462663..0000000000000000000000000000000000000000 --- a/spaces/nyanko7/sd-diffusers-webui/modules/prompt_parser.py +++ /dev/null @@ -1,391 +0,0 @@ - -import re -import math -import numpy as np -import torch - -# Code from https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/8e2aeee4a127b295bfc880800e4a312e0f049b85, modified. - -class PromptChunk: - """ - This object contains token ids, weight (multipliers:1.4) and textual inversion embedding info for a chunk of prompt. - If a prompt is short, it is represented by one PromptChunk, otherwise, multiple are necessary. - Each PromptChunk contains an exact amount of tokens - 77, which includes one for start and end token, - so just 75 tokens from prompt. - """ - - def __init__(self): - self.tokens = [] - self.multipliers = [] - self.fixes = [] - - -class FrozenCLIPEmbedderWithCustomWordsBase(torch.nn.Module): - """A pytorch module that is a wrapper for FrozenCLIPEmbedder module. it enhances FrozenCLIPEmbedder, making it possible to - have unlimited prompt length and assign weights to tokens in prompt. 
- """ - - def __init__(self, text_encoder, enable_emphasis=True): - super().__init__() - - self.device = lambda: text_encoder.device - self.enable_emphasis = enable_emphasis - """Original FrozenCLIPEmbedder module; can also be FrozenOpenCLIPEmbedder or xlmr.BertSeriesModelWithTransformation, - depending on model.""" - - self.chunk_length = 75 - - def empty_chunk(self): - """creates an empty PromptChunk and returns it""" - - chunk = PromptChunk() - chunk.tokens = [self.id_start] + [self.id_end] * (self.chunk_length + 1) - chunk.multipliers = [1.0] * (self.chunk_length + 2) - return chunk - - def get_target_prompt_token_count(self, token_count): - """returns the maximum number of tokens a prompt of a known length can have before it requires one more PromptChunk to be represented""" - - return math.ceil(max(token_count, 1) / self.chunk_length) * self.chunk_length - - def tokenize_line(self, line): - """ - this transforms a single prompt into a list of PromptChunk objects - as many as needed to - represent the prompt. - Returns the list and the total number of tokens in the prompt. - """ - - if self.enable_emphasis: - parsed = parse_prompt_attention(line) - else: - parsed = [[line, 1.0]] - - tokenized = self.tokenize([text for text, _ in parsed]) - - chunks = [] - chunk = PromptChunk() - token_count = 0 - last_comma = -1 - - def next_chunk(is_last=False): - """puts current chunk into the list of results and produces the next one - empty; - if is_last is true, tokens tokens at the end won't add to token_count""" - nonlocal token_count - nonlocal last_comma - nonlocal chunk - - if is_last: - token_count += len(chunk.tokens) - else: - token_count += self.chunk_length - - to_add = self.chunk_length - len(chunk.tokens) - if to_add > 0: - chunk.tokens += [self.id_end] * to_add - chunk.multipliers += [1.0] * to_add - - chunk.tokens = [self.id_start] + chunk.tokens + [self.id_end] - chunk.multipliers = [1.0] + chunk.multipliers + [1.0] - - last_comma = -1 - chunks.append(chunk) - chunk = PromptChunk() - - comma_padding_backtrack = 20 # default value in https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/6cff4401824299a983c8e13424018efc347b4a2b/modules/shared.py#L410 - for tokens, (text, weight) in zip(tokenized, parsed): - if text == "BREAK" and weight == -1: - next_chunk() - continue - - position = 0 - while position < len(tokens): - token = tokens[position] - - if token == self.comma_token: - last_comma = len(chunk.tokens) - - # this is when we are at the end of alloted 75 tokens for the current chunk, and the current token is not a comma. opts.comma_padding_backtrack - # is a setting that specifies that if there is a comma nearby, the text after the comma should be moved out of this chunk and into the next. 
- elif ( - comma_padding_backtrack != 0 - and len(chunk.tokens) == self.chunk_length - and last_comma != -1 - and len(chunk.tokens) - last_comma <= comma_padding_backtrack - ): - break_location = last_comma + 1 - - reloc_tokens = chunk.tokens[break_location:] - reloc_mults = chunk.multipliers[break_location:] - - chunk.tokens = chunk.tokens[:break_location] - chunk.multipliers = chunk.multipliers[:break_location] - - next_chunk() - chunk.tokens = reloc_tokens - chunk.multipliers = reloc_mults - - if len(chunk.tokens) == self.chunk_length: - next_chunk() - - chunk.tokens.append(token) - chunk.multipliers.append(weight) - position += 1 - - if len(chunk.tokens) > 0 or len(chunks) == 0: - next_chunk(is_last=True) - - return chunks, token_count - - def process_texts(self, texts): - """ - Accepts a list of texts and calls tokenize_line() on each, with cache. Returns the list of results and maximum - length, in tokens, of all texts. - """ - - token_count = 0 - - cache = {} - batch_chunks = [] - for line in texts: - if line in cache: - chunks = cache[line] - else: - chunks, current_token_count = self.tokenize_line(line) - token_count = max(current_token_count, token_count) - - cache[line] = chunks - - batch_chunks.append(chunks) - - return batch_chunks, token_count - - def forward(self, texts): - """ - Accepts an array of texts; Passes texts through transformers network to create a tensor with numerical representation of those texts. - Returns a tensor with shape of (B, T, C), where B is length of the array; T is length, in tokens, of texts (including padding) - T will - be a multiple of 77; and C is dimensionality of each token - for SD1 it's 768, and for SD2 it's 1024. - An example shape returned by this function can be: (2, 77, 768). - Webui usually sends just one text at a time through this function - the only time when texts is an array with more than one elemenet - is when you do prompt editing: "a picture of a [cat:dog:0.4] eating ice cream" - """ - - batch_chunks, token_count = self.process_texts(texts) - chunk_count = max([len(x) for x in batch_chunks]) - - zs = [] - ts = [] - for i in range(chunk_count): - batch_chunk = [ - chunks[i] if i < len(chunks) else self.empty_chunk() - for chunks in batch_chunks - ] - - tokens = [x.tokens for x in batch_chunk] - multipliers = [x.multipliers for x in batch_chunk] - # self.embeddings.fixes = [x.fixes for x in batch_chunk] - - # for fixes in self.embeddings.fixes: - # for position, embedding in fixes: - # used_embeddings[embedding.name] = embedding - - z = self.process_tokens(tokens, multipliers) - zs.append(z) - ts.append(tokens) - - return np.hstack(ts), torch.hstack(zs) - - def process_tokens(self, remade_batch_tokens, batch_multipliers): - """ - sends one single prompt chunk to be encoded by transformers neural network. - remade_batch_tokens is a batch of tokens - a list, where every element is a list of tokens; usually - there are exactly 77 tokens in the list. batch_multipliers is the same but for multipliers instead of tokens. - Multipliers are used to give more or less weight to the outputs of transformers network. Each multiplier - corresponds to one token. - """ - tokens = torch.asarray(remade_batch_tokens).to(self.device()) - - # this is for SD2: SD1 uses the same token for padding and end of text, while SD2 uses different ones. 
- if self.id_end != self.id_pad: - for batch_pos in range(len(remade_batch_tokens)): - index = remade_batch_tokens[batch_pos].index(self.id_end) - tokens[batch_pos, index + 1 : tokens.shape[1]] = self.id_pad - - z = self.encode_with_transformers(tokens) - - # restoring original mean is likely not correct, but it seems to work well to prevent artifacts that happen otherwise - batch_multipliers = torch.asarray(batch_multipliers).to(self.device()) - original_mean = z.mean() - z = z * batch_multipliers.reshape(batch_multipliers.shape + (1,)).expand(z.shape) - new_mean = z.mean() - z = z * (original_mean / new_mean) - - return z - - -class FrozenCLIPEmbedderWithCustomWords(FrozenCLIPEmbedderWithCustomWordsBase): - def __init__(self, tokenizer, text_encoder): - super().__init__(text_encoder) - self.tokenizer = tokenizer - self.text_encoder = text_encoder - - vocab = self.tokenizer.get_vocab() - - self.comma_token = vocab.get(",", None) - - self.token_mults = {} - tokens_with_parens = [ - (k, v) - for k, v in vocab.items() - if "(" in k or ")" in k or "[" in k or "]" in k - ] - for text, ident in tokens_with_parens: - mult = 1.0 - for c in text: - if c == "[": - mult /= 1.1 - if c == "]": - mult *= 1.1 - if c == "(": - mult *= 1.1 - if c == ")": - mult /= 1.1 - - if mult != 1.0: - self.token_mults[ident] = mult - - self.id_start = self.tokenizer.bos_token_id - self.id_end = self.tokenizer.eos_token_id - self.id_pad = self.id_end - - def tokenize(self, texts): - tokenized = self.tokenizer( - texts, truncation=False, add_special_tokens=False - )["input_ids"] - - return tokenized - - def encode_with_transformers(self, tokens): - CLIP_stop_at_last_layers = 1 - tokens = tokens.to(self.text_encoder.device) - outputs = self.text_encoder(tokens, output_hidden_states=True) - - if CLIP_stop_at_last_layers > 1: - z = outputs.hidden_states[-CLIP_stop_at_last_layers] - z = self.text_encoder.text_model.final_layer_norm(z) - else: - z = outputs.last_hidden_state - - return z - - -re_attention = re.compile( - r""" -\\\(| -\\\)| -\\\[| -\\]| -\\\\| -\\| -\(| -\[| -:([+-]?[.\d]+)\)| -\)| -]| -[^\\()\[\]:]+| -: -""", - re.X, -) - -re_break = re.compile(r"\s*\bBREAK\b\s*", re.S) - - -def parse_prompt_attention(text): - """ - Parses a string with attention tokens and returns a list of pairs: text and its associated weight. 
- Accepted tokens are: - (abc) - increases attention to abc by a multiplier of 1.1 - (abc:3.12) - increases attention to abc by a multiplier of 3.12 - [abc] - decreases attention to abc by a multiplier of 1.1 - \( - literal character '(' - \[ - literal character '[' - \) - literal character ')' - \] - literal character ']' - \\ - literal character '\' - anything else - just text - - >>> parse_prompt_attention('normal text') - [['normal text', 1.0]] - >>> parse_prompt_attention('an (important) word') - [['an ', 1.0], ['important', 1.1], [' word', 1.0]] - >>> parse_prompt_attention('(unbalanced') - [['unbalanced', 1.1]] - >>> parse_prompt_attention('\(literal\]') - [['(literal]', 1.0]] - >>> parse_prompt_attention('(unnecessary)(parens)') - [['unnecessaryparens', 1.1]] - >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).') - [['a ', 1.0], - ['house', 1.5730000000000004], - [' ', 1.1], - ['on', 1.0], - [' a ', 1.1], - ['hill', 0.55], - [', sun, ', 1.1], - ['sky', 1.4641000000000006], - ['.', 1.1]] - """ - - res = [] - round_brackets = [] - square_brackets = [] - - round_bracket_multiplier = 1.1 - square_bracket_multiplier = 1 / 1.1 - - def multiply_range(start_position, multiplier): - for p in range(start_position, len(res)): - res[p][1] *= multiplier - - for m in re_attention.finditer(text): - text = m.group(0) - weight = m.group(1) - - if text.startswith("\\"): - res.append([text[1:], 1.0]) - elif text == "(": - round_brackets.append(len(res)) - elif text == "[": - square_brackets.append(len(res)) - elif weight is not None and len(round_brackets) > 0: - multiply_range(round_brackets.pop(), float(weight)) - elif text == ")" and len(round_brackets) > 0: - multiply_range(round_brackets.pop(), round_bracket_multiplier) - elif text == "]" and len(square_brackets) > 0: - multiply_range(square_brackets.pop(), square_bracket_multiplier) - else: - parts = re.split(re_break, text) - for i, part in enumerate(parts): - if i > 0: - res.append(["BREAK", -1]) - res.append([part, 1.0]) - - for pos in round_brackets: - multiply_range(pos, round_bracket_multiplier) - - for pos in square_brackets: - multiply_range(pos, square_bracket_multiplier) - - if len(res) == 0: - res = [["", 1.0]] - - # merge runs of identical weights - i = 0 - while i + 1 < len(res): - if res[i][1] == res[i + 1][1]: - res[i][0] += res[i + 1][0] - res.pop(i + 1) - else: - i += 1 - - return res diff --git a/spaces/offside/offsidespace/index.html b/spaces/offside/offsidespace/index.html deleted file mode 100644 index 1a095e8d31bb2d475790ec498924b97916cbd6e2..0000000000000000000000000000000000000000 --- a/spaces/offside/offsidespace/index.html +++ /dev/null @@ -1,24 +0,0 @@ - - - - - - My static Space - - - -
-      Welcome to your static Space!
-      You can modify this app directly by editing index.html in the Files and versions tab.
-      Also don't forget to check the Spaces documentation.
                  - - diff --git a/spaces/om-app/Promt-to-Image-diffusions/README.md b/spaces/om-app/Promt-to-Image-diffusions/README.md deleted file mode 100644 index 2db787502e6980ff1d80eed060db76f1201e4589..0000000000000000000000000000000000000000 --- a/spaces/om-app/Promt-to-Image-diffusions/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Magic Prompt -emoji: 🎆 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: om-app/magic-diffusion ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/os1187/docquery/app.py b/spaces/os1187/docquery/app.py deleted file mode 100644 index b213afce03869f8a6067e2c885b8ac823e911529..0000000000000000000000000000000000000000 --- a/spaces/os1187/docquery/app.py +++ /dev/null @@ -1,423 +0,0 @@ -import os - -os.environ["TOKENIZERS_PARALLELISM"] = "false" - -from PIL import Image, ImageDraw -import traceback - -import gradio as gr - -import torch -from docquery import pipeline -from docquery.document import load_document, ImageDocument -from docquery.ocr_reader import get_ocr_reader - - -def ensure_list(x): - if isinstance(x, list): - return x - else: - return [x] - - -CHECKPOINTS = { - "LayoutLMv1": "impira/layoutlm-document-qa", - "LayoutLMv1 for Invoices": "impira/layoutlm-invoices", - "Donut": "naver-clova-ix/donut-base-finetuned-docvqa", -} - -PIPELINES = {} - - -def construct_pipeline(task, model): - global PIPELINES - if model in PIPELINES: - return PIPELINES[model] - - device = "cuda" if torch.cuda.is_available() else "cpu" - ret = pipeline(task=task, model=CHECKPOINTS[model], device=device) - PIPELINES[model] = ret - return ret - - -def run_pipeline(model, question, document, top_k): - pipeline = construct_pipeline("document-question-answering", model) - return pipeline(question=question, **document.context, top_k=top_k) - - -# TODO: Move into docquery -# TODO: Support words past the first page (or window?) 
-def lift_word_boxes(document, page): - return document.context["image"][page][1] - - -def expand_bbox(word_boxes): - if len(word_boxes) == 0: - return None - - min_x, min_y, max_x, max_y = zip(*[x[1] for x in word_boxes]) - min_x, min_y, max_x, max_y = [min(min_x), min(min_y), max(max_x), max(max_y)] - return [min_x, min_y, max_x, max_y] - - -# LayoutLM boxes are normalized to 0, 1000 -def normalize_bbox(box, width, height, padding=0.005): - min_x, min_y, max_x, max_y = [c / 1000 for c in box] - if padding != 0: - min_x = max(0, min_x - padding) - min_y = max(0, min_y - padding) - max_x = min(max_x + padding, 1) - max_y = min(max_y + padding, 1) - return [min_x * width, min_y * height, max_x * width, max_y * height] - - -examples = [ - [ - "invoice.png", - "What is the invoice number?", - ], - [ - "contract.jpeg", - "What is the purchase amount?", - ], - [ - "statement.png", - "What are net sales for 2020?", - ], - # [ - # "docquery.png", - # "How many likes does the space have?", - # ], - # [ - # "hacker_news.png", - # "What is the title of post number 5?", - # ], -] - -question_files = { - "What are net sales for 2020?": "statement.pdf", - "How many likes does the space have?": "https://huggingface.co/spaces/impira/docquery", - "What is the title of post number 5?": "https://news.ycombinator.com", -} - - -def process_path(path): - error = None - if path: - try: - document = load_document(path) - return ( - document, - gr.update(visible=True, value=document.preview), - gr.update(visible=True), - gr.update(visible=False, value=None), - gr.update(visible=False, value=None), - None, - ) - except Exception as e: - traceback.print_exc() - error = str(e) - return ( - None, - gr.update(visible=False, value=None), - gr.update(visible=False), - gr.update(visible=False, value=None), - gr.update(visible=False, value=None), - gr.update(visible=True, value=error) if error is not None else None, - None, - ) - - -def process_upload(file): - if file: - return process_path(file.name) - else: - return ( - None, - gr.update(visible=False, value=None), - gr.update(visible=False), - gr.update(visible=False, value=None), - gr.update(visible=False, value=None), - None, - ) - - -colors = ["#64A087", "black", "black"] - - -def process_question(question, document, model=list(CHECKPOINTS.keys())[0]): - if not question or document is None: - return None, None, None - - text_value = None - predictions = run_pipeline(model, question, document, 3) - pages = [x.copy().convert("RGB") for x in document.preview] - for i, p in enumerate(ensure_list(predictions)): - if i == 0: - text_value = p["answer"] - else: - # Keep the code around to produce multiple boxes, but only show the top - # prediction for now - break - - if "word_ids" in p: - image = pages[p["page"]] - draw = ImageDraw.Draw(image, "RGBA") - word_boxes = lift_word_boxes(document, p["page"]) - x1, y1, x2, y2 = normalize_bbox( - expand_bbox([word_boxes[i] for i in p["word_ids"]]), - image.width, - image.height, - ) - draw.rectangle(((x1, y1), (x2, y2)), fill=(0, 255, 0, int(0.4 * 255))) - - return ( - gr.update(visible=True, value=pages), - gr.update(visible=True, value=predictions), - gr.update( - visible=True, - value=text_value, - ), - ) - - -def load_example_document(img, question, model): - if img is not None: - if question in question_files: - document = load_document(question_files[question]) - else: - document = ImageDocument(Image.fromarray(img), get_ocr_reader()) - preview, answer, answer_text = process_question(question, document, model) - return 
document, question, preview, gr.update(visible=True), answer, answer_text - else: - return None, None, None, gr.update(visible=False), None, None - - -CSS = """ -#question input { - font-size: 16px; -} -#url-textbox { - padding: 0 !important; -} -#short-upload-box .w-full { - min-height: 10rem !important; -} -/* I think something like this can be used to re-shape - * the table - */ -/* -.gr-samples-table tr { - display: inline; -} -.gr-samples-table .p-2 { - width: 100px; -} -*/ -#select-a-file { - width: 100%; -} -#file-clear { - padding-top: 2px !important; - padding-bottom: 2px !important; - padding-left: 8px !important; - padding-right: 8px !important; - margin-top: 10px; -} -.gradio-container .gr-button-primary { - background: linear-gradient(180deg, #FAED27 0%, #FAED27 100%); - border: 1px solid #000000; - border-radius: 8px; - color: #000000; -} -.gradio-container.dark button#submit-button { - background: linear-gradient(180deg, #FAED27 0%, #FAED27 100%); - border: 1px solid #000000; - border-radius: 8px; - color: #000000 -} - -table.gr-samples-table tr td { - border: none; - outline: none; -} - -table.gr-samples-table tr td:first-of-type { - width: 0%; -} - -div#short-upload-box div.absolute { - display: none !important; -} - -gradio-app > div > div > div > div.w-full > div, .gradio-app > div > div > div > div.w-full > div { - gap: 0px 2%; -} - -gradio-app div div div div.w-full, .gradio-app div div div div.w-full { - gap: 0px; -} - -gradio-app h2, .gradio-app h2 { - padding-top: 10px; -} - -#answer { - overflow-y: scroll; - color: white; - background: #666; - border-color: #666; - font-size: 20px; - font-weight: bold; -} - -#answer span { - color: white; -} - -#answer textarea { - color:white; - background: #777; - border-color: #777; - font-size: 18px; -} - -#url-error input { - color: red; -} -""" - -with gr.Blocks(css=CSS) as demo: - gr.Markdown() - gr.Markdown( - - ) - - document = gr.Variable() - example_question = gr.Textbox(visible=False) - example_image = gr.Image(visible=False) - - with gr.Row(equal_height=True): - with gr.Column(): - with gr.Row(): - gr.Markdown("## 1. Select a file", elem_id="select-a-file") - img_clear_button = gr.Button( - "Clear", variant="secondary", elem_id="file-clear", visible=False - ) - image = gr.Gallery(visible=False) - with gr.Row(equal_height=True): - with gr.Column(): - with gr.Row(): - url = gr.Textbox( - show_label=False, - placeholder="URL", - lines=1, - max_lines=1, - elem_id="url-textbox", - ) - submit = gr.Button("Get") - url_error = gr.Textbox( - visible=False, - elem_id="url-error", - max_lines=1, - interactive=False, - label="Error", - ) - gr.Markdown("— or —") - upload = gr.File(label=None, interactive=True, elem_id="short-upload-box") - gr.Examples( - examples=examples, - inputs=[example_image, example_question], - ) - - with gr.Column() as col: - gr.Markdown("## 2. Ask a question") - question = gr.Textbox( - label="Question", - placeholder="e.g. 
What is the invoice number?", - lines=1, - max_lines=1, - ) - model = gr.Radio( - choices=list(CHECKPOINTS.keys()), - value=list(CHECKPOINTS.keys())[0], - label="Model", - ) - - with gr.Row(): - clear_button = gr.Button("Clear", variant="secondary") - submit_button = gr.Button( - "Submit", variant="primary", elem_id="submit-button" - ) - with gr.Column(): - output_text = gr.Textbox( - label="Top Answer", visible=False, elem_id="answer" - ) - output = gr.JSON(label="Output", visible=False) - - for cb in [img_clear_button, clear_button]: - cb.click( - lambda _: ( - gr.update(visible=False, value=None), - None, - gr.update(visible=False, value=None), - gr.update(visible=False, value=None), - gr.update(visible=False), - None, - None, - None, - gr.update(visible=False, value=None), - None, - ), - inputs=clear_button, - outputs=[ - image, - document, - output, - output_text, - img_clear_button, - example_image, - upload, - url, - url_error, - question, - ], - ) - - upload.change( - fn=process_upload, - inputs=[upload], - outputs=[document, image, img_clear_button, output, output_text, url_error], - ) - submit.click( - fn=process_path, - inputs=[url], - outputs=[document, image, img_clear_button, output, output_text, url_error], - ) - - question.submit( - fn=process_question, - inputs=[question, document, model], - outputs=[image, output, output_text], - ) - - submit_button.click( - process_question, - inputs=[question, document, model], - outputs=[image, output, output_text], - ) - - model.change( - process_question, - inputs=[question, document, model], - outputs=[image, output, output_text], - ) - - example_image.change( - fn=load_example_document, - inputs=[example_image, example_question, model], - outputs=[document, question, image, img_clear_button, output, output_text], - ) - -if __name__ == "__main__": - demo.launch(enable_queue=False) \ No newline at end of file diff --git a/spaces/osanseviero/food_classifier_v1/README.md b/spaces/osanseviero/food_classifier_v1/README.md deleted file mode 100644 index e4083181b56e05c3534a6618a5f79aa51e677ba9..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/food_classifier_v1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Food_classifier_v1 -emoji: 🐢 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/unidiffuser.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/unidiffuser.md deleted file mode 100644 index cc59b168711cc49efc2183478ddab7b5c87bd7c4..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/unidiffuser.md +++ /dev/null @@ -1,200 +0,0 @@ - - -# UniDiffuser - -The UniDiffuser model was proposed in [One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale](https://huggingface.co/papers/2303.06555) by Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu. - -The abstract from the [paper](https://arxiv.org/abs/2303.06555) is: - -*This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. 
Our key insight is -- learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e. timesteps) can be different for different modalities. Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model -- perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models to handle input types of different modalities. Implemented on large-scale paired image-text data, UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead. In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to the bespoken models (e.g., Stable Diffusion and DALL-E 2) in representative tasks (e.g., text-to-image generation).* - -You can find the original codebase at [thu-ml/unidiffuser](https://github.com/thu-ml/unidiffuser) and additional checkpoints at [thu-ml](https://huggingface.co/thu-ml). - - - -There is currently an issue on PyTorch 1.X where the output images are all black or the pixel values become `NaNs`. This issue can be mitigated by switching to PyTorch 2.X. - - - -This pipeline was contributed by [dg845](https://github.com/dg845). ❤️ - -## Usage Examples - -Because the UniDiffuser model is trained to model the joint distribution of (image, text) pairs, it is capable of performing a diverse range of generation tasks: - -### Unconditional Image and Text Generation - -Unconditional generation (where we start from only latents sampled from a standard Gaussian prior) from a [`UniDiffuserPipeline`] will produce a (image, text) pair: - -```python -import torch - -from diffusers import UniDiffuserPipeline - -device = "cuda" -model_id_or_path = "thu-ml/unidiffuser-v1" -pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) -pipe.to(device) - -# Unconditional image and text generation. The generation task is automatically inferred. -sample = pipe(num_inference_steps=20, guidance_scale=8.0) -image = sample.images[0] -text = sample.text[0] -image.save("unidiffuser_joint_sample_image.png") -print(text) -``` - -This is also called "joint" generation in the UniDiffusers paper, since we are sampling from the joint image-text distribution. - -Note that the generation task is inferred from the inputs used when calling the pipeline. -It is also possible to manually specify the unconditional generation task ("mode") manually with [`UniDiffuserPipeline.set_joint_mode`]: - -```python -# Equivalent to the above. -pipe.set_joint_mode() -sample = pipe(num_inference_steps=20, guidance_scale=8.0) -``` - -When the mode is set manually, subsequent calls to the pipeline will use the set mode without attempting the infer the mode. -You can reset the mode with [`UniDiffuserPipeline.reset_mode`], after which the pipeline will once again infer the mode. 
- -You can also generate only an image or only text (which the UniDiffuser paper calls "marginal" generation since we sample from the marginal distribution of images and text, respectively): - -```python -# Unlike other generation tasks, image-only and text-only generation don't use classifier-free guidance -# Image-only generation -pipe.set_image_mode() -sample_image = pipe(num_inference_steps=20).images[0] -# Text-only generation -pipe.set_text_mode() -sample_text = pipe(num_inference_steps=20).text[0] -``` - -### Text-to-Image Generation - -UniDiffuser is also capable of sampling from conditional distributions; that is, the distribution of images conditioned on a text prompt or the distribution of texts conditioned on an image. -Here is an example of sampling from the conditional image distribution (text-to-image generation or text-conditioned image generation): - -```python -import torch - -from diffusers import UniDiffuserPipeline - -device = "cuda" -model_id_or_path = "thu-ml/unidiffuser-v1" -pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) -pipe.to(device) - -# Text-to-image generation -prompt = "an elephant under the sea" - -sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) -t2i_image = sample.images[0] -t2i_image.save("unidiffuser_text2img_sample_image.png") -``` - -The `text2img` mode requires that either an input `prompt` or `prompt_embeds` be supplied. You can set the `text2img` mode manually with [`UniDiffuserPipeline.set_text_to_image_mode`]. - -### Image-to-Text Generation - -Similarly, UniDiffuser can also produce text samples given an image (image-to-text or image-conditioned text generation): - -```python -import torch - -from diffusers import UniDiffuserPipeline -from diffusers.utils import load_image - -device = "cuda" -model_id_or_path = "thu-ml/unidiffuser-v1" -pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) -pipe.to(device) - -# Image-to-text generation -image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" -init_image = load_image(image_url).resize((512, 512)) - -sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) -i2t_text = sample.text[0] -print(i2t_text) -``` - -The `img2text` mode requires that an input `image` be supplied. You can set the `img2text` mode manually with [`UniDiffuserPipeline.set_image_to_text_mode`]. - -### Image Variation - -The UniDiffuser authors suggest performing image variation through a "round-trip" generation method, where given an input image, we first perform an image-to-text generation, and the perform a text-to-image generation on the outputs of the first generation. -This produces a new image which is semantically similar to the input image: - -```python -import torch - -from diffusers import UniDiffuserPipeline -from diffusers.utils import load_image - -device = "cuda" -model_id_or_path = "thu-ml/unidiffuser-v1" -pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) -pipe.to(device) - -# Image variation can be performed with a image-to-text generation followed by a text-to-image generation: -# 1. 
Image-to-text generation -image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" -init_image = load_image(image_url).resize((512, 512)) - -sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) -i2t_text = sample.text[0] -print(i2t_text) - -# 2. Text-to-image generation -sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0) -final_image = sample.images[0] -final_image.save("unidiffuser_image_variation_sample.png") -``` - -### Text Variation - - -Similarly, text variation can be performed on an input prompt with a text-to-image generation followed by a image-to-text generation: - -```python -import torch - -from diffusers import UniDiffuserPipeline - -device = "cuda" -model_id_or_path = "thu-ml/unidiffuser-v1" -pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) -pipe.to(device) - -# Text variation can be performed with a text-to-image generation followed by a image-to-text generation: -# 1. Text-to-image generation -prompt = "an elephant under the sea" - -sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) -t2i_image = sample.images[0] -t2i_image.save("unidiffuser_text2img_sample_image.png") - -# 2. Image-to-text generation -sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0) -final_prompt = sample.text[0] -print(final_prompt) -``` - -## UniDiffuserPipeline -[[autodoc]] UniDiffuserPipeline - - all - - __call__ - -## ImageTextPipelineOutput -[[autodoc]] pipelines.ImageTextPipelineOutput \ No newline at end of file diff --git a/spaces/padmanabhbosamia/Cifar10_Classfication/model.py b/spaces/padmanabhbosamia/Cifar10_Classfication/model.py deleted file mode 100644 index cf6bb64674a9f5110a80472ed45c1a8e32a08b97..0000000000000000000000000000000000000000 --- a/spaces/padmanabhbosamia/Cifar10_Classfication/model.py +++ /dev/null @@ -1,104 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F - - -class BasicBlock(nn.Module): - - def __init__(self, in_planes, planes, stride=1): - super(BasicBlock, self).__init__() - self.conv1 = nn.Conv2d( - in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False - ) - self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = nn.Conv2d( - planes, planes, kernel_size=3, stride=1, padding=1, bias=False - ) - self.bn2 = nn.BatchNorm2d(planes) - - self.shortcut = nn.Sequential() - - def forward(self, x): - out = F.relu(self.bn1(self.conv1(x))) - out = self.bn2(self.conv2(out)) - out += self.shortcut(x) - out = F.relu(out) - return out - - -class CustomBlock(nn.Module): - def __init__(self, in_channels, out_channels): - super(CustomBlock, self).__init__() - - self.inner_layer = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False, - ), - nn.MaxPool2d(kernel_size=2), - nn.BatchNorm2d(out_channels), - nn.ReLU(), - ) - - self.res_block = BasicBlock(out_channels, out_channels) - - def forward(self, x): - x = self.inner_layer(x) - r = self.res_block(x) - - out = x + r - - return out - - -class CustomResNet(nn.Module): - def __init__(self, num_classes=10): - super(CustomResNet, self).__init__() - - self.prep_layer = nn.Sequential( - nn.Conv2d( - in_channels=3, - out_channels=64, - kernel_size=3, - stride=1, - padding=1, - bias=False, - ), - nn.BatchNorm2d(64), - nn.ReLU(), - ) - - self.layer_1 = CustomBlock(in_channels=64, out_channels=128) - - self.layer_2 = nn.Sequential( 
- nn.Conv2d( - in_channels=128, - out_channels=256, - kernel_size=3, - stride=1, - padding=1, - bias=False, - ), - nn.MaxPool2d(kernel_size=2), - nn.BatchNorm2d(256), - nn.ReLU(), - ) - - self.layer_3 = CustomBlock(in_channels=256, out_channels=512) - - self.max_pool = nn.Sequential(nn.MaxPool2d(kernel_size=4)) - - self.fc = nn.Linear(512, num_classes) - - def forward(self, x): - x = self.prep_layer(x) - x = self.layer_1(x) - x = self.layer_2(x) - x = self.layer_3(x) - x = self.max_pool(x) - x = x.view(x.size(0), -1) - x = self.fc(x) - return x diff --git a/spaces/parasmech/Image_captioning_nlpconnect/README.md b/spaces/parasmech/Image_captioning_nlpconnect/README.md deleted file mode 100644 index 81174d4b41c27fd8d344ce9f9bc481f5f3643972..0000000000000000000000000000000000000000 --- a/spaces/parasmech/Image_captioning_nlpconnect/README.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -title: Image Captioning Nlpconnect -emoji: 👀 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -license: mit ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/paulbricman/conceptarium/frontend/components/microverses.py b/spaces/paulbricman/conceptarium/frontend/components/microverses.py deleted file mode 100644 index 30d58d8785e53dd8f5f9b6c5e43dc2bf2254dcd0..0000000000000000000000000000000000000000 --- a/spaces/paulbricman/conceptarium/frontend/components/microverses.py +++ /dev/null @@ -1,123 +0,0 @@ -import datetime -import streamlit as st -import requests -import json -import extra_streamlit_components as stx -from time import sleep -import os -from pathlib import Path - - -def paint(): - sleep(0.15) - cookie_manager = stx.CookieManager() - user_state = cookie_manager.get('user_state') - - if not user_state: - with st.spinner('retrieving user data...'): - sleep(2.) 
- user_state = cookie_manager.get('user_state') - if not user_state: - user_state = {} - - user_state['layout'] = user_state.get('layout', default_layout()) - user_state['microverses'] = user_state.get('microverses', []) - st.session_state['microverses'] = user_state['microverses'] - st.session_state['layout'] = user_state['layout'] - - with st.sidebar: - with st.expander('💻 layout', expanded=True): - user_state['layout']['viewportCols'] = int(st.number_input( - 'viewport cols', 1, 5, user_state['layout'].get('viewportCols', 3), 1)) - - faux_components = ['header', 'knowledge', - 'microverses', 'viewport'] - - components_path = Path('components') - if not components_path.exists(): - components_path = Path('frontend') / 'components' - - components = [e.split('.')[0] for e in os.listdir(components_path) if e.endswith( - '.py') and e.split('.')[0] not in faux_components] - user_state['layout']['leftColumn'] = st.multiselect( - 'left column', components, user_state['layout'].get('leftColumn', ['navigator', 'ranker'])) - user_state['layout']['rightColumn'] = st.multiselect( - 'right column', components, user_state['layout'].get('rightColumn', ['inspector'])) - st.session_state['layout'] = user_state['layout'] - cookie_manager.set('user_state', user_state, expires_at=datetime.datetime.now( - ) + datetime.timedelta(days=30)) - - if len(user_state['microverses']) > 0: - with st.expander('🔌 connected microverses', expanded=True): - for e_idx, e in enumerate(user_state['microverses']): - if e['auth']['custodian']: - display_text = '🗝️ ' + e['url'] - else: - display_text = e['url'] - st.code(display_text) - - if e['auth']['custodian']: - if st.button('create archive', key=e): - archive = requests.get(e['url'] + '/dump', - headers={'Authorization': f"Bearer {e['token']}"}).content - st.download_button( - 'download archive', data=archive, file_name='knowledge.zip') - - if st.button('remove', key=e, help='Remove this source of thoughts.'): - user_state['microverses'].remove(e) - cookie_manager.delete('user_state') - cookie_manager.set( - 'user_state', user_state, expires_at=datetime.datetime.now() + datetime.timedelta(days=30), key='remove') - - with st.expander('🆕 connect to new microverse', expanded=True): - url = st.text_input('conceptarium url', - key=user_state['microverses'], help='Specify the base URL of the conceptarium you wish to access thoughts from. If you\'re trying to connect to your local instance, enter `localhost`.') - token = st.text_input( - 'access token', key=user_state['microverses'], help='Specify the token to be used in authorizing access to this conceptarium. If you\'re the custodian of this conceptarium, enter your custodian token. 
If this is someone else\'s instance, please use the microverse token they provided you with.', type='password') - - if st.button('add', help='Add this conceptarium as a source of thoughts to be explored.'): - if '://' not in url: - url = 'http://' + url - if url[-1] == '/': - url = url[:-1] - - custodian_check = json.loads( - requests.get(url + '/custodian/check', - headers={'Authorization': f"Bearer {token}"}).content) - if len([e for e in user_state['microverses'] if e['url'] == url]) == 0: - user_state['microverses'] += [{ - 'url': url, - 'token': token, - 'auth': custodian_check - }] - cookie_manager.set( - 'user_state', user_state, expires_at=datetime.datetime.now() + datetime.timedelta(days=30), key='add') - st.session_state['microverses'] = user_state['microverses'] - - custodian_microverse = [ - e for e in user_state['microverses'] if e['auth']['custodian'] == True] - if len(custodian_microverse) > 0: - shared_microverses = json.loads(requests.get(custodian_microverse[0]['url'] + '/microverse/list', - headers={'Authorization': f"Bearer {custodian_microverse[0]['token']}"}).content) - if len(shared_microverses) > 0: - with st.expander('🗝️ shared microverses', expanded=True): - for e_idx, e in enumerate(shared_microverses): - if isinstance(e, dict): - st.code(e['token']) - if e['modality'] == 'text': - st.success(e['content']) - - if st.button('disable', help='Disable the access to this microverse.', key=e): - requests.get(custodian_microverse[0]['url'] + '/microverse/remove', params={ - 'microverse': e['token'] - }, headers={'Authorization': f"Bearer {custodian_microverse[0]['token']}"}) - st.info( - 'The microverse has been removed.') - - -def default_layout(): - return { - 'viewportCols': 3, - 'leftColumn': ['navigator', 'ranker'], - 'rightColumn': ['inspector'] - } diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/multilayer_graph.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/multilayer_graph.py deleted file mode 100644 index b13d080d74375d298d707f79b01415347b0ed422..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/multilayer_graph.py +++ /dev/null @@ -1,132 +0,0 @@ -import argparse, os, shutil, json -from netdissect.easydict import EasyDict -from xml.etree import ElementTree as et -from collections import defaultdict - -def parseargs(): - parser = argparse.ArgumentParser() - def aa(*args, **kwargs): - parser.add_argument(*args, **kwargs) - aa('--model', choices=['resnet18-bn', 'resnet18-hn', 'resnet18-ha'], - default='resnet18-bn') - aa('--iteration', type=int, default=0) - aa('--dataset', choices=['places'], - default='places') - aa('--seg', choices=['net', 'netp', 'netq', 'netpq', - 'netpqc', 'netpqxc', 'human'], - default='net') - aa('--layers', nargs='+') - aa('--quantile', type=float, default=0.01) - aa('--miniou', type=float, default=0.025) - args = parser.parse_args() - return args - -def main(): - args = parseargs() - threshold_iou = args.miniou - layer_report = {} - qdir = '-%d' % (args.quantile * 1000) if args.quantile != 0.01 else '' - for layer in args.layers: - input_filename = 'results/%s-%d-%s-%s-%s%s/report.json' % ( - args.model, args.iteration, args.dataset, args.seg, layer, qdir) - with open(input_filename) as f: - layer_report[layer] = EasyDict(json.load(f)) - # Now assemble the data needed for the graph - # (Layername, [(catname, [unitcount, unitcount, unitcount]), (catname..) 
- cat_order = ['object', 'part', 'material', 'color'] - graph_data = [] - for layer in args.layers: - layer_data = [] - catmap = defaultdict(lambda: defaultdict(int)) - units = layer_report[layer].get('units', - layer_report[layer].get('images', None)) # old format - for unitrec in units: - if unitrec.iou is None or unitrec.iou < threshold_iou: - continue - catmap[unitrec.cat][unitrec.label] += 1 - for cat in cat_order: - if cat not in catmap: - continue - # For this graph we do not need labels - cat_data = list(catmap[cat].values()) - cat_data.sort(key=lambda x: -x) - layer_data.append((cat, cat_data)) - graph_data.append((layer, layer_data)) - # Now make the actual graph - largest_layer = max(sum(len(cat_data) - for cat, cat_data in layer_data) - for layer, layer_data in graph_data) - layer_height = 14 - layer_gap = 2 - barwidth = 3 - bargap = 0 - leftmargin = 48 - margin = 8 - svgwidth = largest_layer * (barwidth + bargap) + margin + leftmargin - svgheight = ((layer_height + layer_gap) * len(args.layers) - layer_gap + - 2 * margin) - textsize = 10 - - # create an SVG XML element - svg = et.Element('svg', width=str(svgwidth), height=str(svgheight), - version='1.1', xmlns='http://www.w3.org/2000/svg') - - # Draw big category background rectangles - y = margin - for layer, layer_data in graph_data: - et.SubElement(svg, 'text', x='0', y='0', - style=('font-family:sans-serif;font-size:%dpx;' + - 'text-anchor:end;alignment-baseline:hanging;' + - 'transform:translate(%dpx, %dpx);') % - (textsize, leftmargin - 4, y + (layer_height - textsize) / 2) - ).text = str(layer) - barmax = max(max(cat_data) if len(cat_data) else 1 - for cat, cat_data in layer_data) if len(layer_data) else 1 - barscale = float(layer_height) / barmax - x = leftmargin - for cat, cat_data in layer_data: - catwidth = len(cat_data) * (barwidth + bargap) - et.SubElement(svg, 'rect', - x=str(x), y=str(y), - width=str(catwidth), - height=str(layer_height), - fill=cat_palette[cat][1]) - for bar in cat_data: - barheight = barscale * bar - et.SubElement(svg, 'rect', - x=str(x), y=str(y + layer_height - barheight), - width=str(barwidth), - height=str(barheight), - fill=cat_palette[cat][0]) - x += barwidth + bargap - y += layer_height + layer_gap - - # Output - this is the bare svg. - result = et.tostring(svg).decode('utf-8') - # Now add the file header. 
- result = ''.join([ - '\n', - '\n', - result]) - output_filename = 'results/%s-%s-%s-%s%s/multilayer-%d.svg' % ( - args.model, args.iteration, args.dataset, args.seg, qdir, - args.miniou * 1000) - os.makedirs(os.path.dirname(output_filename), exist_ok=True) - print('writing to %s' % output_filename) - with open(output_filename, 'w') as f: - f.write(result) - -cat_palette = { - 'object': ('#4B4CBF', '#B6B6F2'), - 'part': ('#55B05B', '#B6F2BA'), - 'material': ('#50BDAC', '#A5E5DB'), - 'texture': ('#81C679', '#C0FF9B'), - 'color': ('#F0883B', '#F2CFB6'), - 'other1': ('#D4CF24', '#F2F1B6'), - 'other2': ('#D92E2B', '#F2B6B6'), - 'other3': ('#AB6BC6', '#CFAAFF') -} - -if __name__ == '__main__': - main() diff --git a/spaces/pinecone/semantic-query-trainer/README.md b/spaces/pinecone/semantic-query-trainer/README.md deleted file mode 100644 index 8febfab7df7f63ec45d353842658ff177123f484..0000000000000000000000000000000000000000 --- a/spaces/pinecone/semantic-query-trainer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Semantic Query Trainer -emoji: 🕹🗺 -colorFrom: green -colorTo: purple -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pkiage/time_series_decomposition_demo/setup.sh b/spaces/pkiage/time_series_decomposition_demo/setup.sh deleted file mode 100644 index 2aea680e9e2290bc67b643cd6ef284de4bb922b3..0000000000000000000000000000000000000000 --- a/spaces/pkiage/time_series_decomposition_demo/setup.sh +++ /dev/null @@ -1,13 +0,0 @@ -mkdir -p ~/.streamlit/ - -cat << EOF > ~/.streamlit/credentials.toml -[general] -email = "paul.r.kiage@gmail.com" -EOF - -cat << EOF > ~/.streamlit/config.toml -[server] -headless = true -enableCORS = true -port = $PORT -EOF \ No newline at end of file diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_inspect.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_inspect.py deleted file mode 100644 index 30446ceb3f0235721e435f5fbd53f2e306f078cd..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_inspect.py +++ /dev/null @@ -1,270 +0,0 @@ -from __future__ import absolute_import - -import inspect -from inspect import cleandoc, getdoc, getfile, isclass, ismodule, signature -from typing import Any, Collection, Iterable, Optional, Tuple, Type, Union - -from .console import Group, RenderableType -from .control import escape_control_codes -from .highlighter import ReprHighlighter -from .jupyter import JupyterMixin -from .panel import Panel -from .pretty import Pretty -from .table import Table -from .text import Text, TextType - - -def _first_paragraph(doc: str) -> str: - """Get the first paragraph from a docstring.""" - paragraph, _, _ = doc.partition("\n\n") - return paragraph - - -class Inspect(JupyterMixin): - """A renderable to inspect any Python Object. - - Args: - obj (Any): An object to inspect. - title (str, optional): Title to display over inspect result, or None use type. Defaults to None. - help (bool, optional): Show full help text rather than just first paragraph. Defaults to False. - methods (bool, optional): Enable inspection of callables. Defaults to False. - docs (bool, optional): Also render doc strings. Defaults to True. - private (bool, optional): Show private attributes (beginning with underscore). Defaults to False. 
- dunder (bool, optional): Show attributes starting with double underscore. Defaults to False. - sort (bool, optional): Sort attributes alphabetically. Defaults to True. - all (bool, optional): Show all attributes. Defaults to False. - value (bool, optional): Pretty print value of object. Defaults to True. - """ - - def __init__( - self, - obj: Any, - *, - title: Optional[TextType] = None, - help: bool = False, - methods: bool = False, - docs: bool = True, - private: bool = False, - dunder: bool = False, - sort: bool = True, - all: bool = True, - value: bool = True, - ) -> None: - self.highlighter = ReprHighlighter() - self.obj = obj - self.title = title or self._make_title(obj) - if all: - methods = private = dunder = True - self.help = help - self.methods = methods - self.docs = docs or help - self.private = private or dunder - self.dunder = dunder - self.sort = sort - self.value = value - - def _make_title(self, obj: Any) -> Text: - """Make a default title.""" - title_str = ( - str(obj) - if (isclass(obj) or callable(obj) or ismodule(obj)) - else str(type(obj)) - ) - title_text = self.highlighter(title_str) - return title_text - - def __rich__(self) -> Panel: - return Panel.fit( - Group(*self._render()), - title=self.title, - border_style="scope.border", - padding=(0, 1), - ) - - def _get_signature(self, name: str, obj: Any) -> Optional[Text]: - """Get a signature for a callable.""" - try: - _signature = str(signature(obj)) + ":" - except ValueError: - _signature = "(...)" - except TypeError: - return None - - source_filename: Optional[str] = None - try: - source_filename = getfile(obj) - except (OSError, TypeError): - # OSError is raised if obj has no source file, e.g. when defined in REPL. - pass - - callable_name = Text(name, style="inspect.callable") - if source_filename: - callable_name.stylize(f"link file://{source_filename}") - signature_text = self.highlighter(_signature) - - qualname = name or getattr(obj, "__qualname__", name) - - # If obj is a module, there may be classes (which are callable) to display - if inspect.isclass(obj): - prefix = "class" - elif inspect.iscoroutinefunction(obj): - prefix = "async def" - else: - prefix = "def" - - qual_signature = Text.assemble( - (f"{prefix} ", f"inspect.{prefix.replace(' ', '_')}"), - (qualname, "inspect.callable"), - signature_text, - ) - - return qual_signature - - def _render(self) -> Iterable[RenderableType]: - """Render object.""" - - def sort_items(item: Tuple[str, Any]) -> Tuple[bool, str]: - key, (_error, value) = item - return (callable(value), key.strip("_").lower()) - - def safe_getattr(attr_name: str) -> Tuple[Any, Any]: - """Get attribute or any exception.""" - try: - return (None, getattr(obj, attr_name)) - except Exception as error: - return (error, None) - - obj = self.obj - keys = dir(obj) - total_items = len(keys) - if not self.dunder: - keys = [key for key in keys if not key.startswith("__")] - if not self.private: - keys = [key for key in keys if not key.startswith("_")] - not_shown_count = total_items - len(keys) - items = [(key, safe_getattr(key)) for key in keys] - if self.sort: - items.sort(key=sort_items) - - items_table = Table.grid(padding=(0, 1), expand=False) - items_table.add_column(justify="right") - add_row = items_table.add_row - highlighter = self.highlighter - - if callable(obj): - signature = self._get_signature("", obj) - if signature is not None: - yield signature - yield "" - - if self.docs: - _doc = self._get_formatted_doc(obj) - if _doc is not None: - doc_text = Text(_doc, 
style="inspect.help") - doc_text = highlighter(doc_text) - yield doc_text - yield "" - - if self.value and not (isclass(obj) or callable(obj) or ismodule(obj)): - yield Panel( - Pretty(obj, indent_guides=True, max_length=10, max_string=60), - border_style="inspect.value.border", - ) - yield "" - - for key, (error, value) in items: - key_text = Text.assemble( - ( - key, - "inspect.attr.dunder" if key.startswith("__") else "inspect.attr", - ), - (" =", "inspect.equals"), - ) - if error is not None: - warning = key_text.copy() - warning.stylize("inspect.error") - add_row(warning, highlighter(repr(error))) - continue - - if callable(value): - if not self.methods: - continue - - _signature_text = self._get_signature(key, value) - if _signature_text is None: - add_row(key_text, Pretty(value, highlighter=highlighter)) - else: - if self.docs: - docs = self._get_formatted_doc(value) - if docs is not None: - _signature_text.append("\n" if "\n" in docs else " ") - doc = highlighter(docs) - doc.stylize("inspect.doc") - _signature_text.append(doc) - - add_row(key_text, _signature_text) - else: - add_row(key_text, Pretty(value, highlighter=highlighter)) - if items_table.row_count: - yield items_table - elif not_shown_count: - yield Text.from_markup( - f"[b cyan]{not_shown_count}[/][i] attribute(s) not shown.[/i] " - f"Run [b][magenta]inspect[/]([not b]inspect[/])[/b] for options." - ) - - def _get_formatted_doc(self, object_: Any) -> Optional[str]: - """ - Extract the docstring of an object, process it and returns it. - The processing consists in cleaning up the doctring's indentation, - taking only its 1st paragraph if `self.help` is not True, - and escape its control codes. - - Args: - object_ (Any): the object to get the docstring from. - - Returns: - Optional[str]: the processed docstring, or None if no docstring was found. - """ - docs = getdoc(object_) - if docs is None: - return None - docs = cleandoc(docs).strip() - if not self.help: - docs = _first_paragraph(docs) - return escape_control_codes(docs) - - -def get_object_types_mro(obj: Union[object, Type[Any]]) -> Tuple[type, ...]: - """Returns the MRO of an object's class, or of the object itself if it's a class.""" - if not hasattr(obj, "__mro__"): - # N.B. we cannot use `if type(obj) is type` here because it doesn't work with - # some types of classes, such as the ones that use abc.ABCMeta. - obj = type(obj) - return getattr(obj, "__mro__", ()) - - -def get_object_types_mro_as_strings(obj: object) -> Collection[str]: - """ - Returns the MRO of an object's class as full qualified names, or of the object itself if it's a class. - - Examples: - `object_types_mro_as_strings(JSONDecoder)` will return `['json.decoder.JSONDecoder', 'builtins.object']` - """ - return [ - f'{getattr(type_, "__module__", "")}.{getattr(type_, "__qualname__", "")}' - for type_ in get_object_types_mro(obj) - ] - - -def is_object_one_of_types( - obj: object, fully_qualified_types_names: Collection[str] -) -> bool: - """ - Returns `True` if the given object's class (or the object itself, if it's a class) has one of the - fully qualified names in its MRO. 
- """ - for type_name in get_object_types_mro_as_strings(obj): - if type_name in fully_qualified_types_names: - return True - return False diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/vendored/__init__.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/vendored/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/playgrdstar/compare-llms/README.md b/spaces/playgrdstar/compare-llms/README.md deleted file mode 100644 index 88017f20043ddb943fed02f5d2494f3ae4d2f303..0000000000000000000000000000000000000000 --- a/spaces/playgrdstar/compare-llms/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Compare Llms -emoji: 🌍 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/expr/consts.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/expr/consts.py deleted file mode 100644 index 974fb06a3c756a7e27106f4d1bb9c17b78a094fd..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/expr/consts.py +++ /dev/null @@ -1,29 +0,0 @@ -from typing import Dict - -from .core import ConstExpression - - -CONST_LISTING = { - "NaN": "not a number (same as JavaScript literal NaN)", - "LN10": "the natural log of 10 (alias to Math.LN10)", - "E": "the transcendental number e (alias to Math.E)", - "LOG10E": "the base 10 logarithm e (alias to Math.LOG10E)", - "LOG2E": "the base 2 logarithm of e (alias to Math.LOG2E)", - "SQRT1_2": "the square root of 0.5 (alias to Math.SQRT1_2)", - "LN2": "the natural log of 2 (alias to Math.LN2)", - "SQRT2": "the square root of 2 (alias to Math.SQRT1_2)", - "PI": "the transcendental number pi (alias to Math.PI)", -} - -NAME_MAP: Dict[str, str] = {} - - -def _populate_namespace(): - globals_ = globals() - for name, doc in CONST_LISTING.items(): - py_name = NAME_MAP.get(name, name) - globals_[py_name] = ConstExpression(name, doc) - yield py_name - - -__all__ = list(_populate_namespace()) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/expr/core.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/expr/core.py deleted file mode 100644 index 9cc258c8b723613453d4033c85035e335a537318..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/expr/core.py +++ /dev/null @@ -1,234 +0,0 @@ -from ..utils import SchemaBase - - -class DatumType: - """An object to assist in building Vega-Lite Expressions""" - - def __repr__(self): - return "datum" - - def __getattr__(self, attr): - if attr.startswith("__") and attr.endswith("__"): - raise AttributeError(attr) - return GetAttrExpression("datum", attr) - - def __getitem__(self, attr): - return GetItemExpression("datum", attr) - - def __call__(self, datum, **kwargs): - """Specify a datum for use in an encoding""" - return dict(datum=datum, **kwargs) - - -datum = DatumType() - - -def _js_repr(val): - """Return a javascript-safe string representation of val""" - if val is True: - return "true" - elif val is False: - return "false" - elif val is None: - return "null" - elif isinstance(val, OperatorMixin): - return val._to_expr() - else: - return repr(val) - - -# 
Designed to work with Expression and VariableParameter -class OperatorMixin: - def _to_expr(self): - return repr(self) - - def _from_expr(self, expr): - return expr - - def __add__(self, other): - comp_value = BinaryExpression("+", self, other) - return self._from_expr(comp_value) - - def __radd__(self, other): - comp_value = BinaryExpression("+", other, self) - return self._from_expr(comp_value) - - def __sub__(self, other): - comp_value = BinaryExpression("-", self, other) - return self._from_expr(comp_value) - - def __rsub__(self, other): - comp_value = BinaryExpression("-", other, self) - return self._from_expr(comp_value) - - def __mul__(self, other): - comp_value = BinaryExpression("*", self, other) - return self._from_expr(comp_value) - - def __rmul__(self, other): - comp_value = BinaryExpression("*", other, self) - return self._from_expr(comp_value) - - def __truediv__(self, other): - comp_value = BinaryExpression("/", self, other) - return self._from_expr(comp_value) - - def __rtruediv__(self, other): - comp_value = BinaryExpression("/", other, self) - return self._from_expr(comp_value) - - __div__ = __truediv__ - - __rdiv__ = __rtruediv__ - - def __mod__(self, other): - comp_value = BinaryExpression("%", self, other) - return self._from_expr(comp_value) - - def __rmod__(self, other): - comp_value = BinaryExpression("%", other, self) - return self._from_expr(comp_value) - - def __pow__(self, other): - # "**" Javascript operator is not supported in all browsers - comp_value = FunctionExpression("pow", (self, other)) - return self._from_expr(comp_value) - - def __rpow__(self, other): - # "**" Javascript operator is not supported in all browsers - comp_value = FunctionExpression("pow", (other, self)) - return self._from_expr(comp_value) - - def __neg__(self): - comp_value = UnaryExpression("-", self) - return self._from_expr(comp_value) - - def __pos__(self): - comp_value = UnaryExpression("+", self) - return self._from_expr(comp_value) - - # comparison operators - - def __eq__(self, other): - comp_value = BinaryExpression("===", self, other) - return self._from_expr(comp_value) - - def __ne__(self, other): - comp_value = BinaryExpression("!==", self, other) - return self._from_expr(comp_value) - - def __gt__(self, other): - comp_value = BinaryExpression(">", self, other) - return self._from_expr(comp_value) - - def __lt__(self, other): - comp_value = BinaryExpression("<", self, other) - return self._from_expr(comp_value) - - def __ge__(self, other): - comp_value = BinaryExpression(">=", self, other) - return self._from_expr(comp_value) - - def __le__(self, other): - comp_value = BinaryExpression("<=", self, other) - return self._from_expr(comp_value) - - def __abs__(self): - comp_value = FunctionExpression("abs", (self,)) - return self._from_expr(comp_value) - - # logical operators - - def __and__(self, other): - comp_value = BinaryExpression("&&", self, other) - return self._from_expr(comp_value) - - def __rand__(self, other): - comp_value = BinaryExpression("&&", other, self) - return self._from_expr(comp_value) - - def __or__(self, other): - comp_value = BinaryExpression("||", self, other) - return self._from_expr(comp_value) - - def __ror__(self, other): - comp_value = BinaryExpression("||", other, self) - return self._from_expr(comp_value) - - def __invert__(self): - comp_value = UnaryExpression("!", self) - return self._from_expr(comp_value) - - -class Expression(OperatorMixin, SchemaBase): - """Expression - - Base object for enabling build-up of Javascript expressions using - 
a Python syntax. Calling ``repr(obj)`` will return a Javascript - representation of the object and the operations it encodes. - """ - - _schema = {"type": "string"} - - def to_dict(self, *args, **kwargs): - return repr(self) - - def __setattr__(self, attr, val): - # We don't need the setattr magic defined in SchemaBase - return object.__setattr__(self, attr, val) - - # item access - def __getitem__(self, val): - return GetItemExpression(self, val) - - -class UnaryExpression(Expression): - def __init__(self, op, val): - super(UnaryExpression, self).__init__(op=op, val=val) - - def __repr__(self): - return "({op}{val})".format(op=self.op, val=_js_repr(self.val)) - - -class BinaryExpression(Expression): - def __init__(self, op, lhs, rhs): - super(BinaryExpression, self).__init__(op=op, lhs=lhs, rhs=rhs) - - def __repr__(self): - return "({lhs} {op} {rhs})".format( - op=self.op, lhs=_js_repr(self.lhs), rhs=_js_repr(self.rhs) - ) - - -class FunctionExpression(Expression): - def __init__(self, name, args): - super(FunctionExpression, self).__init__(name=name, args=args) - - def __repr__(self): - args = ",".join(_js_repr(arg) for arg in self.args) - return "{name}({args})".format(name=self.name, args=args) - - -class ConstExpression(Expression): - def __init__(self, name, doc): - self.__doc__ = """{}: {}""".format(name, doc) - super(ConstExpression, self).__init__(name=name, doc=doc) - - def __repr__(self): - return str(self.name) - - -class GetAttrExpression(Expression): - def __init__(self, group, name): - super(GetAttrExpression, self).__init__(group=group, name=name) - - def __repr__(self): - return "{}.{}".format(self.group, self.name) - - -class GetItemExpression(Expression): - def __init__(self, group, name): - super(GetItemExpression, self).__init__(group=group, name=name) - - def __repr__(self): - return "{}[{!r}]".format(self.group, self.name) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/utils/display.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/utils/display.py deleted file mode 100644 index f2bb99bad25753b259c8f8f42f7fc7567af8a7ed..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/utils/display.py +++ /dev/null @@ -1,215 +0,0 @@ -import json -import pkgutil -import textwrap -from typing import Callable, Dict, Optional, Tuple, Any, Union -import uuid - -from ._vegafusion_data import compile_with_vegafusion, using_vegafusion -from .plugin_registry import PluginRegistry, PluginEnabler -from .mimebundle import spec_to_mimebundle -from .schemapi import validate_jsonschema - - -# ============================================================================== -# Renderer registry -# ============================================================================== -# MimeBundleType needs to be the same as what are acceptable return values -# for _repr_mimebundle_, -# see https://ipython.readthedocs.io/en/stable/config/integrating.html#MyObject._repr_mimebundle_ -MimeBundleDataType = Dict[str, Any] -MimeBundleMetaDataType = Dict[str, Any] -MimeBundleType = Union[ - MimeBundleDataType, Tuple[MimeBundleDataType, MimeBundleMetaDataType] -] -RendererType = Callable[..., MimeBundleType] -# Subtype of MimeBundleType as more specific in the values of the dictionaries -DefaultRendererReturnType = Tuple[ - Dict[str, Union[str, dict]], Dict[str, Dict[str, Any]] -] - - -class RendererRegistry(PluginRegistry[RendererType]): - entrypoint_err_messages = { - "notebook": 
textwrap.dedent( - """ - To use the 'notebook' renderer, you must install the vega package - and the associated Jupyter extension. - See https://altair-viz.github.io/getting_started/installation.html - for more information. - """ - ), - "altair_viewer": textwrap.dedent( - """ - To use the 'altair_viewer' renderer, you must install the altair_viewer - package; see http://github.com/altair-viz/altair_viewer/ - for more information. - """ - ), - } - - def set_embed_options( - self, - defaultStyle: Optional[Union[bool, str]] = None, - renderer: Optional[str] = None, - width: Optional[int] = None, - height: Optional[int] = None, - padding: Optional[int] = None, - scaleFactor: Optional[float] = None, - actions: Optional[Union[bool, Dict[str, bool]]] = None, - **kwargs, - ) -> PluginEnabler: - """Set options for embeddings of Vega & Vega-Lite charts. - - Options are fully documented at https://github.com/vega/vega-embed. - Similar to the `enable()` method, this can be used as either - a persistent global switch, or as a temporary local setting using - a context manager (i.e. a `with` statement). - - Parameters - ---------- - defaultStyle : bool or string - Specify a default stylesheet for embed actions. - renderer : string - The renderer to use for the view. One of "canvas" (default) or "svg" - width : integer - The view width in pixels - height : integer - The view height in pixels - padding : integer - The view padding in pixels - scaleFactor : number - The number by which to multiply the width and height (default 1) - of an exported PNG or SVG image. - actions : bool or dict - Determines if action links ("Export as PNG/SVG", "View Source", - "View Vega" (only for Vega-Lite), "Open in Vega Editor") are - included with the embedded view. If the value is true, all action - links will be shown and none if the value is false. This property - can take a key-value mapping object that maps keys (export, source, - compiled, editor) to boolean values for determining if - each action link should be shown. - **kwargs : - Additional options are passed directly to embed options. - """ - options: Dict[str, Optional[Union[bool, str, float, Dict[str, bool]]]] = { - "defaultStyle": defaultStyle, - "renderer": renderer, - "width": width, - "height": height, - "padding": padding, - "scaleFactor": scaleFactor, - "actions": actions, - } - kwargs.update({key: val for key, val in options.items() if val is not None}) - return self.enable(None, embed_options=kwargs) - - -# ============================================================================== -# VegaLite v1/v2 renderer logic -# ============================================================================== - - -class Displayable: - """A base display class for VegaLite v1/v2. - - This class takes a VegaLite v1/v2 spec and does the following: - - 1. Optionally validates the spec against a schema. - 2. Uses the RendererPlugin to grab a renderer and call it when the - IPython/Jupyter display method (_repr_mimebundle_) is called. - - The spec passed to this class must be fully schema compliant and already - have the data portion of the spec fully processed and ready to serialize. - In practice, this means, the data portion of the spec should have been passed - through appropriate data model transformers. 
- """ - - renderers: Optional[RendererRegistry] = None - schema_path = ("altair", "") - - def __init__(self, spec: dict, validate: bool = False) -> None: - self.spec = spec - self.validate = validate - self._validate() - - def _validate(self) -> None: - """Validate the spec against the schema.""" - data = pkgutil.get_data(*self.schema_path) - assert data is not None - schema_dict: dict = json.loads(data.decode("utf-8")) - validate_jsonschema( - self.spec, - schema_dict, - ) - - def _repr_mimebundle_( - self, include: Any = None, exclude: Any = None - ) -> MimeBundleType: - """Return a MIME bundle for display in Jupyter frontends.""" - if self.renderers is not None: - renderer_func = self.renderers.get() - assert renderer_func is not None - return renderer_func(self.spec) - else: - return {} - - -def default_renderer_base( - spec: dict, mime_type: str, str_repr: str, **options -) -> DefaultRendererReturnType: - """A default renderer for Vega or VegaLite that works for modern frontends. - - This renderer works with modern frontends (JupyterLab, nteract) that know - how to render the custom VegaLite MIME type listed above. - """ - # Local import to avoid circular ImportError - from altair.vegalite.v5.display import VEGA_MIME_TYPE, VEGALITE_MIME_TYPE - - assert isinstance(spec, dict) - bundle: Dict[str, Union[str, dict]] = {} - metadata: Dict[str, Dict[str, Any]] = {} - - if using_vegafusion(): - spec = compile_with_vegafusion(spec) - - # Swap mimetype from Vega-Lite to Vega. - # If mimetype was JSON, leave it alone - if mime_type == VEGALITE_MIME_TYPE: - mime_type = VEGA_MIME_TYPE - - bundle[mime_type] = spec - bundle["text/plain"] = str_repr - if options: - metadata[mime_type] = options - return bundle, metadata - - -def json_renderer_base( - spec: dict, str_repr: str, **options -) -> DefaultRendererReturnType: - """A renderer that returns a MIME type of application/json. - - In JupyterLab/nteract this is rendered as a nice JSON tree. 
- """ - return default_renderer_base( - spec, mime_type="application/json", str_repr=str_repr, **options - ) - - -class HTMLRenderer: - """Object to render charts as HTML, with a unique output div each time""" - - def __init__(self, output_div: str = "altair-viz-{}", **kwargs) -> None: - self._output_div = output_div - self.kwargs = kwargs - - @property - def output_div(self) -> str: - return self._output_div.format(uuid.uuid4().hex) - - def __call__(self, spec: dict, **metadata) -> Dict[str, str]: - kwargs = self.kwargs.copy() - kwargs.update(metadata) - return spec_to_mimebundle( - spec, format="html", output_div=self.output_div, **kwargs - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/middleware/cors.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/middleware/cors.py deleted file mode 100644 index 8dfaad0dbb3ff5300cccb2023748cd30f54bc920..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/middleware/cors.py +++ /dev/null @@ -1 +0,0 @@ -from starlette.middleware.cors import CORSMiddleware as CORSMiddleware # noqa diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/ttProgram.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/ttProgram.py deleted file mode 100644 index 84aa63f36301ec9a4ae21acff0cbc95010d956b7..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/ttProgram.py +++ /dev/null @@ -1,593 +0,0 @@ -"""ttLib.tables.ttProgram.py -- Assembler/disassembler for TrueType bytecode programs.""" -from __future__ import annotations - -from fontTools.misc.textTools import num2binary, binary2num, readHex, strjoin -import array -from io import StringIO -from typing import List -import re -import logging - - -log = logging.getLogger(__name__) - -# fmt: off - -# first, the list of instructions that eat bytes or words from the instruction stream - -streamInstructions = [ -# -# opcode mnemonic argBits descriptive name pops pushes eats from instruction stream pushes -# - (0x40, 'NPUSHB', 0, 'PushNBytes', 0, -1), # n, b1, b2,...bn b1,b2...bn - (0x41, 'NPUSHW', 0, 'PushNWords', 0, -1), # n, w1, w2,...w w1,w2...wn - (0xb0, 'PUSHB', 3, 'PushBytes', 0, -1), # b0, b1,..bn b0, b1, ...,bn - (0xb8, 'PUSHW', 3, 'PushWords', 0, -1), # w0,w1,..wn w0 ,w1, ...wn -] - - -# next, the list of "normal" instructions - -instructions = [ -# -# opcode mnemonic argBits descriptive name pops pushes eats from instruction stream pushes -# - (0x7f, 'AA', 0, 'AdjustAngle', 1, 0), # p - - (0x64, 'ABS', 0, 'Absolute', 1, 1), # n |n| - (0x60, 'ADD', 0, 'Add', 2, 1), # n2, n1 (n1 + n2) - (0x27, 'ALIGNPTS', 0, 'AlignPts', 2, 0), # p2, p1 - - (0x3c, 'ALIGNRP', 0, 'AlignRelativePt', -1, 0), # p1, p2, ... 
, ploopvalue - - (0x5a, 'AND', 0, 'LogicalAnd', 2, 1), # e2, e1 b - (0x2b, 'CALL', 0, 'CallFunction', 1, 0), # f - - (0x67, 'CEILING', 0, 'Ceiling', 1, 1), # n ceil(n) - (0x25, 'CINDEX', 0, 'CopyXToTopStack', 1, 1), # k ek - (0x22, 'CLEAR', 0, 'ClearStack', -1, 0), # all items on the stack - - (0x4f, 'DEBUG', 0, 'DebugCall', 1, 0), # n - - (0x73, 'DELTAC1', 0, 'DeltaExceptionC1', -1, 0), # argn, cn, argn-1,cn-1, , arg1, c1 - - (0x74, 'DELTAC2', 0, 'DeltaExceptionC2', -1, 0), # argn, cn, argn-1,cn-1, , arg1, c1 - - (0x75, 'DELTAC3', 0, 'DeltaExceptionC3', -1, 0), # argn, cn, argn-1,cn-1, , arg1, c1 - - (0x5d, 'DELTAP1', 0, 'DeltaExceptionP1', -1, 0), # argn, pn, argn-1, pn-1, , arg1, p1 - - (0x71, 'DELTAP2', 0, 'DeltaExceptionP2', -1, 0), # argn, pn, argn-1, pn-1, , arg1, p1 - - (0x72, 'DELTAP3', 0, 'DeltaExceptionP3', -1, 0), # argn, pn, argn-1, pn-1, , arg1, p1 - - (0x24, 'DEPTH', 0, 'GetDepthStack', 0, 1), # - n - (0x62, 'DIV', 0, 'Divide', 2, 1), # n2, n1 (n1 * 64)/ n2 - (0x20, 'DUP', 0, 'DuplicateTopStack', 1, 2), # e e, e - (0x59, 'EIF', 0, 'EndIf', 0, 0), # - - - (0x1b, 'ELSE', 0, 'Else', 0, 0), # - - - (0x2d, 'ENDF', 0, 'EndFunctionDefinition', 0, 0), # - - - (0x54, 'EQ', 0, 'Equal', 2, 1), # e2, e1 b - (0x57, 'EVEN', 0, 'Even', 1, 1), # e b - (0x2c, 'FDEF', 0, 'FunctionDefinition', 1, 0), # f - - (0x4e, 'FLIPOFF', 0, 'SetAutoFlipOff', 0, 0), # - - - (0x4d, 'FLIPON', 0, 'SetAutoFlipOn', 0, 0), # - - - (0x80, 'FLIPPT', 0, 'FlipPoint', -1, 0), # p1, p2, ..., ploopvalue - - (0x82, 'FLIPRGOFF', 0, 'FlipRangeOff', 2, 0), # h, l - - (0x81, 'FLIPRGON', 0, 'FlipRangeOn', 2, 0), # h, l - - (0x66, 'FLOOR', 0, 'Floor', 1, 1), # n floor(n) - (0x46, 'GC', 1, 'GetCoordOnPVector', 1, 1), # p c - (0x88, 'GETINFO', 0, 'GetInfo', 1, 1), # selector result - (0x91, 'GETVARIATION', 0, 'GetVariation', 0, -1), # - a1,..,an - (0x0d, 'GFV', 0, 'GetFVector', 0, 2), # - px, py - (0x0c, 'GPV', 0, 'GetPVector', 0, 2), # - px, py - (0x52, 'GT', 0, 'GreaterThan', 2, 1), # e2, e1 b - (0x53, 'GTEQ', 0, 'GreaterThanOrEqual', 2, 1), # e2, e1 b - (0x89, 'IDEF', 0, 'InstructionDefinition', 1, 0), # f - - (0x58, 'IF', 0, 'If', 1, 0), # e - - (0x8e, 'INSTCTRL', 0, 'SetInstrExecControl', 2, 0), # s, v - - (0x39, 'IP', 0, 'InterpolatePts', -1, 0), # p1, p2, ... 
, ploopvalue - - (0x0f, 'ISECT', 0, 'MovePtToIntersect', 5, 0), # a1, a0, b1, b0, p - - (0x30, 'IUP', 1, 'InterpolateUntPts', 0, 0), # - - - (0x1c, 'JMPR', 0, 'Jump', 1, 0), # offset - - (0x79, 'JROF', 0, 'JumpRelativeOnFalse', 2, 0), # e, offset - - (0x78, 'JROT', 0, 'JumpRelativeOnTrue', 2, 0), # e, offset - - (0x2a, 'LOOPCALL', 0, 'LoopAndCallFunction', 2, 0), # f, count - - (0x50, 'LT', 0, 'LessThan', 2, 1), # e2, e1 b - (0x51, 'LTEQ', 0, 'LessThenOrEqual', 2, 1), # e2, e1 b - (0x8b, 'MAX', 0, 'Maximum', 2, 1), # e2, e1 max(e1, e2) - (0x49, 'MD', 1, 'MeasureDistance', 2, 1), # p2,p1 d - (0x2e, 'MDAP', 1, 'MoveDirectAbsPt', 1, 0), # p - - (0xc0, 'MDRP', 5, 'MoveDirectRelPt', 1, 0), # p - - (0x3e, 'MIAP', 1, 'MoveIndirectAbsPt', 2, 0), # n, p - - (0x8c, 'MIN', 0, 'Minimum', 2, 1), # e2, e1 min(e1, e2) - (0x26, 'MINDEX', 0, 'MoveXToTopStack', 1, 1), # k ek - (0xe0, 'MIRP', 5, 'MoveIndirectRelPt', 2, 0), # n, p - - (0x4b, 'MPPEM', 0, 'MeasurePixelPerEm', 0, 1), # - ppem - (0x4c, 'MPS', 0, 'MeasurePointSize', 0, 1), # - pointSize - (0x3a, 'MSIRP', 1, 'MoveStackIndirRelPt', 2, 0), # d, p - - (0x63, 'MUL', 0, 'Multiply', 2, 1), # n2, n1 (n1 * n2)/64 - (0x65, 'NEG', 0, 'Negate', 1, 1), # n -n - (0x55, 'NEQ', 0, 'NotEqual', 2, 1), # e2, e1 b - (0x5c, 'NOT', 0, 'LogicalNot', 1, 1), # e ( not e ) - (0x6c, 'NROUND', 2, 'NoRound', 1, 1), # n1 n2 - (0x56, 'ODD', 0, 'Odd', 1, 1), # e b - (0x5b, 'OR', 0, 'LogicalOr', 2, 1), # e2, e1 b - (0x21, 'POP', 0, 'PopTopStack', 1, 0), # e - - (0x45, 'RCVT', 0, 'ReadCVT', 1, 1), # location value - (0x7d, 'RDTG', 0, 'RoundDownToGrid', 0, 0), # - - - (0x7a, 'ROFF', 0, 'RoundOff', 0, 0), # - - - (0x8a, 'ROLL', 0, 'RollTopThreeStack', 3, 3), # a,b,c b,a,c - (0x68, 'ROUND', 2, 'Round', 1, 1), # n1 n2 - (0x43, 'RS', 0, 'ReadStore', 1, 1), # n v - (0x3d, 'RTDG', 0, 'RoundToDoubleGrid', 0, 0), # - - - (0x18, 'RTG', 0, 'RoundToGrid', 0, 0), # - - - (0x19, 'RTHG', 0, 'RoundToHalfGrid', 0, 0), # - - - (0x7c, 'RUTG', 0, 'RoundUpToGrid', 0, 0), # - - - (0x77, 'S45ROUND', 0, 'SuperRound45Degrees', 1, 0), # n - - (0x7e, 'SANGW', 0, 'SetAngleWeight', 1, 0), # weight - - (0x85, 'SCANCTRL', 0, 'ScanConversionControl', 1, 0), # n - - (0x8d, 'SCANTYPE', 0, 'ScanType', 1, 0), # n - - (0x48, 'SCFS', 0, 'SetCoordFromStackFP', 2, 0), # c, p - - (0x1d, 'SCVTCI', 0, 'SetCVTCutIn', 1, 0), # n - - (0x5e, 'SDB', 0, 'SetDeltaBaseInGState', 1, 0), # n - - (0x86, 'SDPVTL', 1, 'SetDualPVectorToLine', 2, 0), # p2, p1 - - (0x5f, 'SDS', 0, 'SetDeltaShiftInGState', 1, 0), # n - - (0x0b, 'SFVFS', 0, 'SetFVectorFromStack', 2, 0), # y, x - - (0x04, 'SFVTCA', 1, 'SetFVectorToAxis', 0, 0), # - - - (0x08, 'SFVTL', 1, 'SetFVectorToLine', 2, 0), # p2, p1 - - (0x0e, 'SFVTPV', 0, 'SetFVectorToPVector', 0, 0), # - - - (0x34, 'SHC', 1, 'ShiftContourByLastPt', 1, 0), # c - - (0x32, 'SHP', 1, 'ShiftPointByLastPoint', -1, 0), # p1, p2, ..., ploopvalue - - (0x38, 'SHPIX', 0, 'ShiftZoneByPixel', -1, 0), # d, p1, p2, ..., ploopvalue - - (0x36, 'SHZ', 1, 'ShiftZoneByLastPoint', 1, 0), # e - - (0x17, 'SLOOP', 0, 'SetLoopVariable', 1, 0), # n - - (0x1a, 'SMD', 0, 'SetMinimumDistance', 1, 0), # distance - - (0x0a, 'SPVFS', 0, 'SetPVectorFromStack', 2, 0), # y, x - - (0x02, 'SPVTCA', 1, 'SetPVectorToAxis', 0, 0), # - - - (0x06, 'SPVTL', 1, 'SetPVectorToLine', 2, 0), # p2, p1 - - (0x76, 'SROUND', 0, 'SuperRound', 1, 0), # n - - (0x10, 'SRP0', 0, 'SetRefPoint0', 1, 0), # p - - (0x11, 'SRP1', 0, 'SetRefPoint1', 1, 0), # p - - (0x12, 'SRP2', 0, 'SetRefPoint2', 1, 0), # p - - (0x1f, 'SSW', 0, 'SetSingleWidth', 1, 0), # n - - 
(0x1e, 'SSWCI', 0, 'SetSingleWidthCutIn', 1, 0), # n - - (0x61, 'SUB', 0, 'Subtract', 2, 1), # n2, n1 (n1 - n2) - (0x00, 'SVTCA', 1, 'SetFPVectorToAxis', 0, 0), # - - - (0x23, 'SWAP', 0, 'SwapTopStack', 2, 2), # e2, e1 e1, e2 - (0x13, 'SZP0', 0, 'SetZonePointer0', 1, 0), # n - - (0x14, 'SZP1', 0, 'SetZonePointer1', 1, 0), # n - - (0x15, 'SZP2', 0, 'SetZonePointer2', 1, 0), # n - - (0x16, 'SZPS', 0, 'SetZonePointerS', 1, 0), # n - - (0x29, 'UTP', 0, 'UnTouchPt', 1, 0), # p - - (0x70, 'WCVTF', 0, 'WriteCVTInFUnits', 2, 0), # n, l - - (0x44, 'WCVTP', 0, 'WriteCVTInPixels', 2, 0), # v, l - - (0x42, 'WS', 0, 'WriteStore', 2, 0), # v, l - -] - -# fmt: on - - -def bitRepr(value, bits): - s = "" - for i in range(bits): - s = "01"[value & 0x1] + s - value = value >> 1 - return s - - -_mnemonicPat = re.compile(r"[A-Z][A-Z0-9]*$") - - -def _makeDict(instructionList): - opcodeDict = {} - mnemonicDict = {} - for op, mnemonic, argBits, name, pops, pushes in instructionList: - assert _mnemonicPat.match(mnemonic) - mnemonicDict[mnemonic] = op, argBits, name - if argBits: - argoffset = op - for i in range(1 << argBits): - opcodeDict[op + i] = mnemonic, argBits, argoffset, name - else: - opcodeDict[op] = mnemonic, 0, 0, name - return opcodeDict, mnemonicDict - - -streamOpcodeDict, streamMnemonicDict = _makeDict(streamInstructions) -opcodeDict, mnemonicDict = _makeDict(instructions) - - -class tt_instructions_error(Exception): - def __init__(self, error): - self.error = error - - def __str__(self): - return "TT instructions error: %s" % repr(self.error) - - -_comment = r"/\*.*?\*/" -_instruction = r"([A-Z][A-Z0-9]*)\s*\[(.*?)\]" -_number = r"-?[0-9]+" -_token = "(%s)|(%s)|(%s)" % (_instruction, _number, _comment) - -_tokenRE = re.compile(_token) -_whiteRE = re.compile(r"\s*") - -_pushCountPat = re.compile(r"[A-Z][A-Z0-9]*\s*\[.*?\]\s*/\* ([0-9]+).*?\*/") - -_indentRE = re.compile(r"^FDEF|IF|ELSE\[ \]\t.+") -_unindentRE = re.compile(r"^ELSE|ENDF|EIF\[ \]\t.+") - - -def _skipWhite(data, pos): - m = _whiteRE.match(data, pos) - newPos = m.regs[0][1] - assert newPos >= pos - return newPos - - -class Program(object): - def __init__(self) -> None: - pass - - def fromBytecode(self, bytecode: bytes) -> None: - self.bytecode = array.array("B", bytecode) - if hasattr(self, "assembly"): - del self.assembly - - def fromAssembly(self, assembly: List[str] | str) -> None: - if isinstance(assembly, list): - self.assembly = assembly - elif isinstance(assembly, str): - self.assembly = assembly.splitlines() - else: - raise TypeError(f"expected str or List[str], got {type(assembly).__name__}") - if hasattr(self, "bytecode"): - del self.bytecode - - def getBytecode(self) -> bytes: - if not hasattr(self, "bytecode"): - self._assemble() - return self.bytecode.tobytes() - - def getAssembly(self, preserve=True) -> List[str]: - if not hasattr(self, "assembly"): - self._disassemble(preserve=preserve) - return self.assembly - - def toXML(self, writer, ttFont) -> None: - if ( - not hasattr(ttFont, "disassembleInstructions") - or ttFont.disassembleInstructions - ): - try: - assembly = self.getAssembly() - except: - import traceback - - tmp = StringIO() - traceback.print_exc(file=tmp) - msg = "An exception occurred during the decompilation of glyph program:\n\n" - msg += tmp.getvalue() - log.error(msg) - writer.begintag("bytecode") - writer.newline() - writer.comment(msg.strip()) - writer.newline() - writer.dumphex(self.getBytecode()) - writer.endtag("bytecode") - writer.newline() - else: - if not assembly: - return - 
writer.begintag("assembly") - writer.newline() - i = 0 - indent = 0 - nInstr = len(assembly) - while i < nInstr: - instr = assembly[i] - if _unindentRE.match(instr): - indent -= 1 - writer.write(writer.indentwhite * indent) - writer.write(instr) - writer.newline() - m = _pushCountPat.match(instr) - i = i + 1 - if m: - nValues = int(m.group(1)) - line: List[str] = [] - j = 0 - for j in range(nValues): - if j and not (j % 25): - writer.write(writer.indentwhite * indent) - writer.write(" ".join(line)) - writer.newline() - line = [] - line.append(assembly[i + j]) - writer.write(writer.indentwhite * indent) - writer.write(" ".join(line)) - writer.newline() - i = i + j + 1 - if _indentRE.match(instr): - indent += 1 - writer.endtag("assembly") - writer.newline() - else: - bytecode = self.getBytecode() - if not bytecode: - return - writer.begintag("bytecode") - writer.newline() - writer.dumphex(bytecode) - writer.endtag("bytecode") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont) -> None: - if name == "assembly": - self.fromAssembly(strjoin(content)) - self._assemble() - del self.assembly - else: - assert name == "bytecode" - self.fromBytecode(readHex(content)) - - def _assemble(self) -> None: - assembly = " ".join(getattr(self, "assembly", [])) - bytecode: List[int] = [] - push = bytecode.append - lenAssembly = len(assembly) - pos = _skipWhite(assembly, 0) - while pos < lenAssembly: - m = _tokenRE.match(assembly, pos) - if m is None: - raise tt_instructions_error( - "Syntax error in TT program (%s)" % assembly[pos - 5 : pos + 15] - ) - dummy, mnemonic, arg, number, comment = m.groups() - pos = m.regs[0][1] - if comment: - pos = _skipWhite(assembly, pos) - continue - - arg = arg.strip() - if mnemonic.startswith("INSTR"): - # Unknown instruction - op = int(mnemonic[5:]) - push(op) - elif mnemonic not in ("PUSH", "NPUSHB", "NPUSHW", "PUSHB", "PUSHW"): - op, argBits, name = mnemonicDict[mnemonic] - if len(arg) != argBits: - raise tt_instructions_error( - "Incorrect number of argument bits (%s[%s])" % (mnemonic, arg) - ) - if arg: - arg = binary2num(arg) - push(op + arg) - else: - push(op) - else: - args = [] - pos = _skipWhite(assembly, pos) - while pos < lenAssembly: - m = _tokenRE.match(assembly, pos) - if m is None: - raise tt_instructions_error( - "Syntax error in TT program (%s)" % assembly[pos : pos + 15] - ) - dummy, _mnemonic, arg, number, comment = m.groups() - if number is None and comment is None: - break - pos = m.regs[0][1] - pos = _skipWhite(assembly, pos) - if comment is not None: - continue - args.append(int(number)) - nArgs = len(args) - if mnemonic == "PUSH": - # Automatically choose the most compact representation - nWords = 0 - while nArgs: - while ( - nWords < nArgs - and nWords < 255 - and not (0 <= args[nWords] <= 255) - ): - nWords += 1 - nBytes = 0 - while ( - nWords + nBytes < nArgs - and nBytes < 255 - and 0 <= args[nWords + nBytes] <= 255 - ): - nBytes += 1 - if ( - nBytes < 2 - and nWords + nBytes < 255 - and nWords + nBytes != nArgs - ): - # Will write bytes as words - nWords += nBytes - continue - - # Write words - if nWords: - if nWords <= 8: - op, argBits, name = streamMnemonicDict["PUSHW"] - op = op + nWords - 1 - push(op) - else: - op, argBits, name = streamMnemonicDict["NPUSHW"] - push(op) - push(nWords) - for value in args[:nWords]: - assert -32768 <= value < 32768, ( - "PUSH value out of range %d" % value - ) - push((value >> 8) & 0xFF) - push(value & 0xFF) - - # Write bytes - if nBytes: - pass - if nBytes <= 8: - op, argBits, name = 
streamMnemonicDict["PUSHB"] - op = op + nBytes - 1 - push(op) - else: - op, argBits, name = streamMnemonicDict["NPUSHB"] - push(op) - push(nBytes) - for value in args[nWords : nWords + nBytes]: - push(value) - - nTotal = nWords + nBytes - args = args[nTotal:] - nArgs -= nTotal - nWords = 0 - else: - # Write exactly what we've been asked to - words = mnemonic[-1] == "W" - op, argBits, name = streamMnemonicDict[mnemonic] - if mnemonic[0] != "N": - assert nArgs <= 8, nArgs - op = op + nArgs - 1 - push(op) - else: - assert nArgs < 256 - push(op) - push(nArgs) - if words: - for value in args: - assert -32768 <= value < 32768, ( - "PUSHW value out of range %d" % value - ) - push((value >> 8) & 0xFF) - push(value & 0xFF) - else: - for value in args: - assert 0 <= value < 256, ( - "PUSHB value out of range %d" % value - ) - push(value) - - pos = _skipWhite(assembly, pos) - - if bytecode: - assert max(bytecode) < 256 and min(bytecode) >= 0 - self.bytecode = array.array("B", bytecode) - - def _disassemble(self, preserve=False) -> None: - assembly = [] - i = 0 - bytecode = getattr(self, "bytecode", []) - numBytecode = len(bytecode) - while i < numBytecode: - op = bytecode[i] - try: - mnemonic, argBits, argoffset, name = opcodeDict[op] - except KeyError: - if op in streamOpcodeDict: - values = [] - - # Merge consecutive PUSH operations - while bytecode[i] in streamOpcodeDict: - op = bytecode[i] - mnemonic, argBits, argoffset, name = streamOpcodeDict[op] - words = mnemonic[-1] == "W" - if argBits: - nValues = op - argoffset + 1 - else: - i = i + 1 - nValues = bytecode[i] - i = i + 1 - assert nValues > 0 - if not words: - for j in range(nValues): - value = bytecode[i] - values.append(repr(value)) - i = i + 1 - else: - for j in range(nValues): - # cast to signed int16 - value = (bytecode[i] << 8) | bytecode[i + 1] - if value >= 0x8000: - value = value - 0x10000 - values.append(repr(value)) - i = i + 2 - if preserve: - break - - if not preserve: - mnemonic = "PUSH" - nValues = len(values) - if nValues == 1: - assembly.append("%s[ ] /* 1 value pushed */" % mnemonic) - else: - assembly.append( - "%s[ ] /* %s values pushed */" % (mnemonic, nValues) - ) - assembly.extend(values) - else: - assembly.append("INSTR%d[ ]" % op) - i = i + 1 - else: - if argBits: - assembly.append( - mnemonic - + "[%s] /* %s */" % (num2binary(op - argoffset, argBits), name) - ) - else: - assembly.append(mnemonic + "[ ] /* %s */" % name) - i = i + 1 - self.assembly = assembly - - def __bool__(self) -> bool: - """ - >>> p = Program() - >>> bool(p) - False - >>> bc = array.array("B", [0]) - >>> p.fromBytecode(bc) - >>> bool(p) - True - >>> p.bytecode.pop() - 0 - >>> bool(p) - False - - >>> p = Program() - >>> asm = ['SVTCA[0]'] - >>> p.fromAssembly(asm) - >>> bool(p) - True - >>> p.assembly.pop() - 'SVTCA[0]' - >>> bool(p) - False - """ - return (hasattr(self, "assembly") and len(self.assembly) > 0) or ( - hasattr(self, "bytecode") and len(self.bytecode) > 0 - ) - - __nonzero__ = __bool__ - - def __eq__(self, other) -> bool: - if type(self) != type(other): - return NotImplemented - return self.__dict__ == other.__dict__ - - def __ne__(self, other) -> bool: - result = self.__eq__(other) - return result if result is NotImplemented else not result - - -def _test(): - """ - >>> _test() - True - """ - - bc = b"""@;:9876543210/.-,+*)(\'&%$#"! 
[... several thousand escaped bytes of the TrueType bytecode doctest data, the remainder of ttProgram.py, and the diff header for the next deleted file, the minified Svelte bundle Example-d0f6c2cc.js, are garbled or missing at this point in the extraction; only that bundle's closing source-map comment survives ...]
-//# sourceMappingURL=Example-d0f6c2cc.js.map
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axes_grid1/parasite_axes.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axes_grid1/parasite_axes.py
deleted file mode 100644
index 2a2b5957e844a00a7db8e04f9dc07b637ab85199..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axes_grid1/parasite_axes.py
+++ /dev/null
@@ -1,257 +0,0 @@
-from matplotlib import _api, cbook -import matplotlib.artist as martist -import matplotlib.transforms as mtransforms -from matplotlib.transforms import Bbox -from .mpl_axes import Axes - - -class ParasiteAxesBase: - - def __init__(self, parent_axes, aux_transform=None, - *, viewlim_mode=None, **kwargs): - self._parent_axes = parent_axes - self.transAux = aux_transform - self.set_viewlim_mode(viewlim_mode) - kwargs["frameon"] = False - super().__init__(parent_axes.figure, parent_axes._position, **kwargs) - - def clear(self): - super().clear() - martist.setp(self.get_children(), visible=False) - self._get_lines = self._parent_axes._get_lines - self._parent_axes.callbacks._connect_picklable( - "xlim_changed", self._sync_lims) - self._parent_axes.callbacks._connect_picklable( - "ylim_changed", self._sync_lims) - - def pick(self, mouseevent): - # This most likely goes to Artist.pick (depending on axes_class given - # to the factory), which only handles pick events registered on the - axes associated with each child: - super().pick(mouseevent) - # But parasite axes are additionally given pick events from their host - axes (cf.
HostAxesBase.pick), which we handle here: - for a in self.get_children(): - if (hasattr(mouseevent.inaxes, "parasites") - and self in mouseevent.inaxes.parasites): - a.pick(mouseevent) - - # aux_transform support - - def _set_lim_and_transforms(self): - if self.transAux is not None: - self.transAxes = self._parent_axes.transAxes - self.transData = self.transAux + self._parent_axes.transData - self._xaxis_transform = mtransforms.blended_transform_factory( - self.transData, self.transAxes) - self._yaxis_transform = mtransforms.blended_transform_factory( - self.transAxes, self.transData) - else: - super()._set_lim_and_transforms() - - def set_viewlim_mode(self, mode): - _api.check_in_list([None, "equal", "transform"], mode=mode) - self._viewlim_mode = mode - - def get_viewlim_mode(self): - return self._viewlim_mode - - def _sync_lims(self, parent): - viewlim = parent.viewLim.frozen() - mode = self.get_viewlim_mode() - if mode is None: - pass - elif mode == "equal": - self.viewLim.set(viewlim) - elif mode == "transform": - self.viewLim.set(viewlim.transformed(self.transAux.inverted())) - else: - _api.check_in_list([None, "equal", "transform"], mode=mode) - - # end of aux_transform support - - -parasite_axes_class_factory = cbook._make_class_factory( - ParasiteAxesBase, "{}Parasite") -ParasiteAxes = parasite_axes_class_factory(Axes) - - -class HostAxesBase: - def __init__(self, *args, **kwargs): - self.parasites = [] - super().__init__(*args, **kwargs) - - def get_aux_axes( - self, tr=None, viewlim_mode="equal", axes_class=None, **kwargs): - """ - Add a parasite axes to this host. - - Despite this method's name, this should actually be thought of as an - ``add_parasite_axes`` method. - - .. versionchanged:: 3.7 - Defaults to same base axes class as host axes. - - Parameters - ---------- - tr : `~matplotlib.transforms.Transform` or None, default: None - If a `.Transform`, the following relation will hold: - ``parasite.transData = tr + host.transData``. - If None, the parasite's and the host's ``transData`` are unrelated. - viewlim_mode : {"equal", "transform", None}, default: "equal" - How the parasite's view limits are set: directly equal to the - parent axes ("equal"), equal after application of *tr* - ("transform"), or independently (None). - axes_class : subclass type of `~matplotlib.axes.Axes`, optional - The `~.axes.Axes` subclass that is instantiated. If None, the base - class of the host axes is used. - **kwargs - Other parameters are forwarded to the parasite axes constructor. - """ - if axes_class is None: - axes_class = self._base_axes_class - parasite_axes_class = parasite_axes_class_factory(axes_class) - ax2 = parasite_axes_class( - self, tr, viewlim_mode=viewlim_mode, **kwargs) - # note that ax2.transData == tr + ax1.transData - # Anything you draw in ax2 will match the ticks and grids of ax1. 
- self.parasites.append(ax2) - ax2._remove_method = self.parasites.remove - return ax2 - - def draw(self, renderer): - orig_children_len = len(self._children) - - locator = self.get_axes_locator() - if locator: - pos = locator(self, renderer) - self.set_position(pos, which="active") - self.apply_aspect(pos) - else: - self.apply_aspect() - - rect = self.get_position() - for ax in self.parasites: - ax.apply_aspect(rect) - self._children.extend(ax.get_children()) - - super().draw(renderer) - del self._children[orig_children_len:] - - def clear(self): - super().clear() - for ax in self.parasites: - ax.clear() - - def pick(self, mouseevent): - super().pick(mouseevent) - # Also pass pick events on to parasite axes and, in turn, their - # children (cf. ParasiteAxesBase.pick) - for a in self.parasites: - a.pick(mouseevent) - - def twinx(self, axes_class=None): - """ - Create a twin of Axes with a shared x-axis but independent y-axis. - - The y-axis of self will have ticks on the left and the returned axes - will have ticks on the right. - """ - ax = self._add_twin_axes(axes_class, sharex=self) - self.axis["right"].set_visible(False) - ax.axis["right"].set_visible(True) - ax.axis["left", "top", "bottom"].set_visible(False) - return ax - - def twiny(self, axes_class=None): - """ - Create a twin of Axes with a shared y-axis but independent x-axis. - - The x-axis of self will have ticks on the bottom and the returned axes - will have ticks on the top. - """ - ax = self._add_twin_axes(axes_class, sharey=self) - self.axis["top"].set_visible(False) - ax.axis["top"].set_visible(True) - ax.axis["left", "right", "bottom"].set_visible(False) - return ax - - def twin(self, aux_trans=None, axes_class=None): - """ - Create a twin of Axes with no shared axis. - - While self will have ticks on the left and bottom axis, the returned - axes will have ticks on the top and right axis. - """ - if aux_trans is None: - aux_trans = mtransforms.IdentityTransform() - ax = self._add_twin_axes( - axes_class, aux_transform=aux_trans, viewlim_mode="transform") - self.axis["top", "right"].set_visible(False) - ax.axis["top", "right"].set_visible(True) - ax.axis["left", "bottom"].set_visible(False) - return ax - - def _add_twin_axes(self, axes_class, **kwargs): - """ - Helper for `.twinx`/`.twiny`/`.twin`. - - *kwargs* are forwarded to the parasite axes constructor. 
- """ - if axes_class is None: - axes_class = self._base_axes_class - ax = parasite_axes_class_factory(axes_class)(self, **kwargs) - self.parasites.append(ax) - ax._remove_method = self._remove_any_twin - return ax - - def _remove_any_twin(self, ax): - self.parasites.remove(ax) - restore = ["top", "right"] - if ax._sharex: - restore.remove("top") - if ax._sharey: - restore.remove("right") - self.axis[tuple(restore)].set_visible(True) - self.axis[tuple(restore)].toggle(ticklabels=False, label=False) - - @_api.make_keyword_only("3.8", "call_axes_locator") - def get_tightbbox(self, renderer=None, call_axes_locator=True, - bbox_extra_artists=None): - bbs = [ - *[ax.get_tightbbox(renderer, call_axes_locator=call_axes_locator) - for ax in self.parasites], - super().get_tightbbox(renderer, - call_axes_locator=call_axes_locator, - bbox_extra_artists=bbox_extra_artists)] - return Bbox.union([b for b in bbs if b.width != 0 or b.height != 0]) - - -host_axes_class_factory = host_subplot_class_factory = \ - cbook._make_class_factory(HostAxesBase, "{}HostAxes", "_base_axes_class") -HostAxes = SubplotHost = host_axes_class_factory(Axes) - - -def host_axes(*args, axes_class=Axes, figure=None, **kwargs): - """ - Create axes that can act as a hosts to parasitic axes. - - Parameters - ---------- - figure : `~matplotlib.figure.Figure` - Figure to which the axes will be added. Defaults to the current figure - `.pyplot.gcf()`. - - *args, **kwargs - Will be passed on to the underlying `~.axes.Axes` object creation. - """ - import matplotlib.pyplot as plt - host_axes_class = host_axes_class_factory(axes_class) - if figure is None: - figure = plt.gcf() - ax = host_axes_class(figure, *args, **kwargs) - figure.add_axes(ax) - return ax - - -host_subplot = host_axes diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/negative_bounds/issue_20853.f90 b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/negative_bounds/issue_20853.f90 deleted file mode 100644 index bf1fa92853316cc31f825c024855088f42577a1c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/negative_bounds/issue_20853.f90 +++ /dev/null @@ -1,7 +0,0 @@ -subroutine foo(is_, ie_, arr, tout) - implicit none - integer :: is_,ie_ - real, intent(in) :: arr(is_:ie_) - real, intent(out) :: tout(is_:ie_) - tout = arr -end diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/parameter/constant_both.f90 b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/parameter/constant_both.f90 deleted file mode 100644 index ac90cedc525a6172a9b72f3bc76f57b79d641b6c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/parameter/constant_both.f90 +++ /dev/null @@ -1,57 +0,0 @@ -! Check that parameters are correct intercepted. -! Constants with comma separations are commonly -! 
used, for instance Pi = 3._dp -subroutine foo(x) - implicit none - integer, parameter :: sp = selected_real_kind(6) - integer, parameter :: dp = selected_real_kind(15) - integer, parameter :: ii = selected_int_kind(9) - integer, parameter :: il = selected_int_kind(18) - real(dp), intent(inout) :: x - dimension x(3) - real(sp), parameter :: three_s = 3._sp - real(dp), parameter :: three_d = 3._dp - integer(ii), parameter :: three_i = 3_ii - integer(il), parameter :: three_l = 3_il - x(1) = x(1) + x(2) * three_s * three_i + x(3) * three_d * three_l - x(2) = x(2) * three_s - x(3) = x(3) * three_l - return -end subroutine - - -subroutine foo_no(x) - implicit none - integer, parameter :: sp = selected_real_kind(6) - integer, parameter :: dp = selected_real_kind(15) - integer, parameter :: ii = selected_int_kind(9) - integer, parameter :: il = selected_int_kind(18) - real(dp), intent(inout) :: x - dimension x(3) - real(sp), parameter :: three_s = 3. - real(dp), parameter :: three_d = 3. - integer(ii), parameter :: three_i = 3 - integer(il), parameter :: three_l = 3 - x(1) = x(1) + x(2) * three_s * three_i + x(3) * three_d * three_l - x(2) = x(2) * three_s - x(3) = x(3) * three_l - return -end subroutine - -subroutine foo_sum(x) - implicit none - integer, parameter :: sp = selected_real_kind(6) - integer, parameter :: dp = selected_real_kind(15) - integer, parameter :: ii = selected_int_kind(9) - integer, parameter :: il = selected_int_kind(18) - real(dp), intent(inout) :: x - dimension x(3) - real(sp), parameter :: three_s = 2._sp + 1._sp - real(dp), parameter :: three_d = 1._dp + 2._dp - integer(ii), parameter :: three_i = 2_ii + 1_ii - integer(il), parameter :: three_l = 1_il + 2_il - x(1) = x(1) + x(2) * three_s * three_i + x(3) * three_d * three_l - x(2) = x(2) * three_s - x(3) = x(3) * three_l - return -end subroutine diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/api_resources/engine.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/api_resources/engine.py deleted file mode 100644 index 5a0c467c2f30d8723c1d02a874204eaeb8f7da9e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/api_resources/engine.py +++ /dev/null @@ -1,50 +0,0 @@ -import time -import warnings - -from openai import util -from openai.api_resources.abstract import ListableAPIResource, UpdateableAPIResource -from openai.error import TryAgain - - -class Engine(ListableAPIResource, UpdateableAPIResource): - OBJECT_NAME = "engines" - - def generate(self, timeout=None, **params): - start = time.time() - while True: - try: - return self.request( - "post", - self.instance_url() + "/generate", - params, - stream=params.get("stream"), - plain_old_data=True, - ) - except TryAgain as e: - if timeout is not None and time.time() > start + timeout: - raise - - util.log_info("Waiting for model to warm up", error=e) - - async def agenerate(self, timeout=None, **params): - start = time.time() - while True: - try: - return await self.arequest( - "post", - self.instance_url() + "/generate", - params, - stream=params.get("stream"), - plain_old_data=True, - ) - except TryAgain as e: - if timeout is not None and time.time() > start + timeout: - raise - - util.log_info("Waiting for model to warm up", error=e) - - def embeddings(self, **params): - warnings.warn( - "Engine.embeddings is deprecated, use Embedding.create", DeprecationWarning - ) - return self.request("post", self.instance_url() + "/embeddings", params) diff 
--git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/categorical/test_constructors.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/categorical/test_constructors.py deleted file mode 100644 index e25e31e2f2e9e9b36102915bda83bc935a4ef1e7..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/categorical/test_constructors.py +++ /dev/null @@ -1,778 +0,0 @@ -from datetime import ( - date, - datetime, -) - -import numpy as np -import pytest - -from pandas.core.dtypes.common import ( - is_float_dtype, - is_integer_dtype, -) -from pandas.core.dtypes.dtypes import CategoricalDtype - -import pandas as pd -from pandas import ( - Categorical, - CategoricalIndex, - DatetimeIndex, - Index, - Interval, - IntervalIndex, - MultiIndex, - NaT, - Series, - Timestamp, - date_range, - period_range, - timedelta_range, -) -import pandas._testing as tm - - -class TestCategoricalConstructors: - def test_fastpath_deprecated(self): - codes = np.array([1, 2, 3]) - dtype = CategoricalDtype(categories=["a", "b", "c", "d"], ordered=False) - msg = "The 'fastpath' keyword in Categorical is deprecated" - with tm.assert_produces_warning(DeprecationWarning, match=msg): - Categorical(codes, dtype=dtype, fastpath=True) - - def test_categorical_from_cat_and_dtype_str_preserve_ordered(self): - # GH#49309 we should preserve orderedness in `res` - cat = Categorical([3, 1], categories=[3, 2, 1], ordered=True) - - res = Categorical(cat, dtype="category") - assert res.dtype.ordered - - def test_categorical_disallows_scalar(self): - # GH#38433 - with pytest.raises(TypeError, match="Categorical input must be list-like"): - Categorical("A", categories=["A", "B"]) - - def test_categorical_1d_only(self): - # ndim > 1 - msg = "> 1 ndim Categorical are not supported at this time" - with pytest.raises(NotImplementedError, match=msg): - Categorical(np.array([list("abcd")])) - - def test_validate_ordered(self): - # see gh-14058 - exp_msg = "'ordered' must either be 'True' or 'False'" - exp_err = TypeError - - # This should be a boolean. 
- ordered = np.array([0, 1, 2]) - - with pytest.raises(exp_err, match=exp_msg): - Categorical([1, 2, 3], ordered=ordered) - - with pytest.raises(exp_err, match=exp_msg): - Categorical.from_codes( - [0, 0, 1], categories=["a", "b", "c"], ordered=ordered - ) - - def test_constructor_empty(self): - # GH 17248 - c = Categorical([]) - expected = Index([]) - tm.assert_index_equal(c.categories, expected) - - c = Categorical([], categories=[1, 2, 3]) - expected = Index([1, 2, 3], dtype=np.int64) - tm.assert_index_equal(c.categories, expected) - - def test_constructor_empty_boolean(self): - # see gh-22702 - cat = Categorical([], categories=[True, False]) - categories = sorted(cat.categories.tolist()) - assert categories == [False, True] - - def test_constructor_tuples(self): - values = np.array([(1,), (1, 2), (1,), (1, 2)], dtype=object) - result = Categorical(values) - expected = Index([(1,), (1, 2)], tupleize_cols=False) - tm.assert_index_equal(result.categories, expected) - assert result.ordered is False - - def test_constructor_tuples_datetimes(self): - # numpy will auto reshape when all of the tuples are the - # same len, so add an extra one with 2 items and slice it off - values = np.array( - [ - (Timestamp("2010-01-01"),), - (Timestamp("2010-01-02"),), - (Timestamp("2010-01-01"),), - (Timestamp("2010-01-02"),), - ("a", "b"), - ], - dtype=object, - )[:-1] - result = Categorical(values) - expected = Index( - [(Timestamp("2010-01-01"),), (Timestamp("2010-01-02"),)], - tupleize_cols=False, - ) - tm.assert_index_equal(result.categories, expected) - - def test_constructor_unsortable(self): - # it works! - arr = np.array([1, 2, 3, datetime.now()], dtype="O") - factor = Categorical(arr, ordered=False) - assert not factor.ordered - - # this however will raise as cannot be sorted - msg = ( - "'values' is not ordered, please explicitly specify the " - "categories order by passing in a categories argument." 
- ) - with pytest.raises(TypeError, match=msg): - Categorical(arr, ordered=True) - - def test_constructor_interval(self): - result = Categorical( - [Interval(1, 2), Interval(2, 3), Interval(3, 6)], ordered=True - ) - ii = IntervalIndex([Interval(1, 2), Interval(2, 3), Interval(3, 6)]) - exp = Categorical(ii, ordered=True) - tm.assert_categorical_equal(result, exp) - tm.assert_index_equal(result.categories, ii) - - def test_constructor(self): - exp_arr = np.array(["a", "b", "c", "a", "b", "c"], dtype=np.object_) - c1 = Categorical(exp_arr) - tm.assert_numpy_array_equal(c1.__array__(), exp_arr) - c2 = Categorical(exp_arr, categories=["a", "b", "c"]) - tm.assert_numpy_array_equal(c2.__array__(), exp_arr) - c2 = Categorical(exp_arr, categories=["c", "b", "a"]) - tm.assert_numpy_array_equal(c2.__array__(), exp_arr) - - # categories must be unique - msg = "Categorical categories must be unique" - with pytest.raises(ValueError, match=msg): - Categorical([1, 2], [1, 2, 2]) - - with pytest.raises(ValueError, match=msg): - Categorical(["a", "b"], ["a", "b", "b"]) - - # The default should be unordered - c1 = Categorical(["a", "b", "c", "a"]) - assert not c1.ordered - - # Categorical as input - c1 = Categorical(["a", "b", "c", "a"]) - c2 = Categorical(c1) - tm.assert_categorical_equal(c1, c2) - - c1 = Categorical(["a", "b", "c", "a"], categories=["a", "b", "c", "d"]) - c2 = Categorical(c1) - tm.assert_categorical_equal(c1, c2) - - c1 = Categorical(["a", "b", "c", "a"], categories=["a", "c", "b"]) - c2 = Categorical(c1) - tm.assert_categorical_equal(c1, c2) - - c1 = Categorical(["a", "b", "c", "a"], categories=["a", "c", "b"]) - c2 = Categorical(c1, categories=["a", "b", "c"]) - tm.assert_numpy_array_equal(c1.__array__(), c2.__array__()) - tm.assert_index_equal(c2.categories, Index(["a", "b", "c"])) - - # Series of dtype category - c1 = Categorical(["a", "b", "c", "a"], categories=["a", "b", "c", "d"]) - c2 = Categorical(Series(c1)) - tm.assert_categorical_equal(c1, c2) - - c1 = Categorical(["a", "b", "c", "a"], categories=["a", "c", "b"]) - c2 = Categorical(Series(c1)) - tm.assert_categorical_equal(c1, c2) - - # Series - c1 = Categorical(["a", "b", "c", "a"]) - c2 = Categorical(Series(["a", "b", "c", "a"])) - tm.assert_categorical_equal(c1, c2) - - c1 = Categorical(["a", "b", "c", "a"], categories=["a", "b", "c", "d"]) - c2 = Categorical(Series(["a", "b", "c", "a"]), categories=["a", "b", "c", "d"]) - tm.assert_categorical_equal(c1, c2) - - # This should result in integer categories, not float! - cat = Categorical([1, 2, 3, np.nan], categories=[1, 2, 3]) - assert is_integer_dtype(cat.categories) - - # https://github.com/pandas-dev/pandas/issues/3678 - cat = Categorical([np.nan, 1, 2, 3]) - assert is_integer_dtype(cat.categories) - - # this should result in floats - cat = Categorical([np.nan, 1, 2.0, 3]) - assert is_float_dtype(cat.categories) - - cat = Categorical([np.nan, 1.0, 2.0, 3.0]) - assert is_float_dtype(cat.categories) - - # This doesn't work -> this would probably need some kind of "remember - # the original type" feature to try to cast the array interface result - # to... 
- - # vals = np.asarray(cat[cat.notna()]) - # assert is_integer_dtype(vals) - - # corner cases - cat = Categorical([1]) - assert len(cat.categories) == 1 - assert cat.categories[0] == 1 - assert len(cat.codes) == 1 - assert cat.codes[0] == 0 - - cat = Categorical(["a"]) - assert len(cat.categories) == 1 - assert cat.categories[0] == "a" - assert len(cat.codes) == 1 - assert cat.codes[0] == 0 - - # two arrays - # - when the first is an integer dtype and the second is not - # - when the resulting codes are all -1/NaN - with tm.assert_produces_warning(None): - Categorical([0, 1, 2, 0, 1, 2], categories=["a", "b", "c"]) - - with tm.assert_produces_warning(None): - Categorical([0, 1, 2, 0, 1, 2], categories=[3, 4, 5]) - - # the next one are from the old docs - with tm.assert_produces_warning(None): - Categorical([0, 1, 2, 0, 1, 2], [1, 2, 3]) - cat = Categorical([1, 2], categories=[1, 2, 3]) - - # this is a legitimate constructor - with tm.assert_produces_warning(None): - Categorical(np.array([], dtype="int64"), categories=[3, 2, 1], ordered=True) - - def test_constructor_with_existing_categories(self): - # GH25318: constructing with pd.Series used to bogusly skip recoding - # categories - c0 = Categorical(["a", "b", "c", "a"]) - c1 = Categorical(["a", "b", "c", "a"], categories=["b", "c"]) - - c2 = Categorical(c0, categories=c1.categories) - tm.assert_categorical_equal(c1, c2) - - c3 = Categorical(Series(c0), categories=c1.categories) - tm.assert_categorical_equal(c1, c3) - - def test_constructor_not_sequence(self): - # https://github.com/pandas-dev/pandas/issues/16022 - msg = r"^Parameter 'categories' must be list-like, was" - with pytest.raises(TypeError, match=msg): - Categorical(["a", "b"], categories="a") - - def test_constructor_with_null(self): - # Cannot have NaN in categories - msg = "Categorical categories cannot be null" - with pytest.raises(ValueError, match=msg): - Categorical([np.nan, "a", "b", "c"], categories=[np.nan, "a", "b", "c"]) - - with pytest.raises(ValueError, match=msg): - Categorical([None, "a", "b", "c"], categories=[None, "a", "b", "c"]) - - with pytest.raises(ValueError, match=msg): - Categorical( - DatetimeIndex(["nat", "20160101"]), - categories=[NaT, Timestamp("20160101")], - ) - - def test_constructor_with_index(self): - ci = CategoricalIndex(list("aabbca"), categories=list("cab")) - tm.assert_categorical_equal(ci.values, Categorical(ci)) - - ci = CategoricalIndex(list("aabbca"), categories=list("cab")) - tm.assert_categorical_equal( - ci.values, Categorical(ci.astype(object), categories=ci.categories) - ) - - def test_constructor_with_generator(self): - # This was raising an Error in isna(single_val).any() because isna - # returned a scalar for a generator - - exp = Categorical([0, 1, 2]) - cat = Categorical(x for x in [0, 1, 2]) - tm.assert_categorical_equal(cat, exp) - cat = Categorical(range(3)) - tm.assert_categorical_equal(cat, exp) - - MultiIndex.from_product([range(5), ["a", "b", "c"]]) - - # check that categories accept generators and sequences - cat = Categorical([0, 1, 2], categories=(x for x in [0, 1, 2])) - tm.assert_categorical_equal(cat, exp) - cat = Categorical([0, 1, 2], categories=range(3)) - tm.assert_categorical_equal(cat, exp) - - def test_constructor_with_rangeindex(self): - # RangeIndex is preserved in Categories - rng = Index(range(3)) - - cat = Categorical(rng) - tm.assert_index_equal(cat.categories, rng, exact=True) - - cat = Categorical([1, 2, 0], categories=rng) - tm.assert_index_equal(cat.categories, rng, exact=True) - - 
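The assertions above read most easily next to a concrete call sequence. The snippet below is a minimal, hypothetical sketch (it is not part of the deleted test module) of the constructor behaviours these tests pin down: explicit `categories`, an ordered `CategoricalDtype`, and `Categorical.from_codes`; it assumes a current pandas installation.

```python
from pandas import Categorical
from pandas.api.types import CategoricalDtype

# Values outside the explicit categories are coded as -1, i.e. NaN.
c = Categorical(["a", "b", "d"], categories=["a", "b", "c"])
print(c.codes)  # [ 0  1 -1]

# An ordered dtype makes comparisons and min/max meaningful.
dtype = CategoricalDtype(categories=["low", "high"], ordered=True)
print(Categorical(["low", "high", "low"], dtype=dtype).max())  # high

# from_codes maps integer codes straight onto categories; codes must lie
# in [-1, len(categories) - 1], otherwise a ValueError is raised.
print(Categorical.from_codes([0, 1, -1], categories=["a", "b"]))
```

Passing both `dtype=` and `categories=`/`ordered=` raises, which is what `test_constructor_dtype_and_others_raises` below checks.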
@pytest.mark.parametrize( - "dtl", - [ - date_range("1995-01-01 00:00:00", periods=5, freq="s"), - date_range("1995-01-01 00:00:00", periods=5, freq="s", tz="US/Eastern"), - timedelta_range("1 day", periods=5, freq="s"), - ], - ) - def test_constructor_with_datetimelike(self, dtl): - # see gh-12077 - # constructor with a datetimelike and NaT - - s = Series(dtl) - c = Categorical(s) - - expected = type(dtl)(s) - expected._data.freq = None - - tm.assert_index_equal(c.categories, expected) - tm.assert_numpy_array_equal(c.codes, np.arange(5, dtype="int8")) - - # with NaT - s2 = s.copy() - s2.iloc[-1] = NaT - c = Categorical(s2) - - expected = type(dtl)(s2.dropna()) - expected._data.freq = None - - tm.assert_index_equal(c.categories, expected) - - exp = np.array([0, 1, 2, 3, -1], dtype=np.int8) - tm.assert_numpy_array_equal(c.codes, exp) - - result = repr(c) - assert "NaT" in result - - def test_constructor_from_index_series_datetimetz(self): - idx = date_range("2015-01-01 10:00", freq="D", periods=3, tz="US/Eastern") - idx = idx._with_freq(None) # freq not preserved in result.categories - result = Categorical(idx) - tm.assert_index_equal(result.categories, idx) - - result = Categorical(Series(idx)) - tm.assert_index_equal(result.categories, idx) - - def test_constructor_date_objects(self): - # we dont cast date objects to timestamps, matching Index constructor - v = date.today() - - cat = Categorical([v, v]) - assert cat.categories.dtype == object - assert type(cat.categories[0]) is date - - def test_constructor_from_index_series_timedelta(self): - idx = timedelta_range("1 days", freq="D", periods=3) - idx = idx._with_freq(None) # freq not preserved in result.categories - result = Categorical(idx) - tm.assert_index_equal(result.categories, idx) - - result = Categorical(Series(idx)) - tm.assert_index_equal(result.categories, idx) - - def test_constructor_from_index_series_period(self): - idx = period_range("2015-01-01", freq="D", periods=3) - result = Categorical(idx) - tm.assert_index_equal(result.categories, idx) - - result = Categorical(Series(idx)) - tm.assert_index_equal(result.categories, idx) - - @pytest.mark.parametrize( - "values", - [ - np.array([1.0, 1.2, 1.8, np.nan]), - np.array([1, 2, 3], dtype="int64"), - ["a", "b", "c", np.nan], - [pd.Period("2014-01"), pd.Period("2014-02"), NaT], - [Timestamp("2014-01-01"), Timestamp("2014-01-02"), NaT], - [ - Timestamp("2014-01-01", tz="US/Eastern"), - Timestamp("2014-01-02", tz="US/Eastern"), - NaT, - ], - ], - ) - def test_constructor_invariant(self, values): - # GH 14190 - c = Categorical(values) - c2 = Categorical(c) - tm.assert_categorical_equal(c, c2) - - @pytest.mark.parametrize("ordered", [True, False]) - def test_constructor_with_dtype(self, ordered): - categories = ["b", "a", "c"] - dtype = CategoricalDtype(categories, ordered=ordered) - result = Categorical(["a", "b", "a", "c"], dtype=dtype) - expected = Categorical( - ["a", "b", "a", "c"], categories=categories, ordered=ordered - ) - tm.assert_categorical_equal(result, expected) - assert result.ordered is ordered - - def test_constructor_dtype_and_others_raises(self): - dtype = CategoricalDtype(["a", "b"], ordered=True) - msg = "Cannot specify `categories` or `ordered` together with `dtype`." 
- with pytest.raises(ValueError, match=msg): - Categorical(["a", "b"], categories=["a", "b"], dtype=dtype) - - with pytest.raises(ValueError, match=msg): - Categorical(["a", "b"], ordered=True, dtype=dtype) - - with pytest.raises(ValueError, match=msg): - Categorical(["a", "b"], ordered=False, dtype=dtype) - - @pytest.mark.parametrize("categories", [None, ["a", "b"], ["a", "c"]]) - @pytest.mark.parametrize("ordered", [True, False]) - def test_constructor_str_category(self, categories, ordered): - result = Categorical( - ["a", "b"], categories=categories, ordered=ordered, dtype="category" - ) - expected = Categorical(["a", "b"], categories=categories, ordered=ordered) - tm.assert_categorical_equal(result, expected) - - def test_constructor_str_unknown(self): - with pytest.raises(ValueError, match="Unknown dtype"): - Categorical([1, 2], dtype="foo") - - def test_constructor_np_strs(self): - # GH#31499 Hashtable.map_locations needs to work on np.str_ objects - cat = Categorical(["1", "0", "1"], [np.str_("0"), np.str_("1")]) - assert all(isinstance(x, np.str_) for x in cat.categories) - - def test_constructor_from_categorical_with_dtype(self): - dtype = CategoricalDtype(["a", "b", "c"], ordered=True) - values = Categorical(["a", "b", "d"]) - result = Categorical(values, dtype=dtype) - # We use dtype.categories, not values.categories - expected = Categorical( - ["a", "b", "d"], categories=["a", "b", "c"], ordered=True - ) - tm.assert_categorical_equal(result, expected) - - def test_constructor_from_categorical_with_unknown_dtype(self): - dtype = CategoricalDtype(None, ordered=True) - values = Categorical(["a", "b", "d"]) - result = Categorical(values, dtype=dtype) - # We use values.categories, not dtype.categories - expected = Categorical( - ["a", "b", "d"], categories=["a", "b", "d"], ordered=True - ) - tm.assert_categorical_equal(result, expected) - - def test_constructor_from_categorical_string(self): - values = Categorical(["a", "b", "d"]) - # use categories, ordered - result = Categorical( - values, categories=["a", "b", "c"], ordered=True, dtype="category" - ) - expected = Categorical( - ["a", "b", "d"], categories=["a", "b", "c"], ordered=True - ) - tm.assert_categorical_equal(result, expected) - - # No string - result = Categorical(values, categories=["a", "b", "c"], ordered=True) - tm.assert_categorical_equal(result, expected) - - def test_constructor_with_categorical_categories(self): - # GH17884 - expected = Categorical(["a", "b"], categories=["a", "b", "c"]) - - result = Categorical(["a", "b"], categories=Categorical(["a", "b", "c"])) - tm.assert_categorical_equal(result, expected) - - result = Categorical(["a", "b"], categories=CategoricalIndex(["a", "b", "c"])) - tm.assert_categorical_equal(result, expected) - - @pytest.mark.parametrize("klass", [lambda x: np.array(x, dtype=object), list]) - def test_construction_with_null(self, klass, nulls_fixture): - # https://github.com/pandas-dev/pandas/issues/31927 - values = klass(["a", nulls_fixture, "b"]) - result = Categorical(values) - - dtype = CategoricalDtype(["a", "b"]) - codes = [0, -1, 1] - expected = Categorical.from_codes(codes=codes, dtype=dtype) - - tm.assert_categorical_equal(result, expected) - - @pytest.mark.parametrize("validate", [True, False]) - def test_from_codes_nullable_int_categories(self, any_numeric_ea_dtype, validate): - # GH#39649 - cats = pd.array(range(5), dtype=any_numeric_ea_dtype) - codes = np.random.default_rng(2).integers(5, size=3) - dtype = CategoricalDtype(cats) - arr = Categorical.from_codes(codes, 
dtype=dtype, validate=validate) - assert arr.categories.dtype == cats.dtype - tm.assert_index_equal(arr.categories, Index(cats)) - - def test_from_codes_empty(self): - cat = ["a", "b", "c"] - result = Categorical.from_codes([], categories=cat) - expected = Categorical([], categories=cat) - - tm.assert_categorical_equal(result, expected) - - @pytest.mark.parametrize("validate", [True, False]) - def test_from_codes_validate(self, validate): - # GH53122 - dtype = CategoricalDtype(["a", "b"]) - if validate: - with pytest.raises(ValueError, match="codes need to be between "): - Categorical.from_codes([4, 5], dtype=dtype, validate=validate) - else: - # passes, though has incorrect codes, but that's the user responsibility - Categorical.from_codes([4, 5], dtype=dtype, validate=validate) - - def test_from_codes_too_few_categories(self): - dtype = CategoricalDtype(categories=[1, 2]) - msg = "codes need to be between " - with pytest.raises(ValueError, match=msg): - Categorical.from_codes([1, 2], categories=dtype.categories) - with pytest.raises(ValueError, match=msg): - Categorical.from_codes([1, 2], dtype=dtype) - - def test_from_codes_non_int_codes(self): - dtype = CategoricalDtype(categories=[1, 2]) - msg = "codes need to be array-like integers" - with pytest.raises(ValueError, match=msg): - Categorical.from_codes(["a"], categories=dtype.categories) - with pytest.raises(ValueError, match=msg): - Categorical.from_codes(["a"], dtype=dtype) - - def test_from_codes_non_unique_categories(self): - with pytest.raises(ValueError, match="Categorical categories must be unique"): - Categorical.from_codes([0, 1, 2], categories=["a", "a", "b"]) - - def test_from_codes_nan_cat_included(self): - with pytest.raises(ValueError, match="Categorical categories cannot be null"): - Categorical.from_codes([0, 1, 2], categories=["a", "b", np.nan]) - - def test_from_codes_too_negative(self): - dtype = CategoricalDtype(categories=["a", "b", "c"]) - msg = r"codes need to be between -1 and len\(categories\)-1" - with pytest.raises(ValueError, match=msg): - Categorical.from_codes([-2, 1, 2], categories=dtype.categories) - with pytest.raises(ValueError, match=msg): - Categorical.from_codes([-2, 1, 2], dtype=dtype) - - def test_from_codes(self): - dtype = CategoricalDtype(categories=["a", "b", "c"]) - exp = Categorical(["a", "b", "c"], ordered=False) - res = Categorical.from_codes([0, 1, 2], categories=dtype.categories) - tm.assert_categorical_equal(exp, res) - - res = Categorical.from_codes([0, 1, 2], dtype=dtype) - tm.assert_categorical_equal(exp, res) - - @pytest.mark.parametrize("klass", [Categorical, CategoricalIndex]) - def test_from_codes_with_categorical_categories(self, klass): - # GH17884 - expected = Categorical(["a", "b"], categories=["a", "b", "c"]) - - result = Categorical.from_codes([0, 1], categories=klass(["a", "b", "c"])) - tm.assert_categorical_equal(result, expected) - - @pytest.mark.parametrize("klass", [Categorical, CategoricalIndex]) - def test_from_codes_with_non_unique_categorical_categories(self, klass): - with pytest.raises(ValueError, match="Categorical categories must be unique"): - Categorical.from_codes([0, 1], klass(["a", "b", "a"])) - - def test_from_codes_with_nan_code(self): - # GH21767 - codes = [1, 2, np.nan] - dtype = CategoricalDtype(categories=["a", "b", "c"]) - with pytest.raises(ValueError, match="codes need to be array-like integers"): - Categorical.from_codes(codes, categories=dtype.categories) - with pytest.raises(ValueError, match="codes need to be array-like integers"): - 
Categorical.from_codes(codes, dtype=dtype) - - @pytest.mark.parametrize("codes", [[1.0, 2.0, 0], [1.1, 2.0, 0]]) - def test_from_codes_with_float(self, codes): - # GH21767 - # float codes should raise even if values are equal to integers - dtype = CategoricalDtype(categories=["a", "b", "c"]) - - msg = "codes need to be array-like integers" - with pytest.raises(ValueError, match=msg): - Categorical.from_codes(codes, dtype.categories) - with pytest.raises(ValueError, match=msg): - Categorical.from_codes(codes, dtype=dtype) - - def test_from_codes_with_dtype_raises(self): - msg = "Cannot specify" - with pytest.raises(ValueError, match=msg): - Categorical.from_codes( - [0, 1], categories=["a", "b"], dtype=CategoricalDtype(["a", "b"]) - ) - - with pytest.raises(ValueError, match=msg): - Categorical.from_codes( - [0, 1], ordered=True, dtype=CategoricalDtype(["a", "b"]) - ) - - def test_from_codes_neither(self): - msg = "Both were None" - with pytest.raises(ValueError, match=msg): - Categorical.from_codes([0, 1]) - - def test_from_codes_with_nullable_int(self): - codes = pd.array([0, 1], dtype="Int64") - categories = ["a", "b"] - - result = Categorical.from_codes(codes, categories=categories) - expected = Categorical.from_codes(codes.to_numpy(int), categories=categories) - - tm.assert_categorical_equal(result, expected) - - def test_from_codes_with_nullable_int_na_raises(self): - codes = pd.array([0, None], dtype="Int64") - categories = ["a", "b"] - - msg = "codes cannot contain NA values" - with pytest.raises(ValueError, match=msg): - Categorical.from_codes(codes, categories=categories) - - @pytest.mark.parametrize("dtype", [None, "category"]) - def test_from_inferred_categories(self, dtype): - cats = ["a", "b"] - codes = np.array([0, 0, 1, 1], dtype="i8") - result = Categorical._from_inferred_categories(cats, codes, dtype) - expected = Categorical.from_codes(codes, cats) - tm.assert_categorical_equal(result, expected) - - @pytest.mark.parametrize("dtype", [None, "category"]) - def test_from_inferred_categories_sorts(self, dtype): - cats = ["b", "a"] - codes = np.array([0, 1, 1, 1], dtype="i8") - result = Categorical._from_inferred_categories(cats, codes, dtype) - expected = Categorical.from_codes([1, 0, 0, 0], ["a", "b"]) - tm.assert_categorical_equal(result, expected) - - def test_from_inferred_categories_dtype(self): - cats = ["a", "b", "d"] - codes = np.array([0, 1, 0, 2], dtype="i8") - dtype = CategoricalDtype(["c", "b", "a"], ordered=True) - result = Categorical._from_inferred_categories(cats, codes, dtype) - expected = Categorical( - ["a", "b", "a", "d"], categories=["c", "b", "a"], ordered=True - ) - tm.assert_categorical_equal(result, expected) - - def test_from_inferred_categories_coerces(self): - cats = ["1", "2", "bad"] - codes = np.array([0, 0, 1, 2], dtype="i8") - dtype = CategoricalDtype([1, 2]) - result = Categorical._from_inferred_categories(cats, codes, dtype) - expected = Categorical([1, 1, 2, np.nan]) - tm.assert_categorical_equal(result, expected) - - @pytest.mark.parametrize("ordered", [None, True, False]) - def test_construction_with_ordered(self, ordered): - # GH 9347, 9190 - cat = Categorical([0, 1, 2], ordered=ordered) - assert cat.ordered == bool(ordered) - - def test_constructor_imaginary(self): - values = [1, 2, 3 + 1j] - c1 = Categorical(values) - tm.assert_index_equal(c1.categories, Index(values)) - tm.assert_numpy_array_equal(np.array(c1), np.array(values)) - - def test_constructor_string_and_tuples(self): - # GH 21416 - c = Categorical(np.array(["c", ("a", "b"), 
("b", "a"), "c"], dtype=object)) - expected_index = Index([("a", "b"), ("b", "a"), "c"]) - assert c.categories.equals(expected_index) - - def test_interval(self): - idx = pd.interval_range(0, 10, periods=10) - cat = Categorical(idx, categories=idx) - expected_codes = np.arange(10, dtype="int8") - tm.assert_numpy_array_equal(cat.codes, expected_codes) - tm.assert_index_equal(cat.categories, idx) - - # infer categories - cat = Categorical(idx) - tm.assert_numpy_array_equal(cat.codes, expected_codes) - tm.assert_index_equal(cat.categories, idx) - - # list values - cat = Categorical(list(idx)) - tm.assert_numpy_array_equal(cat.codes, expected_codes) - tm.assert_index_equal(cat.categories, idx) - - # list values, categories - cat = Categorical(list(idx), categories=list(idx)) - tm.assert_numpy_array_equal(cat.codes, expected_codes) - tm.assert_index_equal(cat.categories, idx) - - # shuffled - values = idx.take([1, 2, 0]) - cat = Categorical(values, categories=idx) - tm.assert_numpy_array_equal(cat.codes, np.array([1, 2, 0], dtype="int8")) - tm.assert_index_equal(cat.categories, idx) - - # extra - values = pd.interval_range(8, 11, periods=3) - cat = Categorical(values, categories=idx) - expected_codes = np.array([8, 9, -1], dtype="int8") - tm.assert_numpy_array_equal(cat.codes, expected_codes) - tm.assert_index_equal(cat.categories, idx) - - # overlapping - idx = IntervalIndex([Interval(0, 2), Interval(0, 1)]) - cat = Categorical(idx, categories=idx) - expected_codes = np.array([0, 1], dtype="int8") - tm.assert_numpy_array_equal(cat.codes, expected_codes) - tm.assert_index_equal(cat.categories, idx) - - def test_categorical_extension_array_nullable(self, nulls_fixture): - # GH: - arr = pd.arrays.StringArray._from_sequence([nulls_fixture] * 2) - result = Categorical(arr) - assert arr.dtype == result.categories.dtype - expected = Categorical(Series([pd.NA, pd.NA], dtype=arr.dtype)) - tm.assert_categorical_equal(result, expected) - - def test_from_sequence_copy(self): - cat = Categorical(np.arange(5).repeat(2)) - result = Categorical._from_sequence(cat, dtype=None, copy=False) - - # more generally, we'd be OK with a view - assert result._codes is cat._codes - - result = Categorical._from_sequence(cat, dtype=None, copy=True) - - assert not tm.shares_memory(result, cat) - - def test_constructor_datetime64_non_nano(self): - categories = np.arange(10).view("M8[D]") - values = categories[::2].copy() - - cat = Categorical(values, categories=categories) - assert (cat == values).all() - - def test_constructor_preserves_freq(self): - # GH33830 freq retention in categorical - dti = date_range("2016-01-01", periods=5) - - expected = dti.freq - - cat = Categorical(dti) - result = cat.categories.freq - - assert expected == result diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/tenacity/_asyncio.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/tenacity/_asyncio.py deleted file mode 100644 index 0f32b5f6207441753482e8b24e0f4ff10c5614d8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/tenacity/_asyncio.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright 2016 Étienne Bersac -# Copyright 2016 Julien Danjou -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import functools -import sys -import typing -from asyncio import sleep - -from pip._vendor.tenacity import AttemptManager -from pip._vendor.tenacity import BaseRetrying -from pip._vendor.tenacity import DoAttempt -from pip._vendor.tenacity import DoSleep -from pip._vendor.tenacity import RetryCallState - -WrappedFn = typing.TypeVar("WrappedFn", bound=typing.Callable) -_RetValT = typing.TypeVar("_RetValT") - - -class AsyncRetrying(BaseRetrying): - def __init__(self, sleep: typing.Callable[[float], typing.Awaitable] = sleep, **kwargs: typing.Any) -> None: - super().__init__(**kwargs) - self.sleep = sleep - - async def __call__( # type: ignore # Change signature from supertype - self, - fn: typing.Callable[..., typing.Awaitable[_RetValT]], - *args: typing.Any, - **kwargs: typing.Any, - ) -> _RetValT: - self.begin() - - retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs) - while True: - do = self.iter(retry_state=retry_state) - if isinstance(do, DoAttempt): - try: - result = await fn(*args, **kwargs) - except BaseException: # noqa: B902 - retry_state.set_exception(sys.exc_info()) - else: - retry_state.set_result(result) - elif isinstance(do, DoSleep): - retry_state.prepare_for_next_attempt() - await self.sleep(do) - else: - return do - - def __aiter__(self) -> "AsyncRetrying": - self.begin() - self._retry_state = RetryCallState(self, fn=None, args=(), kwargs={}) - return self - - async def __anext__(self) -> typing.Union[AttemptManager, typing.Any]: - while True: - do = self.iter(retry_state=self._retry_state) - if do is None: - raise StopAsyncIteration - elif isinstance(do, DoAttempt): - return AttemptManager(retry_state=self._retry_state) - elif isinstance(do, DoSleep): - self._retry_state.prepare_for_next_attempt() - await self.sleep(do) - else: - return do - - def wraps(self, fn: WrappedFn) -> WrappedFn: - fn = super().wraps(fn) - # Ensure wrapper is recognized as a coroutine function. - - @functools.wraps(fn) - async def async_wrapped(*args: typing.Any, **kwargs: typing.Any) -> typing.Any: - return await fn(*args, **kwargs) - - # Preserve attributes - async_wrapped.retry = fn.retry - async_wrapped.retry_with = fn.retry_with - - return async_wrapped diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette-0.27.0.dist-info/licenses/LICENSE.md b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette-0.27.0.dist-info/licenses/LICENSE.md deleted file mode 100644 index d16a60ec5b9963ef86b35a52ac92227014618e6c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/starlette-0.27.0.dist-info/licenses/LICENSE.md +++ /dev/null @@ -1,27 +0,0 @@ -Copyright © 2018, [Encode OSS Ltd](https://www.encode.io/). -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, this - list of conditions and the following disclaimer. 
- -* Redistributions in binary form must reproduce the above copyright notice, - this list of conditions and the following disclaimer in the documentation - and/or other materials provided with the distribution. - -* Neither the name of the copyright holder nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE -FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL -DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, -OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/spaces/pseudolab/KOMUChat/app.py b/spaces/pseudolab/KOMUChat/app.py deleted file mode 100644 index 90645557642c0328968e8dc8d8d0d651dbec6322..0000000000000000000000000000000000000000 --- a/spaces/pseudolab/KOMUChat/app.py +++ /dev/null @@ -1,70 +0,0 @@ -import gradio as gr -from gradio.themes.utils import colors -from t5 import T5 -from koalpaca import KoAlpaca - -LOCAL_TEST = False -MODEL_STRS = ['T5', 'KoAlpaca'] -MODELS = [] - -def prepare_theme(): - theme = gr.themes.Default(primary_hue=colors.gray, - secondary_hue=colors.emerald, - neutral_hue=colors.emerald).set( - body_background_fill="*primary_800", - body_background_fill_dark="*primary_800", - - block_background_fill="*primary_700", - block_background_fill_dark="*primary_700", - - border_color_primary="*secondary_300", - border_color_primary_dark="*secondary_300", - - block_border_width="3px", - input_border_width="2px", - - input_background_fill="*primary_700", - input_background_fill_dark="*primary_700", - - background_fill_primary="*neutral_950", - background_fill_primary_dark="*neutral_950", - - background_fill_secondary="*primary_700", - background_fill_secondary_dark="*primary_700", - - body_text_color="white", - body_text_color_dark="white", - - block_label_text_color="*secondary_300", - block_label_text_color_dark="*secondary_300", - - block_label_background_fill="*primary_800", - block_label_background_fill_dark="*primary_800", - - color_accent_soft="*primary_600", - color_accent_soft_dark="*primary_600", - ) - return theme - -if __name__=='__main__': - theme = prepare_theme() - with open('README.txt', 'r') as f: - readme = f.read() - - MODELS.append(T5()) - MODELS[0].placeholder = '연애 관련 질문을 입력하세요!' - if not LOCAL_TEST: - MODELS.append(KoAlpaca()) - MODELS[1].placeholder = '연애 관련 질문을 입력하세요. (KoAlpaca는 추론 시 1분 이상 소요됩니다!)' - - with gr.Blocks(theme=prepare_theme()) as demo: - gr.HTML("

                  KOMUChat : Korean community-style relationship counseling chatbot

                  ") - with gr.Tab("소개"): - gr.Markdown(readme) - for i in range(len(MODELS)): - with gr.Tab(MODEL_STRS[i]): - chatbot = gr.Chatbot(label=MODEL_STRS[i], bubble_full_width=False) - txt = gr.Textbox(show_label=False, placeholder=MODELS[i].placeholder, container=False, elem_id=i) - txt.submit(MODELS[i].chat, [txt, chatbot], [txt, chatbot]) - - demo.launch(debug=True, share=True) \ No newline at end of file diff --git a/spaces/pyodide-demo/self-hosted/distutils.js b/spaces/pyodide-demo/self-hosted/distutils.js deleted file mode 100644 index 8f67ba4c1cdfe693e245a88405a48ccb2ece09a2..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/distutils.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="build/distutils.data";var REMOTE_PACKAGE_BASE="distutils.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... 
("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","distutils",true,true);Module["FS_createPath"]("/lib/python3.9/distutils","command",true,true);Module["FS_createPath"]("/lib/python3.9/distutils","tests",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:511906,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1431,2734,4250,5542,6414,7564,8576,9576,10641,11802,13008,14317,15328,16601,18056,19142,20220,21220,22456,23639,24712,25994,27458,28598,29994,31213,32429,33346,34555,35651,36646,37764,39082,40370,41623,42925,43507,44672,45962,46937,48215,49432,50810,52253,53658,55082,56387,57578,58971,59978,61138,62337,63450,64871,65933,66993,68336,69669,70818,72098,73504,74718,75866,77174,78405,79621,80936,82328,83689,85055,86536,87892,89111,90498,92039,93033,94370,95704,96996,98113,99273,100316,101565,102873,104136,105141,106394,107435,108545,109414,110537,111686,112843,113969,114990,115860,116835,117565,118865,120050,121555,122777,123711,124980,125839,127284,128541,129655,130740,131945,133140,134342,135585,137041,138156,139669,141008,142168,143484,144503,145041,146404,147575,148912,150058,151404,152459,153768,155059,156385,157744,158835,159862,160761,161737,162620,163721,165051,166111,167386,168510,169714,171083,172290,173183,174178,175172,176122,177105,178300,179550,180947,182156,183563,184963,186213,187475,188444,189696,191055,192113,193569,194905,196308,197621,198848,200076,201402,202470,203692,205139,206382,207506,208565,209908,211156,212290,213653,214996,216307,217742,219069,220531,221915,223245,224534,225849,227244,228379,229490,230767,232373,233837,235148,236202,237432,238577,239666,240886,241891,242890,244238,245004,246028,246947,248137,249329,250383,251270,252361,253518,254302,255411,256303,257395,258464,259673,260736,261878,263020,263892,264842,265921,267086,268272,269342,270595,271674,272773,273840,274955,276028,277281,278479,279598,280805,281964,283073,284238,285325,286634,287640,288611,289718,291152,292165,293176,294226,295276,296495,297657,298687,299790,300981,302179,303372,304677,305920,307201,308354,309687,310892,312058,313264,314464,315517,316705,317938,319115,320028,321108,322307,323412,324519,325632,326949,328041,329130,330376,331111,332372,333568,334610,335749,336959,338332,339498,340475,341736,342646,343691,344726,345949,346976,348040,349119,350405,351597,35277
3,353793,355006,356038,357241,358367,359538,360732,361669,362767,363988,365202,366349,367585,368734,369747,370884,372141,373470,374734,375902,377054,378267,379564,380773,382094,383424,384784,386079,386987,388114,388981,389977,390932,391688,392902,394123,395440,396651,397777,398962,400263,401291,402103,403438,404587,405711,406689,407454,408445,409519,410474,411332,412555,413726,414843,415796,416984,417907,419155,420190,421212,422408,423398,424346,425521,426613,427778,428896,429984,431127,432286,433078,434026,435111,436006,437201,438232,439029,440005,441094,441774,442463,443105,444174,445338,446551,447724,448632,449893,450797,451706,452474,452966,453846,455036,456033,456864,457849,459015,460008,461103,461938,462942,464119,465377,466259,467328,468498,469585,470755,471906,472870,473826,475080,476172,477237,478273,479416,480629,481639,482653,483713,484717,485348,486501,487603,488447,489164,490204,491503,492324,493372,493849,494897,496032,497320,498505,499622,500699,501119,502168,503108,504294,505041,506143,507359,508046,509127,509880,510603],sizes:[1431,1303,1516,1292,872,1150,1012,1e3,1065,1161,1206,1309,1011,1273,1455,1086,1078,1e3,1236,1183,1073,1282,1464,1140,1396,1219,1216,917,1209,1096,995,1118,1318,1288,1253,1302,582,1165,1290,975,1278,1217,1378,1443,1405,1424,1305,1191,1393,1007,1160,1199,1113,1421,1062,1060,1343,1333,1149,1280,1406,1214,1148,1308,1231,1216,1315,1392,1361,1366,1481,1356,1219,1387,1541,994,1337,1334,1292,1117,1160,1043,1249,1308,1263,1005,1253,1041,1110,869,1123,1149,1157,1126,1021,870,975,730,1300,1185,1505,1222,934,1269,859,1445,1257,1114,1085,1205,1195,1202,1243,1456,1115,1513,1339,1160,1316,1019,538,1363,1171,1337,1146,1346,1055,1309,1291,1326,1359,1091,1027,899,976,883,1101,1330,1060,1275,1124,1204,1369,1207,893,995,994,950,983,1195,1250,1397,1209,1407,1400,1250,1262,969,1252,1359,1058,1456,1336,1403,1313,1227,1228,1326,1068,1222,1447,1243,1124,1059,1343,1248,1134,1363,1343,1311,1435,1327,1462,1384,1330,1289,1315,1395,1135,1111,1277,1606,1464,1311,1054,1230,1145,1089,1220,1005,999,1348,766,1024,919,1190,1192,1054,887,1091,1157,784,1109,892,1092,1069,1209,1063,1142,1142,872,950,1079,1165,1186,1070,1253,1079,1099,1067,1115,1073,1253,1198,1119,1207,1159,1109,1165,1087,1309,1006,971,1107,1434,1013,1011,1050,1050,1219,1162,1030,1103,1191,1198,1193,1305,1243,1281,1153,1333,1205,1166,1206,1200,1053,1188,1233,1177,913,1080,1199,1105,1107,1113,1317,1092,1089,1246,735,1261,1196,1042,1139,1210,1373,1166,977,1261,910,1045,1035,1223,1027,1064,1079,1286,1192,1176,1020,1213,1032,1203,1126,1171,1194,937,1098,1221,1214,1147,1236,1149,1013,1137,1257,1329,1264,1168,1152,1213,1297,1209,1321,1330,1360,1295,908,1127,867,996,955,756,1214,1221,1317,1211,1126,1185,1301,1028,812,1335,1149,1124,978,765,991,1074,955,858,1223,1171,1117,953,1188,923,1248,1035,1022,1196,990,948,1175,1092,1165,1118,1088,1143,1159,792,948,1085,895,1195,1031,797,976,1089,680,689,642,1069,1164,1213,1173,908,1261,904,909,768,492,880,1190,997,831,985,1166,993,1095,835,1004,1177,1258,882,1069,1170,1087,1170,1151,964,956,1254,1092,1065,1036,1143,1213,1010,1014,1060,1004,631,1153,1102,844,717,1040,1299,821,1048,477,1048,1135,1288,1185,1117,1077,420,1049,940,1186,747,1102,1216,687,1081,753,723,1303],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 ?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_build/distutils.data")}Module["addRunDependency"]("datafile_build/distutils.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/distutils/README",start:0,end:242,audio:0},{filename:"/lib/python3.9/distutils/__init__.py",start:242,end:478,audio:0},{filename:"/lib/python3.9/distutils/_msvccompiler.py",start:478,end:20485,audio:0},{filename:"/lib/python3.9/distutils/archive_util.py",start:20485,end:29057,audio:0},{filename:"/lib/python3.9/distutils/bcppcompiler.py",start:29057,end:43951,audio:0},{filename:"/lib/python3.9/distutils/ccompiler.py",start:43951,end:91368,audio:0},{filename:"/lib/python3.9/distutils/cmd.py",start:91368,end:109447,audio:0},{filename:"/lib/python3.9/distutils/config.py",start:109447,end:114274,audio:0},{filename:"/lib/python3.9/distutils/core.py",start:114274,end:123150,audio:0},{filename:"/lib/python3.9/distutils/cygwinccompiler.py",start:123150,end:139530,audio:0},{filename:"/lib/python3.9/distutils/debug.py",start:139530,end:139669,audio:0},{filename:"/lib/python3.9/distutils/dep_util.py",start:139669,end:143160,audio:0},{filename:"/lib/python3.9/distutils/dir_util.py",start:143160,end:150938,audio:0},{filename:"/lib/python3.9/distutils/dist.py",start:150938,end:201323,audio:0},{filename:"/lib/python3.9/distutils/errors.py",start:201323,end:204900,audio:0},{filename:"/lib/python3.9/distutils/extension.py",start:204900,end:215415,audio:0},{filename:"/lib/python3.9/distutils/fancy_getopt.py",start:215415,end:233199,audio:0},{filename:"/lib/python3.9/distutils/file_util.py",start:233199,end:241347,audio:0},{filename:"/lib/python3.9/distutils/filelist.py",start:241347,end:254179,audio:0},{filename:"/lib/python3.9/distutils/log.py",start:254179,end:256148,audio:0},{filename:"/lib/python3.9/distutils/msvc9compiler.py",start:256148,end:286601,audio:0},{filename:"/lib/python3.9/distutils/msvccompiler.py",start:286601,end:310141,audio:0},{filename:"/lib/python3.9/distutils/spawn.py",start:310141,end:314533,audio:0},{filename:"/lib/python3.9/distutils/sysconfig.py",start:314533,end:335165,audio:0},{filename:"/lib/python3.9/distutils/text_file.py",start:335165,end:347648,audio:0},{filename:"/lib/python3.9/distutils/unixccompiler.py",start:347648,end:362402,audio:0},{filename:"/lib/python3.9/distutils/util.py",start:362402,end:383315,audio:0},{filename:"/lib/python3.9/distutils/version.py",start:383315,end:395829,audio:0},{filename:"/lib/python3.9/distuti
ls/versionpredicate.py",start:395829,end:400962,audio:0},{filename:"/lib/python3.9/distutils/command/__init__.py",start:400962,end:401761,audio:0},{filename:"/lib/python3.9/distutils/command/bdist.py",start:401761,end:407323,audio:0},{filename:"/lib/python3.9/distutils/command/bdist_dumb.py",start:407323,end:412236,audio:0},{filename:"/lib/python3.9/distutils/command/bdist_msi.py",start:412236,end:447815,audio:0},{filename:"/lib/python3.9/distutils/command/bdist_rpm.py",start:447815,end:469352,audio:0},{filename:"/lib/python3.9/distutils/command/bdist_wininst.py",start:469352,end:485382,audio:0},{filename:"/lib/python3.9/distutils/command/build.py",start:485382,end:491149,audio:0},{filename:"/lib/python3.9/distutils/command/build_clib.py",start:491149,end:499171,audio:0},{filename:"/lib/python3.9/distutils/command/build_ext.py",start:499171,end:530806,audio:0},{filename:"/lib/python3.9/distutils/command/build_py.py",start:530806,end:547996,audio:0},{filename:"/lib/python3.9/distutils/command/build_scripts.py",start:547996,end:554228,audio:0},{filename:"/lib/python3.9/distutils/command/check.py",start:554228,end:559865,audio:0},{filename:"/lib/python3.9/distutils/command/clean.py",start:559865,end:562641,audio:0},{filename:"/lib/python3.9/distutils/command/command_template",start:562641,end:563274,audio:0},{filename:"/lib/python3.9/distutils/command/config.py",start:563274,end:576391,audio:0},{filename:"/lib/python3.9/distutils/command/install.py",start:576391,end:603196,audio:0},{filename:"/lib/python3.9/distutils/command/install_data.py",start:603196,end:606018,audio:0},{filename:"/lib/python3.9/distutils/command/install_egg_info.py",start:606018,end:608621,audio:0},{filename:"/lib/python3.9/distutils/command/install_headers.py",start:608621,end:609919,audio:0},{filename:"/lib/python3.9/distutils/command/install_lib.py",start:609919,end:618316,audio:0},{filename:"/lib/python3.9/distutils/command/install_scripts.py",start:618316,end:620333,audio:0},{filename:"/lib/python3.9/distutils/command/register.py",start:620333,end:632045,audio:0},{filename:"/lib/python3.9/distutils/command/sdist.py",start:632045,end:651050,audio:0},{filename:"/lib/python3.9/distutils/command/upload.py",start:651050,end:658647,audio:0},{filename:"/lib/python3.9/distutils/tests/Setup.sample",start:658647,end:660896,audio:0},{filename:"/lib/python3.9/distutils/tests/__init__.py",start:660896,end:662240,audio:0},{filename:"/lib/python3.9/distutils/tests/includetest.rst",start:662240,end:662265,audio:0},{filename:"/lib/python3.9/distutils/tests/support.py",start:662265,end:668743,audio:0},{filename:"/lib/python3.9/distutils/tests/test_archive_util.py",start:668743,end:683044,audio:0},{filename:"/lib/python3.9/distutils/tests/test_bdist.py",start:683044,end:684937,audio:0},{filename:"/lib/python3.9/distutils/tests/test_bdist_dumb.py",start:684937,end:687842,audio:0},{filename:"/lib/python3.9/distutils/tests/test_bdist_msi.py",start:687842,end:688645,audio:0},{filename:"/lib/python3.9/distutils/tests/test_bdist_rpm.py",start:688645,end:693653,audio:0},{filename:"/lib/python3.9/distutils/tests/test_bdist_wininst.py",start:693653,end:695043,audio:0},{filename:"/lib/python3.9/distutils/tests/test_build.py",start:695043,end:697008,audio:0},{filename:"/lib/python3.9/distutils/tests/test_build_clib.py",start:697008,end:701639,audio:0},{filename:"/lib/python3.9/distutils/tests/test_build_ext.py",start:701639,end:722272,audio:0},{filename:"/lib/python3.9/distutils/tests/test_build_py.py",start:722272,end:728607,audio:0},{filename:"
/lib/python3.9/distutils/tests/test_build_scripts.py",start:728607,end:732200,audio:0},{filename:"/lib/python3.9/distutils/tests/test_check.py",start:732200,end:737911,audio:0},{filename:"/lib/python3.9/distutils/tests/test_clean.py",start:737911,end:739352,audio:0},{filename:"/lib/python3.9/distutils/tests/test_cmd.py",start:739352,end:743187,audio:0},{filename:"/lib/python3.9/distutils/tests/test_config.py",start:743187,end:747079,audio:0},{filename:"/lib/python3.9/distutils/tests/test_config_cmd.py",start:747079,end:750102,audio:0},{filename:"/lib/python3.9/distutils/tests/test_core.py",start:750102,end:754179,audio:0},{filename:"/lib/python3.9/distutils/tests/test_cygwinccompiler.py",start:754179,end:759815,audio:0},{filename:"/lib/python3.9/distutils/tests/test_dep_util.py",start:759815,end:762635,audio:0},{filename:"/lib/python3.9/distutils/tests/test_dir_util.py",start:762635,end:767289,audio:0},{filename:"/lib/python3.9/distutils/tests/test_dist.py",start:767289,end:786369,audio:0},{filename:"/lib/python3.9/distutils/tests/test_extension.py",start:786369,end:789137,audio:0},{filename:"/lib/python3.9/distutils/tests/test_file_util.py",start:789137,end:793550,audio:0},{filename:"/lib/python3.9/distutils/tests/test_filelist.py",start:793550,end:805025,audio:0},{filename:"/lib/python3.9/distutils/tests/test_install.py",start:805025,end:813637,audio:0},{filename:"/lib/python3.9/distutils/tests/test_install_data.py",start:813637,end:816214,audio:0},{filename:"/lib/python3.9/distutils/tests/test_install_headers.py",start:816214,end:817452,audio:0},{filename:"/lib/python3.9/distutils/tests/test_install_lib.py",start:817452,end:821426,audio:0},{filename:"/lib/python3.9/distutils/tests/test_install_scripts.py",start:821426,end:824051,audio:0},{filename:"/lib/python3.9/distutils/tests/test_log.py",start:824051,end:825915,audio:0},{filename:"/lib/python3.9/distutils/tests/test_msvc9compiler.py",start:825915,end:831953,audio:0},{filename:"/lib/python3.9/distutils/tests/test_msvccompiler.py",start:831953,end:834798,audio:0},{filename:"/lib/python3.9/distutils/tests/test_register.py",start:834798,end:844563,audio:0},{filename:"/lib/python3.9/distutils/tests/test_sdist.py",start:844563,end:861610,audio:0},{filename:"/lib/python3.9/distutils/tests/test_spawn.py",start:861610,end:867070,audio:0},{filename:"/lib/python3.9/distutils/tests/test_sysconfig.py",start:867070,end:878115,audio:0},{filename:"/lib/python3.9/distutils/tests/test_text_file.py",start:878115,end:881551,audio:0},{filename:"/lib/python3.9/distutils/tests/test_unixccompiler.py",start:881551,end:886179,audio:0},{filename:"/lib/python3.9/distutils/tests/test_upload.py",start:886179,end:893318,audio:0},{filename:"/lib/python3.9/distutils/tests/test_util.py",start:893318,end:904890,audio:0},{filename:"/lib/python3.9/distutils/tests/test_version.py",start:904890,end:908340,audio:0},{filename:"/lib/python3.9/distutils/tests/test_versionpredicate.py",start:908340,end:908620,audio:0},{filename:"/lib/python3.9/distutils/tests/xxmodule.c",start:908620,end:921535,audio:0}],remote_package_size:516002,package_uuid:"1f6ab67c-38cc-4639-b35f-5ea6ac86747e"})})(); \ No newline at end of file diff --git a/spaces/qinzhu/diy-girlfriend/modules.py b/spaces/qinzhu/diy-girlfriend/modules.py deleted file mode 100644 index 3484f6a1f4c1c06855c37a1ff4e66c58864acb38..0000000000000000000000000000000000000000 --- a/spaces/qinzhu/diy-girlfriend/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from 
torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dilated and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = 
torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - 
def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, 
filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/qtoino/form_matcher/README.md b/spaces/qtoino/form_matcher/README.md deleted file mode 100644 index f0dc1f9bcddadf96a6856b06e8c16d90870c7c41..0000000000000000000000000000000000000000 --- a/spaces/qtoino/form_matcher/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Form Matcher -emoji: 📊 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/quidiaMuxgu/Expedit-SAM/2nzfeecupinoutpdf186.md b/spaces/quidiaMuxgu/Expedit-SAM/2nzfeecupinoutpdf186.md deleted file mode 100644 index 36d3e9d07cc78f2725d64795d1d402bb34454a61..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/2nzfeecupinoutpdf186.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  2nzfeecupinoutpdf186


                  Download ->>->>->> https://geags.com/2uCrja



                  - -2nzfeecupinoutpdf186 · Din 16742 Pdf 37 · Ancestors Legacy Saladins Conquest Update Build 63982-CODEX · Scrubs Download Legendado ... 1fdad05405
                  -
                  -
                  -

                  diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Adobe.acrobat.pro.x.v10.0.multilingual.incl.keymaker-core 121.md b/spaces/quidiaMuxgu/Expedit-SAM/Adobe.acrobat.pro.x.v10.0.multilingual.incl.keymaker-core 121.md deleted file mode 100644 index 830e43ce5727550d1ebff33fbdb828b01722f7bd..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Adobe.acrobat.pro.x.v10.0.multilingual.incl.keymaker-core 121.md +++ /dev/null @@ -1,40 +0,0 @@ -

                  Adobe.acrobat.pro.x.v10.0.multilingual.incl.keymaker-core 121


Download https://geags.com/2uCrvE



                  - -com. - -As long as you remain secure, your personal data will only be stored in our secure system.Detection of Retinoblastoma Tumor Viral Genome in Adjacent Tissue with Nested Polymerase Chain Reaction. - -Retinoblastoma is a childhood neoplasia that often presents with intraocular invasion. Even when not localized to the retina, the tumor can give rise to a carcinoma, and be associated with metastasis. We present a patient with metastatic retinoblastoma with evidence of an extraneural retinoblastoma. Analysis of the patient's tumors revealed the presence of a viral genome consistent with the Alu-Jockey-Alu-like region of human endogenous retrovirus K. This case underscores the importance of the inclusion of possible viral contamination when working with materials obtained from the central nervous system and provides an example of the utility of nested polymerase chain reaction.Q: - -How do I insert some text in the error message in Django - -How do I add some text to the error message, which is displayed when the user enters invalid data in the input field. I mean, if the user enters a zip code, he should be warned that he entered a wrong data, and the correct data should be filled in the error message. - -A: - -According to documentation you can override template_name and change your template for the given form. - -The default template for every form is form.html and you can find it in /apps/your_project/your_app/templates/base.html. - -So for example if you want to change the validation message for email form, you can change the template inside base.html with your form.html. The default template for the email form is email/form.html. - -% extends 'email/base.html' % - -% block title %Register% endblock % - -% block content % - - % block heading %% endblock % - - % if form.errors % - - % block error_0 %% endblock % - - form.errors - - % endif % - -
                  -
                  -
                  -

                  diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Hindi Font Walkman Chanakya 905 Free Download WORK.md b/spaces/quidiaMuxgu/Expedit-SAM/Hindi Font Walkman Chanakya 905 Free Download WORK.md deleted file mode 100644 index bb9d5137eb1e50a4e52f6292fe8014d8aecc9029..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Hindi Font Walkman Chanakya 905 Free Download WORK.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  Hindi Font Walkman Chanakya 905 Free Download


                  Download Zip ····· https://geags.com/2uCqE7



                  -
                  -... hindi font for windows 7, walkman chanakya 905 normal hindi font free download, walkman chanakya 901 normal hindi font free download, ... 1fdad05405
                  -
                  -
                  -

                  diff --git a/spaces/quidiaMuxgu/Expedit-SAM/L Ull De La Momia.pdfl.md b/spaces/quidiaMuxgu/Expedit-SAM/L Ull De La Momia.pdfl.md deleted file mode 100644 index 90a309ef856b12445bbd5148a318cda44041c229..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/L Ull De La Momia.pdfl.md +++ /dev/null @@ -1,11 +0,0 @@ - -

L'ull de la mòmia: an adventure and mystery novel set in pharaonic Egypt

                  -

L'ull de la mòmia is a novel written by Jesús Cortés and illustrated by Francesc Santana, published by Edicions Bromera in 2000. It is a work of fiction that combines elements of adventure and mystery literature, with references to the culture and history of ancient Egypt.

                  -

The story follows the adventures of a group of young people who get caught up in a dangerous escapade when one of them, Max, touches the mummy of the pharaoh Amenemhet III during a museum visit and is struck by a supposed curse. To rid themselves of the hex, they must follow the instructions of Manel's grandfather, a retired Egyptologist who accompanies them on their journey. The protagonists travel through several countries and face a fearsome antiquities trafficker who has stolen the mummy and wants to exploit its powerful crystal eye.

                  -

                  L Ull De La Momia.pdfl


                  Download Zip ✏ ✏ ✏ https://geags.com/2uCs9o



                  -

L'ull de la mòmia captures readers' interest from the very first moment, thanks to the brisk, fast-moving pace of the narrative, the characters' funny, colloquial dialogue and the detailed, evocative descriptions of its settings. The book also has educational value, since it introduces important aspects of Egyptian civilization, such as its gods, pharaohs, monuments, customs and myths. L'ull de la mòmia is, in short, a literary proposal that combines entertainment and culture, fantasy and reality, humour and suspense.

                  - -

The author of the novel, Jesús Cortés, is a Valencian writer and teacher who has published several works of children's and young-adult literature, as well as fiction and essays. His books include titles such as Els ulls de la nit (1998), El secret de la piràmide (2002), La llum de l'atzar (2004), Els secrets del faraó (2006) and La ciutat dels somnis (2010). Cortés has received several literary prizes, including the Premi Samaruc de Literatura Infantil i Juvenil, the Premi Bancaixa de Narrativa Juvenil and the Premi Alfons el Magnànim de Narrativa. He has also taken part in educational and cultural cooperation projects with countries such as Morocco, Senegal and Peru.

                  -

L'ull de la mòmia can be read as an entertaining adventure and mystery story, but it also offers a reflection on the values of friendship, solidarity, respect and tolerance. The characters learn to know and love a culture different from their own, to overcome their prejudices and fears, and to defend the historical and cultural heritage of humanity. L'ull de la mòmia is, ultimately, an invitation to travel the world and the imagination, to discover the secrets of the past and to enjoy reading.

                  d5da3c52bf
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Loslyf Magazine HOT.md b/spaces/quidiaMuxgu/Expedit-SAM/Loslyf Magazine HOT.md deleted file mode 100644 index 213a739352f40f0ceb5c0bd33c432cc5e818e670..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Loslyf Magazine HOT.md +++ /dev/null @@ -1,6 +0,0 @@ -

                  Loslyf magazine


                  Download Ziphttps://geags.com/2uCrwk



                  -
                  - 4d29de3e1b
                  -
                  -
                  -

                  diff --git a/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/models/__init__.py b/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/radames/aesthetic-style-nsfw-classifier/README.md b/spaces/radames/aesthetic-style-nsfw-classifier/README.md deleted file mode 100644 index d4c365de7670938ccb1d9eb627c0851f9593b411..0000000000000000000000000000000000000000 --- a/spaces/radames/aesthetic-style-nsfw-classifier/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Aesthetic Style Nsfw Classifier -emoji: 🌍 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/raedeXanto/academic-chatgpt-beta/AVG PC TuneUp Utilities 2019 Crack Keygen (Latest) Download A Comprehensive Guide to This Amazing Program.md b/spaces/raedeXanto/academic-chatgpt-beta/AVG PC TuneUp Utilities 2019 Crack Keygen (Latest) Download A Comprehensive Guide to This Amazing Program.md deleted file mode 100644 index dded7388849211744be4b116b0571932c74107bc..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/AVG PC TuneUp Utilities 2019 Crack Keygen (Latest) Download A Comprehensive Guide to This Amazing Program.md +++ /dev/null @@ -1,142 +0,0 @@ -
                  -

                  AVG PC TuneUp Utilities 2019 Crack Keygen (Latest) Download

                  -

                  If you are looking for a way to boost your PC performance, clean up your system, and fix common issues, you might want to check out AVG PC TuneUp Utilities 2019. This is a comprehensive software that offers a range of tools and features to optimize your PC and make it run faster, smoother, and more secure. In this article, we will review AVG PC TuneUp Utilities 2019 and show you how to download it for free using a crack keygen. We will also cover the main features, pros and cons, and FAQs of this software.

                  -

                  Introduction

                  -

                  AVG PC TuneUp Utilities 2019 is a product of AVG Technologies, a well-known company that provides antivirus and security software for personal and business use. AVG PC TuneUp Utilities 2019 is designed to help you improve your PC performance by cleaning up junk files, removing unwanted programs, fixing registry errors, updating outdated software, and more. It also has a user-friendly interface that allows you to customize your settings and preferences according to your needs.

                  -

                  AVG PC TuneUp Utilities 2019 Crack Keygen (Latest) Download


                  Download Filehttps://tinourl.com/2uL0pq



                  -

                  What is AVG PC TuneUp Utilities 2019?

                  -

                  AVG PC TuneUp Utilities 2019 is a software that helps you optimize your PC in various ways. It has four main categories of tools: Speed up, Clean up, Fix and update, and All functions. Each category has several sub-tools that perform specific tasks to enhance your PC performance. For example, you can use the Speed up tools to disable unnecessary programs and processes that slow down your PC, or use the Clean up tools to delete temporary files and browser data that take up disk space. You can also use the Fix and update tools to repair registry errors and update outdated software that can cause problems on your PC.

                  -

                  Why do you need AVG PC TuneUp Utilities 2019?

                  -

                  You might need AVG PC TuneUp Utilities 2019 if you experience any of the following issues on your PC:

                  -
                    -
                  • Your PC is slow or sluggish.
                  • -
                  • Your PC crashes or freezes frequently.
                  • -
                  • Your PC has low disk space or memory.
                  • -
                  • Your PC has a lot of junk files or duplicate files.
                  • -
                  • Your PC has a lot of unwanted programs or browser extensions.
                  • -
                  • Your PC has outdated or missing drivers or software.
                  • -
                  • Your PC has registry errors or broken shortcuts.
                  • -
                  -

                  By using AVG PC TuneUp Utilities 2019, you can solve these issues and improve your PC performance significantly. You can also enjoy other benefits such as:

                  -
                    -
                  • Better battery life for your laptop.
                  • -
                  • Faster boot time and shutdown time for your PC.
                  • -
                  • More disk space and memory for your files and programs.
                  • -
                  • More security and privacy for your online activities.
                  • -
                  • More customization and control over your PC settings.
                  • -
                  -

                  How to download AVG PC TuneUp Utilities 2019 Crack Keygen?

                  -

                  If you want to download AVG PC TuneUp Utilities 2019 for free, you can use a crack keygen that generates a valid license key for the software. A crack keygen is a tool that bypasses the activation process of the software and allows you to use it without paying for it. However, using a crack keygen is illegal and risky, as it may contain viruses or malware that can harm your PC or compromise your data. Therefore, we do not recommend using a crack keygen to download AVG PC TuneUp Utilities 2019.

                  -

                  The best way to download AVG PC TuneUp Utilities 2019 is to get it from the official website of AVG Technologies. You can get a free trial version for 30 days or buy the full version for $49.99 per year. The official website also offers customer support, updates, and guarantees for the software. You can also get discounts and offers if you buy other products from AVG Technologies.

                  -

                  Features of AVG PC TuneUp Utilities 2019

                  -

                  AVG PC TuneUp Utilities 2019 has a lot of features that can help you optimize your PC in different ways. Here are some of the main features:

                  -

                  Speed up your PC

                  -

                  The Speed up category of tools helps you boost your PC speed by disabling unnecessary programs and processes that consume CPU power and RAM. You can use these tools:

                  -

                  How to get AVG PC TuneUp Utilities 2019 Crack Keygen for free
                  -AVG PC TuneUp Utilities 2019 Crack Keygen full version download link
                  -AVG PC TuneUp Utilities 2019 Crack Keygen activation code generator
                  -AVG PC TuneUp Utilities 2019 Crack Keygen license key serial number
                  -AVG PC TuneUp Utilities 2019 Crack Keygen review and features
                  -AVG PC TuneUp Utilities 2019 Crack Keygen system requirements and compatibility
                  -AVG PC TuneUp Utilities 2019 Crack Keygen installation guide and troubleshooting
                  -AVG PC TuneUp Utilities 2019 Crack Keygen best alternative software
                  -AVG PC TuneUp Utilities 2019 Crack Keygen comparison with other tuneup tools
                  -AVG PC TuneUp Utilities 2019 Crack Keygen pros and cons
                  -AVG PC TuneUp Utilities 2019 Crack Keygen discount coupon code and offer
                  -AVG PC TuneUp Utilities 2019 Crack Keygen official website and support
                  -AVG PC TuneUp Utilities 2019 Crack Keygen latest update and patch notes
                  -AVG PC TuneUp Utilities 2019 Crack Keygen testimonials and user feedback
                  -AVG PC TuneUp Utilities 2019 Crack Keygen download speed and file size
                  -AVG PC TuneUp Utilities 2019 Crack Keygen malware scan and virus protection
                  -AVG PC TuneUp Utilities 2019 Crack Keygen backup and restore options
                  -AVG PC TuneUp Utilities 2019 Crack Keygen custom settings and preferences
                  -AVG PC TuneUp Utilities 2019 Crack Keygen tips and tricks to optimize your PC
                  -AVG PC TuneUp Utilities 2019 Crack Keygen lifetime access and guarantee
                  -Is AVG PC TuneUp Utilities 2019 Crack Keygen safe and legal to use
                  -How to uninstall AVG PC TuneUp Utilities 2019 Crack Keygen completely
                  -How to upgrade from AVG PC TuneUp Utilities 2018 to 2019 Crack Keygen
                  -How to fix AVG PC TuneUp Utilities 2019 Crack Keygen errors and bugs
                  -How to contact AVG PC TuneUp Utilities 2019 Crack Keygen customer service
                  -What is the difference between AVG PC TuneUp Utilities and AVG Antivirus
                  -How to use AVG PC TuneUp Utilities 2019 Crack Keygen with Windows 10
                  -How to speed up your PC with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to clean your registry with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to delete junk files with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to defrag your hard drive with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to boost your battery life with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to update your drivers with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to remove bloatware with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to disable startup programs with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to monitor your system performance with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to recover deleted files with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to encrypt your data with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to shred sensitive files with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to manage your browser extensions with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to fix broken shortcuts with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to find duplicate files with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to optimize your gaming mode with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to schedule automatic maintenance with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to check for software updates with AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to access the online help center of AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to join the community forum of AVG PC TuneUp Utilities 2019 Crack Keygen
                  -How to share your feedback about AVG PC TuneUp Utilities 2019 Crack Keygen

                  -

                  Turbo Mode

                  -

                  Turbo Mode is a feature that turns off over 70 background processes and services that are not essential for your current activity. For example, if you are playing a game or watching a movie, Turbo Mode will turn off Windows updates, antivirus scans, scheduled tasks, etc. This will free up more resources for your game or movie and make it run faster and smoother.

                  -

                  Program Deactivator

                  -

                  Program Deactivator is a feature that stops programs from running in the background when you are not using them. For example, if you have Skype or iTunes installed on your PC but you rarely use them, Program Deactivator will stop them from launching automatically when you start your PC. This will reduce the startup time and memory usage of your PC.

                  -

                  Startup Optimizer

                  -

                  Startup Optimizer is a feature that helps you manage the programs that run when you boot up your PC. You can see which programs are slowing down your startup time and disable them if you don't need them. You can also delay some programs from starting until after your PC is ready. This will make your boot time faster and smoother.
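To make the idea concrete, here is a small, read-only Python sketch (an illustration only, not part of AVG PC TuneUp) that lists the per-user startup entries Windows keeps under the registry Run key. It assumes Python 3 on Windows and only covers that one key, so startup-folder shortcuts and scheduled tasks are not shown:

    import winreg

    # Per-user startup entries; programs listed here launch at every logon.
    RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

    def list_startup_entries():
        """Print the name and command line of each registered startup program."""
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
            index = 0
            while True:
                try:
                    name, command, _value_type = winreg.EnumValue(key, index)
                except OSError:
                    break  # no more values under this key
                print(f"{name}: {command}")
                index += 1

    if __name__ == "__main__":
        list_startup_entries()

Seeing what is registered there is usually the first step before deciding what a tool like Startup Optimizer should disable or delay.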

                  -

                  Clean up your PC

                  -

                  The Clean up category of tools helps you clean up your disk space by deleting junk files, browser data, duplicate files, etc. You can use these tools:

                  -

                  Disk Cleaner

                  -

                  Disk Cleaner is a feature that scans your hard drive for temporary files, cache files, log files, etc. that are no longer needed by your system or programs. You can delete these files with one click and free up more disk space for your important files and programs.
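As a rough illustration of what such a scan involves (this is a generic sketch, not AVG's own code), the following Python snippet simply measures how much space the user's temporary folder is holding, without deleting anything:

    import os
    import tempfile

    def temp_usage(root=None):
        """Return (file_count, total_bytes) for the system temp directory."""
        root = root or tempfile.gettempdir()
        count, total = 0, 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                    count += 1
                except OSError:
                    pass  # file vanished or is locked; ignore it
        return count, total

    if __name__ == "__main__":
        files, size = temp_usage()
        print(f"{files} temp files, about {size / 1024 / 1024:.1f} MB reclaimable")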

                  -

                  Browser Cleaner

                  -

                  Browser Cleaner is a feature that cleans up your browser data such as history, cookies, cache, etc. that are stored on your disk by various browsers such as Chrome, Firefox, Edge, etc. You can delete these data with one click and free up more disk space as well as protect your privacy online.

                  -

                  Duplicate Finder

                  -

                  Duplicate Finder is a feature that helps you find and remove duplicate files on your hard drive such as photos, music, videos, documents, etc. You can scan for duplicate files based on name, size, content, etc. and delete them with one click. This will free up more disk space as well as organize your files better.
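Content-based duplicate detection generally comes down to comparing checksums: two files with the same hash are almost certainly copies. The sketch below shows the general technique in Python; it is only an illustration (the folder path is a placeholder, and it is not AVG's implementation), and it reports duplicates rather than deleting them:

    import hashlib
    import os
    from collections import defaultdict

    def sha256_of(path, chunk_size=1 << 20):
        """Hash a file in chunks so large files do not exhaust memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def find_duplicates(root):
        """Group files under root by checksum and keep only groups with copies."""
        groups = defaultdict(list)
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    groups[sha256_of(path)].append(path)
                except OSError:
                    continue  # unreadable file, skip it
        return {h: paths for h, paths in groups.items() if len(paths) > 1}

    if __name__ == "__main__":
        # Example folder; point this at whatever folder you want to check.
        for paths in find_duplicates(r"C:\Users\Public\Pictures").values():
            print("Duplicates:", *paths, sep="\n  ")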

                  -

                  Fix and update your PC

                  -

                  The Fix and update category of tools helps you fix common issues on your PC such as registry errors, broken shortcuts, outdated software etc. You can use these tools:

                  -

                  Registry Cleaner

                  -

                  Registry Cleaner is a feature that scans your Windows registry for invalid entries such as missing references, corrupted values etc. that can cause errors or crashes on your system or programs. You can fix these entries with one click and improve the stability and performance of your system.

                  -

                  Shortcut Cleaner

                  -

                  Shortcut Cleaner is a feature that scans your desktop and start menu for broken shortcuts such as missing icons, invalid paths etc. that can clutter up your screen or cause confusion. You can delete these shortcuts with one click and tidy up your desktop and start menu.

                  -

                  Software Updater

                  -

Software Updater is a feature that checks for outdated software on your system, such as drivers and applications, that can affect the security or functionality of your system or programs. You can update this software with one click and keep everything up to date and secure.

                  -

                  Pros and Cons of AVG PC TuneUp Utilities 2019

                  -

                  AVG PC TuneUp Utilities 2019 has many advantages and disadvantages that you should consider before using it. Here are some of them:

                  -

                  Pros

                  -
                    -
                  • It has a lot of tools and features that can help you optimize your PC in various ways.
                  • -
                  • It has a user-friendly interface that is easy to navigate and customize.
                  • -
                  • It has a free trial version that you can use for 30 days without any limitations.
                  • -
                  • It has a low impact on your system resources and does not slow down your PC.
                  • -
                  • It has a good customer support and update service that can help you with any issues or questions.
                  • -
                  -

                  Cons

                  -
                    -
                  • It is not free and you have to pay $49.99 per year to use the full version.
                  • -
                  • It may not be compatible with some older or newer versions of Windows or other software.
                  • -
                  • It may not be able to fix all the issues or errors on your PC or prevent them from happening again.
                  • -
                  • It may cause some conflicts or problems with other programs or settings on your PC.
                  • -
                  • It may contain some bugs or glitches that can affect its performance or functionality.
                  • -
                  -

                  Conclusion

                  -

                  In conclusion, AVG PC TuneUp Utilities 2019 is a comprehensive software that can help you improve your PC performance by cleaning up junk files, removing unwanted programs, fixing registry errors, updating outdated software, and more. It also has a user-friendly interface that allows you to customize your settings and preferences according to your needs. However, it is not free and you have to pay $49.99 per year to use the full version. It may also not be compatible with some older or newer versions of Windows or other software. It may also not be able to fix all the issues or errors on your PC or prevent them from happening again. It may also cause some conflicts or problems with other programs or settings on your PC. It may also contain some bugs or glitches that can affect its performance or functionality.

                  -

                  Therefore, we recommend that you try the free trial version for 30 days before buying the full version. You should also backup your data and create a restore point before using the software. You should also check for updates and customer support regularly to ensure that the software is working properly and safely. You should also avoid using a crack keygen to download the software for free, as it is illegal and risky.

                  -

                  FAQs

                  -

                  Here are some frequently asked questions about AVG PC TuneUp Utilities 2019:

                  -
                    -
                  1. Q: How do I install AVG PC TuneUp Utilities 2019?
                  2. -
                  3. A: You can install AVG PC TuneUp Utilities 2019 by downloading it from the official website of AVG Technologies. You can choose between the free trial version or the full version. After downloading the file, you can run it and follow the instructions on the screen to complete the installation process.
                  4. -
                  5. Q: How do I activate AVG PC TuneUp Utilities 2019?
                  6. -
                  7. A: You can activate AVG PC TuneUp Utilities 2019 by entering a valid license key that you can get from buying the full version or from a promotional offer. You can enter the license key in the software interface or in the activation window that pops up when you launch the software for the first time.
                  8. -
                  9. Q: How do I uninstall AVG PC TuneUp Utilities 2019?
                  10. -
                  11. A: You can uninstall AVG PC TuneUp Utilities 2019 by going to the Control Panel > Programs > Uninstall a program > AVG PC TuneUp Utilities 2019 > Uninstall. You can also use the uninstaller tool that comes with the software. You can find it in the Start menu > All Programs > AVG PC TuneUp Utilities 2019 > Uninstall AVG PC TuneUp Utilities 2019.
                  12. -
                  13. Q: How do I update AVG PC TuneUp Utilities 2019?
                  14. -
                  15. A: You can update AVG PC TuneUp Utilities 2019 by clicking on the Check for updates button in the software interface or in the notification area of your taskbar. You can also enable automatic updates in the settings menu of the software. You should always update your software to get the latest features, fixes, and security patches.
                  16. -
                  17. Q: How do I contact AVG Technologies for support?
                  18. -
                  19. A: You can contact AVG Technologies for support by visiting their official website and clicking on the Support tab. You can find various options such as FAQs, forums, chat, phone, email, etc. You can also access their support center from within the software interface by clicking on the Help button.
                  20. -
                  -

                  0a6ba089eb
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Autodata 3.38 Full Download Free The Ultimate Guide for Car Repair and Maintenance.md b/spaces/raedeXanto/academic-chatgpt-beta/Autodata 3.38 Full Download Free The Ultimate Guide for Car Repair and Maintenance.md deleted file mode 100644 index cf6edb481818b332c9b4372e140488f20afb3e61..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Autodata 3.38 Full Download Free The Ultimate Guide for Car Repair and Maintenance.md +++ /dev/null @@ -1,132 +0,0 @@ -
                  -

                  Autodata 3.38 Full Download Free: A Comprehensive Guide

                  -

                  If you are a professional mechanic or a car enthusiast, you might have heard of Autodata, a software that provides comprehensive information on various vehicles, including technical data, service manuals, wiring diagrams, diagnostic tools, and more. Autodata is a valuable resource for anyone who wants to repair, maintain, or modify their own or others' vehicles.

                  -

                  Autodata 3.38 full download free


                  DOWNLOAD ✒ ✒ ✒ https://tinourl.com/2uKZVa



                  -

                  However, Autodata is not a cheap software. The latest version, Autodata 4.0, costs around $500 per year for a single user license. That's why many people are looking for ways to download Autodata for free, especially the older version, Autodata 3.38, which is still widely used and compatible with many operating systems.

                  -

                  In this article, we will show you how to download Autodata 3.38 full version for free, as well as how to use it effectively and what are the risks and limitations of doing so. Read on to find out more.

                  -

                  What is Autodata 3.38 and why do you need it?

                  -

                  Autodata 3.38 is a software that was released in 2011 and contains information on over 17,000 vehicles from 80 manufacturers worldwide. It covers cars, vans, trucks, motorcycles, and more. It provides technical data such as engine specifications, torque settings, fluid capacities, service intervals, etc., as well as service manuals that guide you through various procedures such as changing oil, replacing brakes, adjusting valves, etc.

                  -

                  Autodata 3.38 also includes wiring diagrams that show you the electrical connections and components of different systems such as ignition, fuel injection, air conditioning, etc., as well as diagnostic tools that help you identify and troubleshoot various faults and errors in your vehicle's computer system.

                  -

                  With Autodata 3.38, you can access all this information in an easy-to-use interface that allows you to search by vehicle make, model, year, engine type, etc., or by specific topics such as emissions control, steering system, suspension system, etc.

                  -

                  Autodata 3.38 is a useful software for anyone who wants to save money and time on vehicle repairs and maintenance, or who wants to learn more about how vehicles work and how to improve their performance and efficiency.

                  -

                  How to download Autodata 3.38 for free?

                  -

                  If you want to download Autodata 3.38 full version for free, you will need to follow these steps:

                  -

                  Step 1: Find a reliable source for Autodata 3.38 download

                  -

                  The first step is to find a website that offers Autodata 3.38 download for free. There are many websites that claim to do so, but not all of them are trustworthy or safe. Some of them may contain malware or viruses that can harm your computer or steal your personal information.

                  -

                  To avoid such risks, you should look for websites that have positive reviews from other users who have downloaded Autodata from them before. You should also check the file size and format of the download link to make sure it matches the original Autodata setup file.

                  -

                  Autodata 3.38 eng 2010 carsoftos.com torrent
                  -Autodata 3.38 Google Drive link
                  -Autodata 3.38 zip file 4shared
                  -Autodata 3.38 full version exclusive
                  -Autodata 3.38 repair manual for cars
                  -Autodata 3.38 injection systems PINDATA
                  -Autodata 3.38 wiring diagrams and node layouts
                  -Autodata 3.38 parameters for adjusting toe-in
                  -Autodata 3.38 installing timing belts and chains
                  -Autodata 3.38 repairing air conditioners and airbags
                  -Autodata 3.38 ABS and other systems of European cars
                  -Autodata 3.38 Windows xp and 7 compatible
                  -Autodata 3.38 English interface language
                  -Autodata 3.38 program for car services
                  -Autodata 3.38 diagnostics for gasoline and diesel engines
                  -Autodata 3.38 latest version download
                  -Autodata 3.38 crack and keygen
                  -Autodata 3.38 serial number and activation code
                  -Autodata 3.38 online access and updates
                  -Autodata 3.38 free trial and demo
                  -Autodata 3.38 system requirements and specifications
                  -Autodata 3.38 user reviews and ratings
                  -Autodata 3.38 customer support and feedback
                  -Autodata 3.38 features and benefits
                  -Autodata 3.38 alternatives and competitors
                  -Autodata 3.38 how to install and use guide
                  -Autodata 3.38 tips and tricks for beginners
                  -Autodata 3.38 best practices and recommendations
                  -Autodata 3.38 common problems and solutions
                  -Autodata 3.38 FAQs and answers
                  -Autodata 3.38 forum and community
                  -Autodata 3.38 video tutorials and walkthroughs
                  -Autodata 3.38 blog posts and articles
                  -Autodata 3.38 news and updates
                  -Autodata 3.38 discounts and coupons
                  -Autodata 3.38 price and payment options
                  -Autodata 3.38 refund policy and guarantee
                  -Autodata 3.38 license agreement and terms of service
                  -Autodata 3.38 privacy policy and security measures
                  -Autodata 3.38 testimonials and success stories

                  -

                  One example of a reliable website that offers Autodata 3.38 download for free is https://autodatadownload.com/. This website has been verified by many users who have successfully downloaded and installed Autodata from it without any issues.

                  -

                  Step 2: Download and install Autodata 3.38 setup file

                  -

                  The next step is to download the Autodata 3.38 setup file from the website you have chosen in step 1. The setup file should be around 1.5 GB in size and should be in ISO format.

                  -

                  To download the setup file, you will need to click on the download link provided by the website and follow the instructions on how to complete the download process.

                  -

                  Once you have downloaded the setup file, you will need to extract it using a software such as WinRAR or WinZip.

                  -

                  After extracting the setup file, you will need to run it by double-clicking on it or right-clicking on it and choosing "Run as administrator".

                  -

                  The setup wizard will guide you through the installation process of Autodata 3.38 on your computer.

                  -

                  Step 3: Download and install Autodata 3.38 crack file

                  -

                  The third step is to download the Autodata 3.38 crack file from the same website where you downloaded the setup file in step 2.

                  -

                  The crack file is a small file that allows you to bypass the activation process of Autodata and use it without paying for a license.

                  -

                  To download the crack file, you will need to click on the download link provided by the website and follow the instructions on how to complete the download process.

                  -

                  Once you have downloaded the crack file, you will need to extract it using a software such as WinRAR or WinZip.

                  -

                  After extracting the crack file, you will need to copy it and paste it into the folder where you installed Autodata 3.38 on your computer. This folder should be located in C:\Program Files (x86)\Autodata Limited\ADBCD2.

                  -

                  When you paste the crack file into the folder, you will be asked to replace the existing file with the same name. You should choose "Yes" to confirm the replacement.

                  -

                  Step 4: Run Autodata 3.38 and enjoy its features

                  -

                  The final step is to run Autodata 3.38 and enjoy its features.

                  -

                  To run Autodata 3.38, you will need to go to the folder where you installed it on your computer and double-click on the file named "ADBCD.exe".

                  -

                  This will launch Autodata 3.38 on your computer and allow you to access its database and manuals.

                  -

                  You can now use Autodata 3.38 to find information on various vehicles and perform various tasks such as diagnosing and repairing problems, servicing and maintaining your vehicle, modifying and improving your vehicle's performance, etc.

                  -

                  How to use Autodata 3.38 effectively?

                  -

                  Now that you have downloaded and installed Autodata 3.38 for free, you might wonder how to use it effectively. Here are some tips and tricks that can help you get the most out of Autodata 3.38:

                  -

                  How to access Autodata 3.38 database and manuals

                  -

                  To access Autodata 3.38 database and manuals, you will need to use the main menu that appears on the left side of the screen when you run Autodata 3.38.

                  -

                  The main menu consists of several icons that represent different categories of information such as technical data, service manuals, wiring diagrams, diagnostic tools, etc.

                  -

                  To access a specific category of information, you will need to click on the corresponding icon and then select the vehicle make, model, year, engine type, etc., from the drop-down menus that appear on the top of the screen.

                  -

                  After selecting the vehicle details, you will see a list of topics that are relevant to that vehicle under each category of information. You can click on any topic to view its content in detail.

                  -

                  How to diagnose and repair various vehicle problems with Autodata 3.38

                  -

                  To diagnose and repair various vehicle problems with Autodata 3.38, you will need to use the diagnostic tools category of information that is represented by a wrench icon on the main menu.

                  -

                  The diagnostic tools category of information contains several subcategories such as fault codes, component testing, data stream, etc., that can help you identify and troubleshoot various faults and errors in your vehicle's computer system.

                  -

                  To use the diagnostic tools category of information, you will need to connect your vehicle's computer system to your computer using a compatible device such as an OBD-II scanner or a USB cable.

                  -

                  Then, you will need to select the diagnostic tools icon from the main menu and then select the subcategory that matches your device type from the drop-down menu that appears on the top of the screen.

                  -

                  After selecting the subcategory, you will see a list of topics that are relevant to your device type under each subcategory. You can click on any topic to view its content in detail.

                  -

                  For example, if you want to read fault codes from your vehicle's computer system using an OBD-II scanner, you will need to select "OBD-II" from the drop-down menu under "Fault codes" subcategory and then click on "Read fault codes" topic.

                  -

                  This will show you a table that displays all the fault codes that are stored in your vehicle's computer system along with their descriptions and possible causes and solutions.

                  -

                  How to update Autodata 3.38 regularly

                  -

                  To update Autodata 3.38 regularly, you will need to use the update feature that is represented by a globe icon on the main menu.

                  -

                  The update feature allows you to download and install the latest updates for Autodata 3.38 that contain new or updated information on various vehicles and topics.

                  -

                  To use the update feature, you will need to have an active internet connection and enough disk space on your computer.

                  -

                  Then, you will need to click on the update icon from the main menu and follow the instructions on how to complete the update process.

                  -

                  The update process may take some time depending on the size and number of updates available. You should not interrupt or cancel the update process once it has started.

                  -

                  After the update process is completed, you will need to restart Autodata 3.38 to apply the changes.

                  -

                  What are the risks and limitations of using Autodata 3.38 for free?

                  -

                  While downloading and using Autodata 3.38 for free may seem like a great deal, you should also be aware of the risks and limitations of doing so. Here are some of them:

                  -

                  Legal issues and copyright infringement

                  -

                  Downloading and using Autodata 3.38 for free is illegal and violates the terms and conditions of Autodata Limited, the company that owns and develops Autodata software.

                  -

                  By downloading and using Autodata 3.38 for free, you are infringing on their intellectual property rights and exposing yourself to potential legal actions from them or their authorized distributors.

                  -

                  You may also be breaking the laws of your country or region that prohibit piracy and unauthorized use of software.

                  -

                  You may face serious consequences such as fines, lawsuits, or even criminal charges if you are caught downloading and using Autodata 3.38 for free.

                  -

                  Malware and virus infection

                  -

                  Downloading and using Autodata 3.38 for free also exposes your computer to malware and virus infection that can harm your computer or steal your personal information.

                  -

                  As mentioned earlier, not all websites that offer Autodata 3.38 download for free are trustworthy or safe. Some of them may contain malware or viruses that can infect your computer when you download or run the setup or crack files.

                  -

                  Malware or viruses can cause various problems such as slowing down your computer, deleting or corrupting your files, displaying unwanted ads or pop-ups, stealing your passwords or credit card details, etc.

                  -

                  You may also infect other computers or devices that are connected to your computer via a network or a USB drive.

                  -

                  You may need to spend a lot of time and money to remove malware or viruses from your computer or to recover your lost data or identity.

                  -

                  Data loss and corruption

                  -

                  Downloading and using Autodata 3.38 for free also risks data loss and corruption that can affect your work or business.

                  -

                  As mentioned earlier, Autodata 3.38 is a software that contains information on various vehicles that is updated regularly by Autodata Limited.

                  -

                  By downloading and using Autodata 3.38 for free, you are not getting access to the latest updates that contain new or updated information on various vehicles and topics.

                  -

                  This means that you may be using outdated or inaccurate information that can lead to errors or mistakes in your work or business.

                  -

                  You may also lose access to some features or functions of Autodata 3.38 that require an active license or subscription.

                  -

                  You may also experience data loss or corruption due to malware or virus infection, disk failure, power outage, etc., that can damage your files or database.

                  -

                  You may need to spend a lot of time and money to restore your data or database or to redo your work or business.

                  -

                  Conclusion

                  -

                  In conclusion, Autodata 3.38 is a software that provides comprehensive information on various vehicles that can help you repair, maintain, or modify your own or others' vehicles.

                  -

                  However, downloading and using Autodata 3.38 for free is illegal and risky. You may face legal issues, malware infection, data loss, or corruption if you do so.

                  -

                  If you want to use Autodata 3.38 legally and safely, you should buy a license or subscription from Autodata Limited or their authorized distributors. You will get access to the latest updates, features, and support from them.

                  -

                  If you want to learn more about Autodata 3.38 or other related topics, you can visit their official website at https://www.autodata-group.com/.

                  -

                  Frequently Asked Questions

                  -

                  Q: What is the difference between Autodata 3.38 and Autodata 4.0?

                  - A: Autodata 4.0 is the latest version of Autodata software that was released in 2016. It has more features and information than Autodata 3.38 such as: - More vehicles covered (over 34,000 from over 140 manufacturers) - More topics covered (over 600 including hybrid and electric vehicles) - More languages supported (over 40 including Arabic, Chinese, Russian, etc.) - More user-friendly interface (with improved navigation, search, layout, etc.) - More online access (with cloud-based technology that allows access from any device)

                  Q: How can I get a license or subscription for Autodata 4.0?

                  -A: You can get a license or subscription for Autodata 4.0 from Autodata Limited or their authorized distributors in your country or region. You can visit their official website at https://www.autodata-group.com/ to find out more details.

                  Q: How can I contact Autodata Limited for support or feedback?

                  -A: You can contact Autodata Limited for support or feedback by using their online form at https://www.autodata-group.com/contact-us/ or by calling their phone number at +44 (0)3300 444444.

                  Q: How can I learn more about vehicle repair and maintenance?

                  -A: You can learn more about vehicle repair and maintenance by reading books, magazines, blogs, forums, videos, etc., that cover this topic. You can also enroll in courses, workshops, seminars, etc., that teach this topic.

                  Q: How can I improve my writing skills?

                  -A: You can improve your writing skills by practicing regularly, reading widely, learning from feedback, following guidelines, using tools, etc., that help you write better.

                  0a6ba089eb
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Cmo deshacerte del virus que convierte las carpetas en accesos directos con el codigo para mostrar carpetas ocultas por virus.md b/spaces/raedeXanto/academic-chatgpt-beta/Cmo deshacerte del virus que convierte las carpetas en accesos directos con el codigo para mostrar carpetas ocultas por virus.md deleted file mode 100644 index 12371f7ca25283381d44855b18d011e86b7dad3b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Cmo deshacerte del virus que convierte las carpetas en accesos directos con el codigo para mostrar carpetas ocultas por virus.md +++ /dev/null @@ -1,145 +0,0 @@ -
                  -

Code to show folders hidden by a virus

                  -

Have you noticed that some of your folders or files have disappeared from your computer or USB drive? Have you found suspicious shortcuts in place of your documents? If so, your device has probably been infected by a folder-hiding virus. These viruses are malicious programs that change the attributes of your files to make them invisible and so make them harder to recover. In this article we explain what these viruses are, how to recover the folders they have hidden and how to prevent future infections.

                  -

What are folder-hiding viruses?

                  -

Folder-hiding viruses are a class of malware that exploits weaknesses in the Windows operating system to alter the permissions and properties of your files and folders, making them invisible to File Explorer and to the user. These viruses usually spread through removable media such as USB sticks, external hard drives or SD cards, but they can also infect your computer if you download attachments from suspicious emails or programs from untrustworthy sources.
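To see the attribute trick for yourself, the short Python sketch below (an illustration added here, not part of the original article) walks a drive and reports every entry whose Windows attributes include Hidden or System, which is exactly what these infections set. It assumes Python 3 on Windows, and the E: drive letter is only an example:

    import os
    import stat

    DRIVE = "E:\\"  # example removable drive; adjust to your own

    def find_hidden(root):
        """Yield paths whose Windows attributes include HIDDEN or SYSTEM."""
        flags = stat.FILE_ATTRIBUTE_HIDDEN | stat.FILE_ATTRIBUTE_SYSTEM
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                try:
                    attrs = os.stat(path, follow_symlinks=False).st_file_attributes
                except OSError:
                    continue  # unreadable entry, skip it
                if attrs & flags:
                    yield path

    if __name__ == "__main__":
        for p in find_hidden(DRIVE):
            print("hidden/system:", p)

If the drive suddenly shows many hidden or system entries that you never marked that way, that is a strong hint that one of the infections described below is at work.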

                  -

                  codigo para mostrar carpetas ocultas por virus


                  Download Zip 🗸 https://tinourl.com/2uL0rG



                  -

Types of folder-hiding viruses

                  -

There are several types of folder-hiding viruses, but the most common ones are the following:

                  -
                    -
• Shortcut virus: this virus replaces your original folders with shortcuts that have the same name and icon. If you click on them, the virus runs and spreads to other drives, and it can also download other malicious programs onto your device.
• Autorun virus: this virus creates a file called autorun.inf on your removable device. That file contains instructions to run the virus automatically whenever you connect the device to your computer, and the virus can change the name and icon of your folders to make them look like executable files.
• Recycler virus: this virus creates a hidden folder called recycler or $recycle.bin on your removable device. The folder holds a copy of the virus and other malicious files, and the virus can modify the Windows registry so that you cannot view hidden files.
                  • -
                  -

Symptoms of a folder-hiding virus infection

                  -

Some of the clearest signs that your device has been infected by a folder-hiding virus are the following:

                  -
                    -
• You cannot see your original folders or files in File Explorer.
• You see shortcuts or executable files with the same name and icon as your folders.
• The space used on your device does not match the real size of your files.
• Your computer becomes slower or freezes frequently.
• Pop-up windows or error messages appear when you try to open your folders.
                  • -
                  -

How do you recover folders hidden by a virus?

                  -

Fortunately, there are several ways to recover folders hidden by a virus and remove these malicious programs from your device. Below we show you two methods you can use, depending on your preference and your level of computer experience.

                  -

                  como recuperar archivos ocultos por virus en usb
                  -comando para ver carpetas ocultas por virus en cmd
                  -programa para restaurar carpetas ocultas por virus
                  -solucionar problema de carpetas ocultas por virus en windows 10
                  -tutorial para mostrar carpetas ocultas por virus en windows 7
                  -codigo para eliminar virus que oculta carpetas
                  -pasos para recuperar carpetas ocultas por virus en pc
                  -herramienta para desocultar archivos ocultos por virus
                  -metodo para ver carpetas ocultas por virus en windows 8
                  -codigo para mostrar archivos ocultos por virus en mac
                  -como hacer visible carpetas ocultas por virus en linux
                  -comando para recuperar carpetas ocultas por virus en android
                  -programa para mostrar carpetas ocultas por virus en disco duro externo
                  -solucionar problema de archivos ocultos por virus en windows xp
                  -tutorial para restaurar carpetas ocultas por virus en windows vista
                  -codigo para limpiar virus que oculta archivos
                  -pasos para mostrar carpetas ocultas por virus en laptop
                  -herramienta para recuperar archivos ocultos por virus en memoria sd
                  -metodo para desocultar carpetas ocultas por virus en windows 11
                  -codigo para ver carpetas ocultas por virus en macbook
                  -como recuperar fotos ocultas por virus en celular
                  -comando para mostrar archivos ocultos por virus en terminal
                  -programa para eliminar virus que oculta carpetas en usb
                  -solucionar problema de carpetas invisibles por virus en windows 10
                  -tutorial para ver archivos ocultos por virus en windows 7
                  -codigo para desinfectar virus que oculta archivos en pc
                  -pasos para restaurar archivos ocultos por virus en usb
                  -herramienta para mostrar carpetas invisibles por virus en disco duro interno
                  -metodo para eliminar carpetas ocultas por virus en windows 8
                  -codigo para recuperar documentos ocultos por virus en mac
                  -como desocultar videos ocultos por virus en android
                  -comando para restaurar archivos invisibles por virus en cmd
                  -programa para ver carpetas invisibles por virus en memoria sd
                  -solucionar problema de archivos invisibles por virus en windows xp
                  -tutorial para eliminar archivos ocultos por virus en windows vista
                  -codigo para quitar virus que oculta carpetas en laptop
                  -pasos para ver archivos invisibles por virus en pc
                  -herramienta para eliminar carpetas invisibles por virus en disco duro externo
                  -metodo para mostrar documentos ocultos por virus en windows 11
                  -codigo para desocultar musica oculta por virus en macbook

                  -

                  Método 1: Usar el símbolo del sistema o CMD

                  -

                  El símbolo del sistema o CMD es una herramienta integrada en Windows que te permite ejecutar comandos para realizar diversas tareas en tu computadora. Una de ellas es cambiar los atributos de tus archivos y carpetas para hacerlos visibles nuevamente. Para usar este método, sigue estos pasos:

                  -

                  Pasos para usar el CMD

                  -
                    -
                  1. Conecta tu dispositivo extraíble a tu computadora y abre el explorador de archivos.
                  2. -
                  3. Identifica la letra asignada a tu dispositivo (por ejemplo, E:).
                  4. -
                  5. Abre el menú Inicio y escribe cmd en la barra de búsqueda.
                  6. -
                  7. Haz clic derecho sobre el resultado Símbolo del sistema y selecciona Ejecutar como administrador.
                  8. -
                  9. En la ventana del CMD, escribe la letra de tu dispositivo seguida de dos puntos (por ejemplo, E:) y presiona Enter.
                  10. -
                  11. Escribe el siguiente comando: attrib -h -r -s /s /d *.* y presiona Enter.
                  12. -
                  13. Espera a que el proceso termine y cierra la ventana del CMD.
                  14. -
                  15. Vuelve al explorador de archivos y comprueba si puedes ver tus carpetas originales.
                  16. -
                  -

                  Ventajas y desventajas del CMD

                  -

                  Este método tiene algunas ventajas y desventajas que debes tener en cuenta antes de usarlo:

                  -
                    -
                  • Ventajas:
                  • -
                      -
                    • Es gratuito y no requiere instalar ningún programa adicional.
                    • -
                    • Es rápido y efectivo para recuperar las carpetas ocultas por virus.
                    • -
                    • No elimina ni daña tus archivos originales.
                    • -
                    -
                  • Desventajas:
                  • -
                      -
                    • No elimina el virus ni previene futuras infecciones.
                    • -
                    • Puede ser difícil o confuso para usuarios principiantes o inexpertos.
                    • -
                    • Puede causar problemas si se usa un comando incorrecto o se cambian los atributos equivocados.
                    • -
                    -
                  -

Method 2: Use an antivirus program

Another way to recover folders hidden by viruses is to use an antivirus program that can detect and remove these malicious programs. There are many antivirus products on the market, both free and paid, but make sure you choose one that is trustworthy and up to date. To use this method, follow these steps:

                  -

Steps to use an antivirus

1. Download and install an antivirus program of your choice on your computer.
2. Connect your removable drive to your computer and open the antivirus program.
3. Select the option to scan or analyze your removable drive (the name may vary depending on the antivirus).
4. Wait for the program to detect the viruses and show you the results.
5. Select the option to delete or disinfect the infected files (the name may vary depending on the antivirus).
6. Close the antivirus program and open File Explorer.
7. Re-enable the option to show hidden files in Windows 10 if necessary.
8. Check whether you can see your original folders.

Advantages and disadvantages of an antivirus

This method also has some advantages and disadvantages that you should keep in mind before using it:

• Advantages:
  • It removes the virus and prevents future infections.
  • ...deleted for some reason. This option can be useful if you have lost your original files after deleting the shortcuts or formatting your removable drive. However, keep in mind that recovering deleted files is not always possible or guaranteed, since it depends on several factors such as the time elapsed, the condition of the device, and whether the data has been overwritten.
• Which programs for recovering deleted files do you recommend?
  There are many file recovery programs on the market, but not all of them offer the same level of effectiveness and ease of use. Based on our research and the opinions of independent experts, these are some of the best programs you can use to recover deleted files:
Program | Features | Price
Disk Drill | Recovers deleted files from any device and format; offers a preview of recoverable files; includes extra tools such as disk cleanup, cloning, and data protection; supports Windows and Mac OS. | $89 per year for 1 device (Pro version); $0 per year for up to 500 MB of recovery (Free version).
EaseUS Data Recovery Wizard | Recovers files deleted for any reason and in any situation; offers a preview of recoverable files; includes extra tools such as video repair, cloning, and partition management; supports Windows and Mac OS. | $69.95 per year for 1 device (Pro version); $0 per year for up to 2 GB of recovery (Free version).
Recuva | Recovers files deleted accidentally or by viruses; offers a preview of recoverable files; includes an advanced mode with more options and filters; supports Windows. | $19.95 one-time for up to 3 devices (Professional version); $0 one-time for unlimited use (Free version).
Stellar Data Recovery | Recovers files deleted for any cause and from any medium; offers a preview of recoverable files; includes extra tools such as photo repair, disk monitoring, and disk imaging; supports Windows and Mac OS. | $79.99 per year for 1 device (Standard version); $0 per year for up to 1 GB of recovery (Free version).
                      -
              -

              0a6ba089eb
              -
              -
              \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/David G Myers Psichologija 2008 Pdf 13 The Ultimate Resource for Psychology Students and Enthusiasts.md b/spaces/raedeXanto/academic-chatgpt-beta/David G Myers Psichologija 2008 Pdf 13 The Ultimate Resource for Psychology Students and Enthusiasts.md deleted file mode 100644 index b2c605e1cc2fd174b120c4719b53478cc00b5a6d..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/David G Myers Psichologija 2008 Pdf 13 The Ultimate Resource for Psychology Students and Enthusiasts.md +++ /dev/null @@ -1,156 +0,0 @@ - -

              David G Myers Psichologija 2008 Pdf 13: A Comprehensive Review

              -

              Psychology is the scientific study of human behavior and mental processes. It is a fascinating and diverse field that explores topics such as personality, cognition, emotion, motivation, development, social interaction, mental health, and more. Psychology can help us understand ourselves and others better, as well as improve our well-being and quality of life.

              -

              David G Myers Psichologija 2008 Pdf 13


              Download Zip ○○○ https://tinourl.com/2uKZkv



              -

              However, psychology is also a complex and challenging discipline that requires a lot of reading, research, and critical thinking. There are many books and resources available for students and enthusiasts of psychology, but not all of them are equally reliable, relevant, and engaging. How can you find a book that covers all the essential topics of psychology in a clear, comprehensive, and captivating way?

              -

              One of the best options is David G Myers Psichologija 2008 Pdf 13. This book is a translation of the 13th edition of Psychology by David G Myers and C Nathan DeWall, which is one of the most popular and widely used textbooks in psychology courses around the world. In this article, we will review this book and explain why it is a must-read for anyone interested in psychology.

              -

              Introduction

              -

              Who is David G Myers?

              -

              David G Myers is a professor of psychology at Hope College in Michigan, USA. He has a PhD in social psychology from the University of Iowa and has published over 20 books and hundreds of articles on various topics in psychology. He is also an award-winning teacher and communicator who has received numerous honors for his contributions to psychology education and public understanding of science.

              -

              What is Psichologija 2008 Pdf 13?

              -

              Psichologija 2008 Pdf 13 is a Lithuanian translation of the 13th edition of Psychology by David G Myers and C Nathan DeWall. The book was published in 2018 by Alma littera, a leading publisher of academic books in Lithuania. The book has over 900 pages and contains 16 chapters that cover all the major areas of psychology. The book also includes many features that enhance the learning experience, such as:

              -

              David G Myers Psychology 2008 Pdf 13th edition
              -David G Myers Psichologija 2008 Pdf download free
              -David G Myers Psichologija 2008 Pdf summary and review
              -David G Myers Psichologija 2008 Pdf online access code
              -David G Myers Psichologija 2008 Pdf test bank and solutions
              -David G Myers Psichologija 2008 Pdf chapter outlines and notes
              -David G Myers Psichologija 2008 Pdf flashcards and quizzes
              -David G Myers Psichologija 2008 Pdf study guide and practice tests
              -David G Myers Psichologija 2008 Pdf instructor's manual and resources
              -David G Myers Psichologija 2008 Pdf ebook and audiobook
              -David G Myers Psichologija 2008 Pdf Lithuanian translation and edition
              -David G Myers Psichologija 2008 Pdf comparison with other textbooks
              -David G Myers Psichologija 2008 Pdf citation and reference style
              -David G Myers Psichologija 2008 Pdf ISBN and price
              -David G Myers Psichologija 2008 Pdf availability and shipping
              -David G Myers Psichologija 2008 Pdf used and new copies
              -David G Myers Psichologija 2008 Pdf rent and sell options
              -David G Myers Psichologija 2008 Pdf course syllabus and requirements
              -David G Myers Psichologija 2008 Pdf lecture slides and videos
              -David G Myers Psichologija 2008 Pdf supplementary materials and exercises
              -David G Myers Psichologija 2008 Pdf key concepts and terms
              -David G Myers Psichologija 2008 Pdf major theories and perspectives
              -David G Myers Psichologija 2008 Pdf research methods and ethics
              -David G Myers Psichologija 2008 Pdf biological bases of behavior
              -David G Myers Psichologija 2008 Pdf sensation and perception
              -David G Myers Psichologija 2008 Pdf states of consciousness
              -David G Myers Psichologija 2008 Pdf learning and memory
              -David G Myers Psichologija 2008 Pdf thinking and intelligence
              -David G Myers Psichologija 2008 Pdf motivation and emotion
              -David G Myers Psichologija 2008 Pdf personality and assessment
              -David G Myers Psichologija 2008 Pdf psychological disorders and treatment
              -David G Myers Psichologija 2008 Pdf social psychology and behavior
              -David G Myers Psichologija 2008 Pdf human development and lifespan
              -David G Myers Psichologija 2008 Pdf gender and sexuality
              -David G Myers Psichologija 2008 Pdf stress and health psychology
              -David G Myers Psichologija 2008 Pdf cross-cultural psychology and diversity
              -David G Myers Psichologija 2008 Pdf positive psychology and happiness
              -David G Myers Psichologija 2008 Pdf applied psychology and careers
              -David G Myers Psichologija 2008 Pdf history of psychology and schools of thought
              -David G Myers Psichologija 2008 Pdf critical thinking and scientific inquiry skills in psychology

              -
                -
              • Learning objectives that summarize the main points of each section
              • -
              • Key terms that highlight the important concepts and definitions
              • -
              • Review questions that test your comprehension and recall
              • -
              • Critical thinking questions that challenge you to apply your knowledge and analyze different scenarios
              • -
              • Research focus boxes that showcase the latest findings and methods in psychology
              • -
              • Psychology in everyday life boxes that illustrate how psychology relates to real-world issues and situations
              • -
              • Diversity matters boxes that explore how culture, gender, ethnicity, and other factors influence human behavior and cognition
              • -
              • Visual summaries that provide a graphical overview of each chapter
              • -
              • Glossary that explains all the key terms in detail
              • -
              • References that list all the sources used in the book
              • -
              -

              Why is it important to read this book?

              -

              This book is important to read because it provides a comprehensive, accurate, and engaging introduction to psychology. Whether you are a student taking a psychology course, a professional working in a related field, or a curious individual who wants to learn more about human nature, this book will help you achieve your goals. By reading this book, you will:

              -
                -
              • Gain a solid foundation of psychological knowledge and skills
              • -
              • Develop a critical and scientific attitude towards psychological phenomena
              • -
              • Appreciate the diversity and complexity of human behavior and cognition
              • -
              • Understand how psychology can help you improve your personal and social life
              • -
              • Discover new perspectives and insights on yourself and others
              • -
              • Foster your curiosity and passion for psychology
              • -
              -

              Main Features of Psichologija 2008 Pdf 13

              -

              In this section, we will briefly describe the main features of each chapter of Psichologija 2008 Pdf 13. We will also provide some examples of the topics covered in each chapter.

              -

              Theoretical Perspectives and Research Methods

              -

              This chapter introduces the history, goals, perspectives, subfields, ethics, and methods of psychology. It explains how psychologists use scientific inquiry to answer questions about human behavior and cognition. Some of the topics covered in this chapter are:

              -
                -
              • The origins and evolution of psychology as a discipline
              • -
              • The major theoretical approaches in psychology (behaviorism, psychoanalysis, humanism, cognitive science)
              • -
              • The main subfields of psychology (biological psychology, developmental psychology, social psychology, clinical psychology, etc.)
              • -
              • The ethical principles and guidelines that psychologists follow in research and practice
              • The steps of the scientific method (observation, hypothesis, experiment, analysis, conclusion)
              • -
              • The types of research designs (descriptive, correlational, experimental)
              • -
              • The methods of data collection (surveys, interviews, tests, observations, case studies)
              • -
              • The measures of central tendency (mean, median, mode) and variability (range, standard deviation)
              • -
              • The concepts of reliability, validity, bias, and confounding variables
              • -
              • The types of statistical tests (t-tests, ANOVA, chi-square)
              • -
              • The interpretation of graphs, tables, and charts
              • -
              -

              Biological Bases of Behavior

              -

              This chapter explores the structure and function of the nervous system and how it influences behavior and cognition. It describes how neurons communicate with each other and how hormones regulate physiological processes. It also examines how genes and environment interact to shape individual differences and behavior. Some of the topics covered in this chapter are:

              -
                -
              • The components of the nervous system (central nervous system, peripheral nervous system)
              • -
              • The divisions of the peripheral nervous system (somatic nervous system, autonomic nervous system)
              • -
              • The subdivisions of the autonomic nervous system (sympathetic nervous system, parasympathetic nervous system)
              • -
              • The types of neurons (sensory neurons, motor neurons, interneurons)
              • -
              • The parts of a neuron (cell body, dendrites, axon)
              • -
              • The process of neural transmission (action potential, synaptic transmission)
              • -
              • The types of neurotransmitters (acetylcholine, dopamine, serotonin, etc.)
              • -
              • The effects of drugs on neurotransmission (agonists, antagonists)
              • -
              • The types of hormones (steroid hormones, peptide hormones)
              • -
              • The functions of the endocrine system (pituitary gland, thyroid gland, adrenal glands, etc.)
              • -
              • The structure and function of the brain (brainstem, cerebellum, limbic system, cerebral cortex)
              • -
              • The methods of studying brain activity (EEG, PET scan, MRI scan)
              • -
              • The localization and lateralization of brain functions (Broca's area, Wernicke's area)
              • -
              • The role of neuroplasticity and neurogenesis in brain development and recovery
              • -
              • The concepts of genotype and phenotype
              • -
              • The methods of studying genetic influences on behavior (twin studies, adoption studies)
              • -
              • The effects of epigenetics and gene-environment interactions on behavior
              -

              Sensation and Perception

              -

              This chapter investigates how we receive and interpret information from our senses. It explains how sensory receptors and neural pathways transduce physical stimuli into neural signals. It also discusses how perceptual processes organize and interpret sensory information based on prior knowledge and expectations. Some of the topics covered in this chapter are:

              -
              • The difference between sensation and perception
              • The principles of psychophysics (thresholds, signal detection theory, Weber's law, etc.)
              • The properties and functions of the five senses (vision, hearing, smell, taste, touch)
              • The anatomy and physiology of the sensory organs and pathways (eye, ear, nose, tongue, skin)
              • The types and characteristics of sensory stimuli (light waves, sound waves, odor molecules, taste buds, pressure receptors)
              • The theories and models of color vision (trichromatic theory, opponent-process theory, color constancy, etc.)
              • The phenomena and illusions of visual perception (depth perception, binocular cues, monocular cues, gestalt principles, etc.)
              • The factors and effects of perceptual adaptation and perceptual set
              • The causes and consequences of sensory deprivation and sensory overload
              • The similarities and differences between human and animal senses
              -

              Learning and Memory

              -

              This chapter examines how we acquire, store, and retrieve information and skills. It describes the main types and stages of memory and the factors that affect memory performance. It also explains the basic principles and applications of learning theories. Some of the topics covered in this chapter are:

                  -
            1. The types of memory (sensory memory, short-term memory, long-term memory)
            2. -
            3. The stages and processes of memory (encoding, storage, retrieval)
            4. -
            5. The models and structures of memory (Atkinson-Shiffrin model, working memory model, Baddeley's model)
            6. -
            7. The types and examples of long-term memory (declarative memory, procedural memory, semantic memory, episodic memory)
            8. -
            9. The factors and strategies that enhance memory (rehearsal, chunking, mnemonics, imagery, elaboration, spacing effect, testing effect)
            10. -
            11. The factors and causes that impair memory (interference, decay, forgetting curve, amnesia, Alzheimer's disease)
            12. -
            13. The phenomena and errors of memory (priming, serial position effect, recency effect, primacy effect, recall, recognition, relearning, flashbulb memory, false memory, source amnesia)
            14. -
            15. The applications and implications of learning and memory for education and everyday life
          -

          Thinking, Language, and Intelligence

          -

          This chapter explores how we use mental processes to solve problems, communicate ideas, and measure abilities. It analyzes the components and characteristics of thinking, language, and intelligence. It also evaluates the controversies and issues related to these topics. Some of the topics covered in this chapter are:

          -
          • The definition and functions of thinking, language, and intelligence
          • The types and examples of thinking (concepts, prototypes, schemas, scripts)
          • The types and examples of problem-solving strategies (algorithms, heuristics, insight)
          • The types and examples of decision-making strategies (intuition, reasoning, judgment)
          • The types and examples of cognitive biases and errors (confirmation bias, availability heuristic, representativeness heuristic, framing effect, etc.)
          • The characteristics and functions of language (phonemes, morphemes, grammar, syntax, semantics, pragmatics)
          • The stages and factors of language development (babbling, one-word stage, two-word stage, telegraphic speech, overgeneralization, etc.)
          • The theories and hypotheses of language acquisition (behaviorist theory, nativist theory, interactionist theory, linguistic determinism, linguistic relativity)
          • The types and examples of language diversity and variation (dialects, accents, pidgins, creoles, bilingualism, multilingualism)
          • The definition and measurement of intelligence (IQ tests, standardization, norms, reliability, validity)
          • The types and theories of intelligence (general intelligence, fluid intelligence, crystallized intelligence, multiple intelligences, triarchic theory, emotional intelligence)
          • The factors and influences on intelligence (genetics, environment, culture, gender, motivation)
          • The issues and controversies related to intelligence (intelligence quotient vs. intelligence quality, nature vs. nurture, stability vs. changeability, group differences vs. individual differences)
          -

          Motivation and Emotion

          -

          This chapter examines the processes that initiate, sustain, and regulate our behavior and feelings. It explains the main types and theories of motivation and emotion. It also describes the physiological, cognitive, and social aspects of motivation and emotion. Some of the topics covered in this chapter are:

          -
          • The definition and functions of motivation and emotion
          • The types and examples of motivation (intrinsic motivation, extrinsic motivation, drive-reduction theory, arousal theory, Maslow's hierarchy of needs)
          • The types and examples of emotion (basic emotions, complex emotions, positive emotions, negative emotions)
          • The theories and models of emotion (James-Lange theory, Cannon-Bard theory, Schachter-Singer theory, Lazarus's appraisal theory)
          • The physiological and neural mechanisms of motivation and emotion (hunger, thirst, sex, pain, pleasure, reward system, amygdala, hypothalamus)
          • The cognitive and social factors of motivation and emotion (expectancy-value theory, goal-setting theory, self-determination theory, attribution theory, social comparison theory)
          • The expression and recognition of emotion (facial expressions, body language, tone of voice, cultural differences)
          • The regulation and management of emotion (emotion regulation strategies, coping strategies, emotional intelligence skills)
          • The effects and outcomes of motivation and emotion on behavior and well-being (achievement motivation, affiliation motivation, aggression, altruism, stress, happiness)
          -

          Personality and Social Psychology

          -

          This chapter examines how we develop and display our unique patterns of thoughts, feelings, and actions. It describes the main types and theories of personality and the methods of assessing personality. It also explores how we interact with others and how social situations influence our behavior and cognition. Some of the topics covered in this chapter are:

          -
          • The definition and functions of personality
          • The types and examples of personality traits (the Big Five personality traits, the Myers-Briggs Type Indicator, etc.)
          • The theories and models of personality (psychoanalytic theory, humanistic theory, trait theory, social-cognitive theory)
          • The methods and tools of measuring personality (self-report inventories, projective tests, behavioral observations)
          • The factors and influences on personality development (genetics, environment, culture, gender, life events)
          • The definition and functions of social psychology
          • The types and examples of social cognition (attitudes, beliefs, stereotypes, prejudices)
          • The types and examples of social influence (conformity, obedience, persuasion, group dynamics)
          • The types and examples of social behavior (attraction, love, friendship, aggression, altruism)
          • The phenomena and theories of social psychology (the fundamental attribution error, the self-serving bias, cognitive dissonance, the bystander effect, social loafing, social facilitation, etc.)
          -

          Developmental Psychology

          -

          This chapter studies how we grow and change from conception to death. It explains the main stages and domains of human development (physical development, cognitive development, social development). It also discusses the major issues and controversies related to human development (nature vs. nurture, continuity vs. discontinuity, stability vs. change). Some of the topics covered in this chapter are:

          -
          • The methods and designs of studying human development (longitudinal studies, cross-sectional studies, cohort studies)
          • The processes and outcomes of prenatal development (zygote, embryo, fetus, teratogens)
          • The milestones and factors of physical development (reflexes, motor skills, puberty, menopause)
          • The stages and theories of cognitive development (Piaget's stage theory, Vygotsky's sociocultural theory, information-processing theory)
          • The concepts and phenomena of cognitive development (object permanence, conservation, egocentrism)
          • The stages and theories of moral development (Kohlberg's stage theory, Gilligan's gender perspective, Haidt's social intuitionist model)
          • The stages and theories of psychosocial development (Erikson's stage theory, Marcia's identity statuses, Baumrind's parenting styles)
          • The types and examples of attachment styles (secure attachment, insecure attachment, strange situation test)
          • The types and examples of temperament styles (easy temperament, difficult temperament, slow-to-warm-up temperament)
          • The influences and effects of family, peers, and media on social development
          • The challenges and opportunities of adulthood and aging (marriage, parenthood, career, retirement, health, cognition)
          -

          Abnormal Psychology and Therapy

          -

          This chapter investigates how we define, diagnose, explain, and treat psychological disorders. It describes the main types and characteristics of psychological disorders and the factors that contribute to their development. It also evaluates the effectiveness and ethics of various psychological therapies. Some of the topics covered in this chapter are:

          -
          • The definition and criteria of abnormal behavior
          • The classification and diagnosis of psychological disorders (DSM-5, ICD-10)
          • The types and examples of psychological disorders (anxiety disorders, mood disorders, psychotic disorders, personality disorders, etc.)
          • The causes and risk factors of psychological disorders (biological factors, psychological factors, social factors)
          • The perspectives and models of explaining psychological disorders (medical model, biopsychosocial model, diathesis-stress model)
          • The definition and goals of psychotherapy
          • The types and examples of psychotherapy (psychoanalysis, humanistic therapy, cognitive-behavioral therapy, etc.)
          • The methods and techniques of psychotherapy (free association, interpretation, unconditional positive regard, active listening, cognitive restructuring, exposure therapy, etc.)
          • The evaluation and comparison of psychotherapy (evidence-based practice, outcome research, meta-analysis)
          • The ethical issues and challenges of psychotherapy (informed consent, confidentiality, competence, dual relationships, etc.)
          -

          Conclusion

          -

          In conclusion, David G Myers Psichologija 2008 Pdf 13 is a comprehensive review of psychology that covers all the essential topics in a clear, accurate, and engaging way. It is a great resource for anyone who wants to learn more about human behavior and mental processes. By reading this book, you will gain a solid foundation of psychological knowledge and skills that will help you understand yourself and others better. You will also develop a critical and scientific attitude towards psychological phenomena that will enhance your personal and social life. You will also discover new perspectives and insights on yourself and others that will foster your curiosity and passion for psychology.

          -

          FAQs

          -

          Here are some frequently asked questions about David G Myers Psichologija 2008 Pdf 13:

          -
            -
          1. Where can I buy or download this book?
          2. -

            You can buy this book from Alma littera's website or from other online or offline bookstores. You can also download this book from various websites that offer free pdf downloads. However, you should be careful about the quality and legality of these downloads.

            -
          3. Is this book suitable for beginners or advanced learners?
          4. -

            This book is suitable for both beginners and advanced learners. It explains each topic clearly and step by step, and it includes many learning aids, such as learning objectives, key terms, review questions, critical thinking questions, research focus boxes, psychology in everyday life boxes, diversity matters boxes, visual summaries, glossary, and references.

            -
          5. How can I use this book to prepare for exams or assignments?
          6. -

            You can use this book to prepare for exams or assignments by reviewing the main points and concepts of each chapter. You can also test your comprehension and recall by answering the review questions and critical thinking questions. You can also apply your knowledge and analyze different scenarios by using the research focus boxes and psychology in everyday life boxes. You can also expand your knowledge and explore new topics by using the references and further readings.

            -
          7. How can I use this book to improve my personal and social life?
          8. -

            You can use this book to improve your personal and social life by applying the psychological principles and findings to your own behavior and cognition. You can also use this book to understand and empathize with others better by learning about their perspectives and motivations. You can also use this book to enhance your well-being and happiness by using the psychological strategies and techniques that are proven to be effective.

            -
          9. How can I use this book to develop my curiosity and passion for psychology?
          10. -

            You can use this book to develop your curiosity and passion for psychology by exploring the fascinating and diverse topics that psychology offers. You can also use this book to discover new perspectives and insights on yourself and others that will challenge and inspire you. You can also use this book to join the scientific community of psychology by learning about the latest research and methods in psychology.

            -
          -

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/GPS-X Hydromantis Crack Save Time and Money with the Industry-Leading Software for Wastewater Design and Operation.md b/spaces/raedeXanto/academic-chatgpt-beta/GPS-X Hydromantis Crack Save Time and Money with the Industry-Leading Software for Wastewater Design and Operation.md deleted file mode 100644 index 185e52c0725873c57b76a74b9e296dc0f1f8694e..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/GPS-X Hydromantis Crack Save Time and Money with the Industry-Leading Software for Wastewater Design and Operation.md +++ /dev/null @@ -1,117 +0,0 @@ - -

          GPS X Hydromantis Crack: What You Need to Know

          -

          If you are interested in wastewater treatment plant design and optimization, you might have heard of GPS X Hydromantis, a powerful software tool that can help you with this task. But what is GPS X Hydromantis exactly, and why do some people want to crack it? In this article, we will answer these questions and more, so you can make an informed decision about whether cracking GPS X Hydromantis is worth it or not.

          -

          gps x hydromantis crack


          DOWNLOAD » https://tinourl.com/2uL28t



          -

          What is GPS X Hydromantis?

          -

          GPS X Hydromantis is a software package developed by Hydromantis Environmental Software Solutions, Inc., a company that specializes in water and wastewater treatment modeling and simulation. GPS X Hydromantis is one of their flagship products, and it is considered the industry's most advanced tool for wastewater treatment plant design and process optimization.

          -

          How does GPS X Hydromantis work?

          -

          GPS X Hydromantis works by simulating the dynamic behavior of wastewater treatment plants using mathematical models that represent the physical, chemical, and biological processes involved. The software allows you to create, edit, run, and analyze different scenarios and configurations for your plant, using a user-friendly graphical interface. You can also use GPS X Hydromantis to optimize your plant's performance, by applying various optimization methods and algorithms that can help you reduce costs, improve efficiency, and meet environmental regulations.
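To make the idea of dynamic simulation more concrete, here is a deliberately simplified Python sketch of the kind of mass-balance model such simulators integrate over time: a single completely mixed tank with first-order substrate removal. The equation, parameter values, and variable names are illustrative assumptions made for this article; they are not GPS X Hydromantis's actual models, data, or interface.

# Toy dynamic simulation of one completely mixed tank with first-order removal.
# Mass balance (illustrative, not the software's model):
#   dS/dt = (Q / V) * (S_in - S) - k * S

Q = 1000.0    # influent flow in m3/d (assumed value)
V = 500.0     # tank volume in m3 (assumed value)
S_in = 200.0  # influent substrate concentration in mg/L (assumed value)
k = 4.0       # first-order removal rate in 1/d (assumed value)

S = S_in      # start with the tank at the influent concentration
dt = 0.001    # time step in days
t = 0.0

while t < 2.0:                               # simulate two days of operation
    dS_dt = (Q / V) * (S_in - S) - k * S     # mass balance on the tank
    S += dS_dt * dt                          # explicit Euler integration step
    t += dt

print(f"Effluent substrate after {t:.2f} d: about {S:.1f} mg/L")

A full simulator chains many such balances (solids, oxygen, nutrients, biomass) across every unit in the plant and solves them with much more robust numerical integrators, which is exactly the kind of bookkeeping that makes a dedicated package attractive.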

          -

          What are the benefits of using GPS X Hydromantis?

          -

          Using GPS X Hydromantis can provide you with many benefits, such as:

          -
            -
          • Enhancing your engineering skills and knowledge by learning from a state-of-the-art software
          • -
          • Saving time and money by designing and optimizing your plant more effectively and accurately
          • -
          • Increasing your confidence and credibility by delivering high-quality results to your clients or stakeholders
          • -
          • Staying ahead of the competition by using the latest techniques and standards for wastewater treatment
          • -
          • Contributing to environmental sustainability by reducing your plant's environmental impact
          • -
          -

          Why do some people want to crack GPS X Hydromantis?

          -

          Despite its many benefits, GPS X Hydromantis is not cheap software. According to its official website, the price of a single-user license ranges from $4,000 to $20,000 USD per year, depending on the features and modules included. This means that not everyone can afford to buy or renew their license for GPS X Hydromantis. Therefore, some people might resort to cracking the software, which means bypassing its security measures and using it without paying for it.

          -

          What are the drawbacks of cracking GPS X Hydromantis?

          -

          Cracking GPS X Hydromantis might seem like a tempting option for some people who want to use the software without paying for it. However, cracking GPS X Hydromantis also comes with many drawbacks and risks, such as:

          -
            -
          • Violating the intellectual property rights of the software developers and exposing yourself to legal consequences
          • -
          • Compromising the quality and reliability of your results by using an outdated or modified version of the software
          • -
          • Endangering your computer system by downloading or installing malware or viruses along with the cracked software
          • -
          • Losing access to technical support and updates from the software developers
          • -
          • Harming your reputation and ethics by engaging in an illegal and unethical activity
          • -
          -

          What are the alternatives to cracking GPS X Hydromantis?

          -

          If you want to use GPS X Hydromantis but you cannot afford to buy or renew your license, there are some legal and ethical alternatives that you can consider, such as:

          -
            -
          • Applying for a free trial or a discounted license from the software developers if you qualify for their criteria (e.g., academic or non-profit use)
          • -
          • Using a free or low-cost alternative software that can perform similar functions as GPS X Hydromantis (e.g., GPS-X Lite or SimuWorks)
          • -
          • Hiring a professional consultant or service provider who has a valid license for GPS X Hydromantis and can perform the work for you
          • -
          • Saving up money or seeking funding sources to buy or renew your license for GPS X Hydromantis when possible
          • -
          -

          In conclusion, GPS X Hydromantis is a great piece of software for wastewater treatment plant design and optimization, but it is also an expensive one. Cracking GPS X Hydromantis might seem like an easy way to use the software without paying for it, but it also comes with many disadvantages and dangers. Therefore, we recommend that you avoid cracking GPS X Hydromantis and instead consider some legal and ethical alternatives that can help you achieve your goals.

          -

          Frequently Asked Questions

          -

          Here are some common questions and answers about GPS X Hydromantis and cracking:

          -

          gps x hydromantis software free download
          -gps x hydromantis license key generator
          -gps x hydromantis activation code
          -gps x hydromantis full version
          -gps x hydromantis tutorial pdf
          -gps x hydromantis serial number
          -gps x hydromantis patch
          -gps x hydromantis crack download
          -gps x hydromantis keygen
          -gps x hydromantis registration code
          -gps x hydromantis crack 2023
          -gps x hydromantis latest version
          -gps x hydromantis review
          -gps x hydromantis crack reddit
          -gps x hydromantis system requirements
          -gps x hydromantis crack mac
          -gps x hydromantis online
          -gps x hydromantis crack windows 10
          -gps x hydromantis user manual
          -gps x hydromantis crack linux
          -gps x hydromantis training course
          -gps x hydromantis crack android
          -gps x hydromantis demo
          -gps x hydromantis crack ios
          -gps x hydromantis support
          -gps x hydromantis crack apk
          -gps x hydromantis features
          -gps x hydromantis crack zip file
          -gps x hydromantis installation guide
          -gps x hydromantis crack rar file
          -gps x hydromantis price
          -gps x hydromantis crack torrent file
          -gps x hydromantis alternatives
          -gps x hydromantis crack mega link
          -gps x hydromantis benefits
          -gps x hydromantis crack google drive link
          -gps x hydromantis disadvantages
          -gps x hydromantis crack dropbox link
          -gps x hydromantis comparison
          -gps x hydromantis crack mediafire link
          -gps x hydromantis testimonials
          -gps x hydromantis crack 4shared link
          -gps x hydromantis case studies
          -gps x hydromantis crack zippyshare link
          -gps x hydromantis examples
          -gps x hydromantis crack youtube video link
          -gps x hydromantis faq
          -gps x hydromantis crack vimeo video link
          -gps x hydromantis forum

          - - - - - - - - - - - - - - - - - - - - - - - - - -
          Question | Answer
          What is cracking? | Cracking is a term used to describe the process of breaking or bypassing the security measures of a software product or system without authorization.
          Is cracking illegal? | Yes, cracking is illegal in most countries, as it violates the intellectual property rights of the software developers.
          Is cracking ethical? | No, cracking is unethical, as it deprives the software developers of their rightful income and rewards for their work.
          Where can I find cracked versions of GPS X Hydromantis? | We do not recommend that you look for cracked versions of GPS X Hydromantis online, as they are likely to be fake, outdated, modified, infected, or harmful.
          How can I get a valid license for GPS X Hydromantis? | You can get a valid license for GPS X Hydromantis by purchasing it from its official website or authorized resellers.
          -

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Gezginler Hunting Unlimited 2010 Crack Dosyasi Indir Enjoy the Thrill of Hunting with This Free Download.md b/spaces/raedeXanto/academic-chatgpt-beta/Gezginler Hunting Unlimited 2010 Crack Dosyasi Indir Enjoy the Thrill of Hunting with This Free Download.md deleted file mode 100644 index 43ef698140c997db191c36b90ef2ac166592f820..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Gezginler Hunting Unlimited 2010 Crack Dosyasi Indir Enjoy the Thrill of Hunting with This Free Download.md +++ /dev/null @@ -1,122 +0,0 @@ -
          -

          Gezginler Hunting Unlimited 2010 Crack Dosyasi Indir

          -

          Introduction

          -

          If you are a fan of hunting games, you might have heard of Hunting Unlimited 2010. This is a realistic and immersive hunting simulation game that lets you hunt various animals in different locations and scenarios. You can customize your weapons, gear, and vehicles, and challenge yourself with different modes and missions.

          -

          However, if you want to play Hunting Unlimited 2010 on your PC, you might encounter some problems. The game is not free, and you need to buy a license key to activate it. Moreover, the game might not run smoothly on some systems, and you might experience crashes, bugs, or errors.

          -

          gezginler hunting unlimited 2010 crack dosyasi indir


          DOWNLOADhttps://tinourl.com/2uL341



          -

          That's why some people look for a crack dosyasi for Hunting Unlimited 2010. A crack dosyasi is a modified file that bypasses the game's security and allows you to play it without a license key. By downloading a crack dosyasi for Hunting Unlimited 2010, you can enjoy the game for free and without any issues.

          -

          But where can you find a crack dosyasi for Hunting Unlimited 2010? One of the most popular websites that offer crack dosyalari for various games is Gezginler. Gezginler is a Turkish website that provides free downloads of software, games, drivers, and more. In this article, we will show you how to download a crack dosyasi for Hunting Unlimited 2010 from Gezginler and how to use it.

          -

          How to download a crack dosyasi for Hunting Unlimited 2010 from Gezginler?

          -

          Downloading a crack dosyasi for Hunting Unlimited 2010 from Gezginler is not difficult, but you need to follow some steps carefully. Here are the steps you need to take:

          -

          Step 1: Visit the Gezginler website

          -

          The first thing you need to do is to visit the Gezginler website. You can use any web browser to access it, but make sure you have an antivirus program installed on your PC. Some of the files on Gezginler might contain viruses or malware that can harm your computer.

          -

          The Gezginler website has a simple and user-friendly interface. You can see different categories of downloads on the left side of the page, such as Oyunlar (Games), Programlar (Programs), Suruculer (Drivers), and more. You can also use the search bar on the top right corner of the page to find what you are looking for.

          -

          gezginler hunting unlimited 2010 full version download
          -gezginler hunting unlimited 2010 serial key indir
          -gezginler hunting unlimited 2010 patch dosyasi indir
          -gezginler hunting unlimited 2010 crack nasil yapilir
          -gezginler hunting unlimited 2010 oyunu indir
          -gezginler hunting unlimited 2010 kurulumu
          -gezginler hunting unlimited 2010 hileleri
          -gezginler hunting unlimited 2010 sistem gereksinimleri
          -gezginler hunting unlimited 2010 mod indir
          -gezginler hunting unlimited 2010 turkce yama indir
          -gezginler hunting unlimited 2010 gameplay
          -gezginler hunting unlimited 2010 online oyna
          -gezginler hunting unlimited 2010 torrent indir
          -gezginler hunting unlimited 2010 free download
          -gezginler hunting unlimited 2010 activation code indir
          -gezginler hunting unlimited 2010 update dosyasi indir
          -gezginler hunting unlimited 2010 crackli indir
          -gezginler hunting unlimited 2010 demo indir
          -gezginler hunting unlimited 2010 trainer indir
          -gezginler hunting unlimited 2010 cheats
          -gezginler hunting unlimited 2010 tips and tricks
          -gezginler hunting unlimited 2010 review
          -gezginler hunting unlimited 2010 video indir
          -gezginler hunting unlimited 2010 soundtrack indir
          -gezginler hunting unlimited 2010 save dosyasi indir
          -gezginler hunting unlimited 2010 no cd crack indir
          -gezginler hunting unlimited 2010 multiplayer indir
          -gezginler hunting unlimited 2010 iso indir
          -gezginler hunting unlimited 2010 rar sifresi
          -gezginler hunting unlimited 2010 windows 10 uyumlu indir
          -gezginler hunting unlimited 2010 android indir
          -gezginler hunting unlimited 2010 mac indir
          -gezginler hunting unlimited 2010 linux indir
          -gezginler hunting unlimited 2010 steam indir
          -gezginler hunting unlimited 2010 origin indir
          -gezginler hunting unlimited 2010 epic games indir
          -gezginler hunting unlimited 2010 gog indir
          -gezginler hunting unlimited 2010 xbox one indir
          -gezginler hunting unlimited 2010 ps4 indir
          -gezginler hunting unlimited 2010 switch indir
          -gezginler hunting unlimited 2010 vr indir
          -gezginler hunting unlimited 2010 dlc indir
          -gezginler hunting unlimited 2010 editor indir
          -gezginler hunting unlimited 2010 wallpaper indir
          -gezginler hunting unlimited 2010 screensaver indir
          -gezginler hunting unlimited 2010 font indir
          -gezginler hunting unlimited 2010 logo indir
          -gezginler hunting unlimited 2010 theme indir
          -gezginler hunting unlimited 2010 icon indir
          -gezginler hunting unlimited 2010 guide indir

          -

          Step 2: Search for Hunting Unlimited 2010 crack dosyasi

          -

          The next thing you need to do is to search for Hunting Unlimited 2010 crack dosyasi on Gezginler. You can either type "Hunting Unlimited 2010 crack" in the search bar or click on Oyunlar > Aksiyon - Macera - RPG (Action - Adventure - RPG) > Hunting Unlimited 2010.

          -

          You will see a list of results related to Hunting Unlimited 2010. Some of them might be the full game download, while others might be patches, updates, or mods. You need to look for the one that says "crack" or "crack dosyasi" in its title or description.

          -

          Step 3: Choose a reliable download link

          -

          Once you find a result that offers a crack dosyasi for Hunting Unlimited 2010, you need to choose a reliable download link. Gezginler usually provides multiple download links for each file, but not all of them are safe or working. You need to check the ratings, comments, and file size of each link before clicking on it.

          -

          A good download link should have a high rating (at least four stars), positive comments from other users, and a reasonable file size (around 10 MB). A bad download link might have a low rating (one or two stars), negative comments from other users, or an unrealistic file size (too large or too small).

          -

          If you are not sure which link to choose, you can also use an online tool like VirusTotal to scan the link for any malicious content. VirusTotal is a free service that analyzes URLs and files for viruses, malware, and other threats. You can copy and paste the link into VirusTotal's search box and see if it detects any problems.

          -

          Step 4: Extract the crack dosyasi from the zip file

          -

          After choosing a reliable download link, you need to click on it and wait for the download to finish. The crack dosyasi for Hunting Unlimited 2010 will be in a zip file format, which means you need to extract it before using it.

          -

          To extract the zip file, you need to have a program like WinRAR or 7-Zip installed on your PC. These programs allow you to open and decompress zip files easily. You can right-click on the zip file and choose "Extract here" or "Extract to" from the menu.

          -

          You will see a folder with the same name as the zip file. Inside this folder, you will find one or more files with extensions like .exe or .dll. These are the crack dosyalari that you need to use.

          -

          Step 5: Copy and paste the crack dosyasi into the game folder

          -

          The final step is to copy the crack dosyasi into the game folder, which is the folder where you installed Hunting Unlimited 2010 on your PC. You can find it by following this path:

          -

          C:\Program Files (x86)\Hunting Unlimited\HuntingUnlimited.exe

          -

          You need to replace the original game file (HuntingUnlimited.exe) with the crack file (HuntingUnlimited.exe) that you downloaded from Gezginler. To do this, you need to right-click on the crack file and choose "Copy". Then, go to the game folder and right-click on an empty space and choose "Paste". You will be asked if you want to overwrite or replace the existing file. Click "Yes" or "OK".

          -

          You have successfully downloaded and installed a crack dosyasi for Hunting Unlimited 2010 from Gezginler!

          -

          How to use a crack dosyasi for Hunting Unlimited 2010?

          -

          Now that you have installed a crack dosyasi for Hunting Unlimited 2010 from Gezginler, you can use it to play the game without any restrictions. Here are the steps you need to take:

          -

          Step 1: Run the game as administrator

          -

          The first thing you need to do is to run the game as administrator. This will ensure that the game has full access to your system resources and prevent any errors or crashes. To do this, you need to right-click on the game file (HuntingUnlimited.exe) and choose "Run as administrator" from the menu.

          -

          Step 2: Enter any serial number when prompted

          -

          The next thing you need to do is to enter any serial number when prompted. The crack dosyasi will bypass the game's activation process, so you don't need to worry about the validity of the serial number. You can use any random combination of letters and numbers, such as ABCD-1234-EFGH-5678.

          -

          Step 3: Enjoy the full version of Hunting Unlimited 2010

          -

          The final thing you need to do is to enjoy the full version of Hunting Unlimited 2010. You can now access all the features and content of the game, such as different animals, locations, weapons, vehicles, modes, and missions. You can also play online with other players who have the same crack dosyasi.

          -

          Conclusion

          -

          In this article, we have shown you how to download a crack dosyasi for Hunting Unlimited 2010 from Gezginler and how to use it. By following these steps, you can play Hunting Unlimited 2010 for free and without any problems on your PC.

          -

          However, we also want to remind you that downloading and using a crack dosyasi for Hunting Unlimited 2010 is illegal and unethical. You are violating the game's copyright and terms of service, and you might face legal consequences or penalties. Moreover, you are depriving the game developers of their rightful income and support.

          -

          Therefore, we recommend that you buy a legitimate copy of Hunting Unlimited 2010 from an authorized source, such as Steam or Amazon. This way, you can enjoy the game legally and safely, and also support the game developers and their future projects.

          -

          FAQs

          -

          Here are some frequently asked questions about downloading a crack dosyasi for Hunting Unlimited 2010 from Gezginler:

Q: Is Gezginler a safe website?
A: Gezginler is a popular website that offers free downloads of software, games, drivers, and more. However, not all of the files on Gezginler are safe or reliable. Some of them might contain viruses or malware that can harm your computer. Therefore, you should always use an antivirus program and scan the files before downloading them.

Q: What are the system requirements for Hunting Unlimited 2010?
A: The minimum system requirements for Hunting Unlimited 2010 are:
- OS: Windows XP/Vista/7/8/10
- Processor: Pentium IV 1.4 GHz or equivalent
- Memory: 256 MB RAM
- Graphics: 64 MB DirectX 9 compatible video card
- DirectX: Version 9.0c
- Storage: 1 GB available space
- Sound Card: DirectX compatible sound card
The recommended system requirements for Hunting Unlimited 2010 are:
- OS: Windows XP/Vista/7/8/10
- Processor: Pentium IV 2 GHz or equivalent
- Memory: 512 MB RAM
- Graphics: 128 MB DirectX 9 compatible video card
- DirectX: Version 9.0c
- Storage: 1 GB available space
- Sound Card: DirectX compatible sound card

Q: What are some alternatives to Gezginler for downloading a crack dosyasi for Hunting Unlimited 2010?
A: If you don't want to use Gezginler for downloading a crack dosyasi for Hunting Unlimited 2010, you can try some other websites that offer similar files. However, we cannot guarantee their safety or reliability either. Some of these websites are:
- Oyunindir.vip
- Fullprogramlarindir.com
- Oyuncehennemi.com
- Torrentoyunindir.com
- Oyunindir.club

Q: What are some tips for playing Hunting Unlimited 2010?
A: If you want to improve your skills and enjoy Hunting Unlimited 2010 more, here are some tips for playing it:
- Use the right weapon for each animal. Different animals have different sizes, speeds, and behaviors. You need to choose a weapon that matches their characteristics and can kill them with one shot.
- Use the right gear for each location. Different locations have different terrains, climates, and vegetation. You need to choose gear that suits your environment and helps you blend in with your surroundings.
- Use the right vehicle for each scenario. Different scenarios have different objectives and challenges. You need to choose a vehicle that can help you achieve your goals and overcome your obstacles.
- Use the map and compass to navigate. The map and compass are essential tools for finding your way around the vast hunting areas. You can use them to locate animals, landmarks, waypoints, and objectives.
- Use the binoculars and scope to spot animals. The binoculars and scope are useful devices for spotting animals from a distance. You can use them to identify their species, gender, size, health, and behavior.
- Use the calls and lures to attract animals. The calls and lures are effective methods for attracting animals to your location. You can use them to mimic their sounds or smells and make them curious or aggressive.

Q: What are some similar games to Hunting Unlimited 2010?
A: If you like Hunting Unlimited 2010, you might also like some similar games that offer realistic and immersive hunting experiences. Some of these games are:
- The Hunter: Call of the Wild
- Deer Hunter Reloaded
- Cabela's Big Game Hunter Pro Hunts
- Hunting Simulator
- Carnivores: Dinosaur Hunter Reborn
          -

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/rajatus231/Speeech2Text2Story2Images2Video/app.py b/spaces/rajatus231/Speeech2Text2Story2Images2Video/app.py deleted file mode 100644 index 802d78aff8e7fa6fc5ed4494c961c6cf4b75cebb..0000000000000000000000000000000000000000 --- a/spaces/rajatus231/Speeech2Text2Story2Images2Video/app.py +++ /dev/null @@ -1,174 +0,0 @@ -import gradio as gr -from transformers import pipeline -import io, base64 -from PIL import Image -import numpy as np -import tensorflow as tf -import mediapy -import os -import sys -from huggingface_hub import snapshot_download - -import streamlit as st -import firebase_admin -from firebase_admin import credentials -from firebase_admin import firestore -import datetime -import tempfile -from typing import Optional -import numpy as np -from TTS.utils.manage import ModelManager -from TTS.utils.synthesizer import Synthesizer - - -# firestore singleton is a cached multiuser instance to persist shared crowdsource memory -@st.experimental_singleton -def get_db_firestore(): - cred = credentials.Certificate('test.json') - firebase_admin.initialize_app(cred, {'projectId': u'clinical-nlp-b9117',}) - db = firestore.client() - return db - -#start firestore singleton -db = get_db_firestore() - -# create ASR ML pipeline -asr = pipeline("automatic-speech-recognition", "facebook/wav2vec2-base-960h") - -# create Text Classification pipeline -classifier = pipeline("text-classification") - -# create text generator pipeline -story_gen = pipeline("text-generation", "pranavpsv/gpt2-genre-story-generator") - -# transcribe function -def transcribe(audio): - text = asr(audio)["text"] - return text - -def speech_to_text(speech): - text = asr(speech)["text"] - return text - -def text_to_sentiment(text): - sentiment = classifier(text)[0]["label"] - return sentiment - -def upsert(text): - date_time =str(datetime.datetime.today()) - doc_ref = db.collection('Text2SpeechSentimentSave').document(date_time) - doc_ref.set({u'firefield': 'Recognize Speech', u'first': 'https://huggingface.co/spaces/awacke1/Text2SpeechSentimentSave', u'last': text, u'born': date_time,}) - saved = select('Text2SpeechSentimentSave', date_time) - # check it here: https://console.firebase.google.com/u/0/project/clinical-nlp-b9117/firestore/data/~2FStreamlitSpaces - return saved - -def select(collection, document): - doc_ref = db.collection(collection).document(document) - doc = doc_ref.get() - docid = ("The id is: ", doc.id) - contents = ("The contents are: ", doc.to_dict()) - return contents - -def selectall(text): - docs = db.collection('Text2SpeechSentimentSave').stream() - doclist='' - for doc in docs: - r=(f'{doc.id} => {doc.to_dict()}') - doclist += r - return doclist - -# story gen -def generate_story(choice, input_text): - query = " <{0}> {1}".format(choice, input_text) - generated_text = story_gen(query) - generated_text = generated_text[0]['generated_text'] - generated_text = generated_text.split('> ')[2] - return generated_text - -# images gen -def generate_images(text): - steps=50 - width=256 - height=256 - num_images=4 - diversity=6 - image_bytes = image_gen(text, steps, width, height, num_images, diversity) - generated_images = [] - for image in image_bytes[1]: - image_str = image[0] - image_str = image_str.replace("data:image/png;base64,","") - decoded_bytes = base64.decodebytes(bytes(image_str, "utf-8")) - img = Image.open(io.BytesIO(decoded_bytes)) - generated_images.append(img) - return generated_images - -# reductionism - interpolate 4 images - todo - 
unhardcode the pattern -def generate_interpolation(gallery): - times_to_interpolate = 4 - generated_images = [] - for image_str in gallery: - image_str = image_str.replace("data:image/png;base64,","") - decoded_bytes = base64.decodebytes(bytes(image_str, "utf-8")) - img = Image.open(io.BytesIO(decoded_bytes)) - generated_images.append(img) - generated_images[0].save('frame_0.png') - generated_images[1].save('frame_1.png') - generated_images[2].save('frame_2.png') - generated_images[3].save('frame_3.png') - input_frames = ["frame_0.png", "frame_1.png", "frame_2.png", "frame_3.png"] - frames = list(util.interpolate_recursively_from_files(input_frames, times_to_interpolate, interpolator)) - mediapy.write_video("out.mp4", frames, fps=15) - return "out.mp4" - -# image generator -image_gen = gr.Interface.load("spaces/multimodalart/latentdiffusion") - -# video generator -os.system("git clone https://github.com/google-research/frame-interpolation") -sys.path.append("frame-interpolation") -from eval import interpolator, util - -ffmpeg_path = util.get_ffmpeg_path() -mediapy.set_ffmpeg(ffmpeg_path) -model = snapshot_download(repo_id="akhaliq/frame-interpolation-film-style") -interpolator = interpolator.Interpolator(model, None) - -demo = gr.Blocks() -with demo: - - audio_file = gr.inputs.Audio(source="microphone", type="filepath") - text = gr.Textbox() - label = gr.Label() - saved = gr.Textbox() - savedAll = gr.Textbox() - audio = gr.Audio(label="Output", interactive=False) - - b1 = gr.Button("Recognize Speech") - b2 = gr.Button("Classify Sentiment") - b3 = gr.Button("Save Speech to Text") - b4 = gr.Button("Retrieve All") - - input_story_type = gr.Radio(choices=['superhero', 'action', 'drama', 'horror', 'thriller', 'sci_fi'], value='sci_fi', label="Genre") - input_start_text = gr.Textbox(placeholder='A teddy bear outer space', label="Starting Text") - - gr.Markdown("1. Select a type of story, then write some starting text! Then hit the 'Generate Story' button to generate a story! Feel free to edit the generated story afterwards!") - button_gen_story = gr.Button("Generate Story") - gr.Markdown("2. After generating a story, hit the 'Generate Images' button to create some visuals for your story! (Can re-run multiple times!)") - button_gen_images = gr.Button("Generate Images") - gr.Markdown("3. 
After generating some images, hit the 'Generate Video' button to create a short video by interpolating the previously generated visuals!") - button_gen_video = gr.Button("Generate Video") - output_generated_story = gr.Textbox(label="Generated Story") - output_gallery = gr.Gallery(label="Generated Story Images") - output_interpolation = gr.Video(label="Generated Video") - - # Bind functions to buttons - button_gen_story.click(fn=generate_story, inputs=[input_story_type , input_start_text], outputs=output_generated_story) - button_gen_images.click(fn=generate_images, inputs=output_generated_story, outputs=output_gallery) - button_gen_video.click(fn=generate_interpolation, inputs=output_gallery, outputs=output_interpolation) - - b1.click(speech_to_text, inputs=audio_file, outputs=input_start_text ) - b2.click(text_to_sentiment, inputs=text, outputs=label) - b3.click(upsert, inputs=text, outputs=saved) - b4.click(selectall, inputs=text, outputs=savedAll) - -demo.launch(debug=True, enable_queue=True) \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Aspekte Neu B1 Plus Pdf Download !NEW!.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Aspekte Neu B1 Plus Pdf Download !NEW!.md deleted file mode 100644 index 982dd7c64d0f78a073b2451165e37b6a0480fa66..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Aspekte Neu B1 Plus Pdf Download !NEW!.md +++ /dev/null @@ -1,90 +0,0 @@ - -

          Aspekte Neu B1 Plus Pdf Download: A Comprehensive Guide

          - -

If you are learning German and want to improve your skills and prepare for the Zertifikat B1 and Zertifikat Deutsch exams, you might be interested in Aspekte Neu B1 Plus, a textbook for intermediate learners of German that covers various topics and aspects of the language and culture. In this article, we will explain what Aspekte Neu B1 Plus is, what it contains, how to use it, and how to download it as a PDF file.

          - -

          What is Aspekte Neu B1 Plus?

          - -

          Aspekte Neu B1 Plus is a textbook for learners of German as a foreign language that aims to consolidate and expand their knowledge of the language and prepare them for the level B2. It is part of the Aspekte Neu series, which consists of three textbooks for levels B1 plus, B2 and C1.

          -

          Aspekte Neu B1 Plus Pdf Download


          DOWNLOADhttps://urlgoal.com/2uCJel



          - -

          Aspekte Neu B1 Plus consists of 10 chapters that cover different topics, such as media, travel, art, history, health, education, work, environment, society and culture. Each chapter has four sections: Auftakt (introduction), Texte und Übungen (texts and exercises), Strukturen (structures) and Aussprache (pronunciation). The textbook also includes attractive opening pages and exciting cultural portraits that introduce the topics and provide authentic and interesting information.

          - -

          Aspekte Neu B1 Plus follows a modular and linear approach to teaching and learning. It offers a variety of exercises and activities that train the four skills (listening, reading, speaking and writing) and strategies (learning, communication and mediation). It also provides a systematic review and expansion of grammar and vocabulary. Moreover, it includes authentic films that motivate the learners and illustrate the topics.

          -

          - -

          Aspekte Neu B1 Plus is designed to prepare the learners for the Zertifikat B1 and Zertifikat Deutsch exams. It offers tips and tasks that familiarize them with the exam format and requirements. It also includes model tests that allow them to practice and assess their progress.

          - -

          What does Aspekte Neu B1 Plus contain?

          - -

          Aspekte Neu B1 Plus contains various components that complement each other and provide a comprehensive learning experience. The main components are:

          - -
            -
          • Lehrbuch (textbook): The textbook contains 10 chapters with texts, exercises, structures, pronunciation, films, opening pages and cultural portraits. It also includes an overview of grammar, vocabulary lists, transcripts of audio files and solutions to exercises.
          • Arbeitsbuch (workbook): The workbook contains additional exercises for each chapter of the textbook that reinforce and practice the skills, structures and vocabulary. It also includes an audio CD with listening exercises.
          • Intensivtrainer (intensive trainer): The intensive trainer contains supplementary exercises for each chapter of the textbook that deepen and extend the skills, structures and vocabulary. It also includes an audio CD with listening exercises.
          • Kursbuch (course book): The course book contains all the texts from the textbook in one volume. It is suitable for self-study or as a reference book.
          • Lehrerhandbuch (teacher's manual): The teacher's manual contains methodological tips, suggestions for lesson planning, additional activities, tests, answer keys and transcripts.
          • DVD: The DVD contains authentic films for each chapter of the textbook that illustrate the topics and stimulate discussion.
          - -

          All these components are available in print or digital format. The digital format allows you to access the materials online or offline on your computer or mobile device. You can also use interactive features such as audio files, videos, annotations or bookmarks.

          - -

          How to use Aspekte Neu B1 Plus?

          - -

Aspekte Neu B1 Plus can be used in different ways depending on your needs and preferences. You can use it as a course book in a classroom setting with a teacher or as a self-study book at home. You can also use it as supplementary material to practice or review specific topics or skills.

          - -

          To use Aspekte Neu B1 Plus effectively, you should follow these steps:

          - -
            -
1. Choose the component that suits your purpose: textbook, workbook, intensive trainer or course book.
2. Select the chapter that matches your level and interest: you can follow the order of the book or choose any chapter you like.
3. Read or watch the opening page or cultural portrait to get an overview of the topic and activate your prior knowledge.
4. Read or watch the texts or films in the Auftakt section to get familiar with the topic and vocabulary.
5. Do the exercises in the Texte und Übungen section to practice your skills and strategies.
6. Review or learn the structures in the Strukturen section to improve your grammar.
7. Do the exercises in the Aussprache section to improve your pronunciation.
8. Do more exercises in the workbook or intensive trainer to reinforce or extend your skills, structures and vocabulary.
9. Check your answers with the solutions in the textbook or online.
10. Take a model test in the textbook or online to prepare for the exam or assess your progress.
          - -

          You can also use other resources such as glossaries, grammar overviews or vocabulary lists to support your learning. You can also use online platforms such as Klett Augmented or Klett Sprachen App to access digital materials or interactive features.

          - -

          How to download Aspekte Neu B1 Plus as a PDF file?

          - -

          If you want to download Aspekte Neu B1 Plus as a PDF file, you have several options. You can buy a digital version of any component from the official website or other online platforms. You can also download free materials such as texts with tasks, solutions to exercises or glossaries from the official website. Alternatively, you can search for PDF files of Aspekte Neu B1 Plus on various websites or forums dedicated to language learning or e-book sharing. However, be careful when downloading PDF files from unknown sources, as they might contain viruses or malware that could harm your computer.

          - -

          Conclusion

          - -

          In this article, we have explained what Aspekte Neu B1 Plus is, what it contains, how to use it and how to download it as a PDF file. We hope you have found this information useful and interesting. Aspekte Neu B1 Plus is a great textbook for intermediate learners of German that covers various topics and aspects of the language and culture. It also prepares them for the Zertifikat B1 and Zertifikat Deutsch exams. If you want to learn more about Aspekte Neu B1 Plus and its features, you can visit the official website or watch some tutorials on YouTube. If you want to try Aspekte Neu B1 Plus for yourself, you can buy a print or digital version from the official website or download a PDF file at your own risk. Happy learning!

          -

          What are the Reviews of Aspekte Neu B1 Plus?

          - -

          Aspekte Neu B1 Plus is a popular and well-received textbook for intermediate learners of German. It has received many positive reviews from users and experts alike. Some of the reviews of Aspekte Neu B1 Plus are:

          - -
            -
          • "Aspekte Neu B1 Plus is a great textbook for learning German. It covers interesting topics and provides authentic and varied texts and exercises. It also prepares you well for the exams and helps you improve your grammar and pronunciation. I highly recommend it." - User review on Amazon
          • "Aspekte Neu B1 Plus is a comprehensive and modern textbook for intermediate learners of German. It offers a modular and linear approach to teaching and learning that suits different needs and preferences. It also includes authentic films and cultural portraits that enrich the learning experience. It is one of the best textbooks for this level." - Expert review on Langenscheidt
          • "Aspekte Neu B1 Plus is a fantastic textbook for intermediate learners of German. It covers a wide range of topics and aspects of the language and culture. It also provides a systematic review and expansion of grammar and vocabulary. It also includes digital materials and interactive features that make learning more fun and effective. It is a must-have textbook for this level." - User review on Goodreads
          - -

          These are some of the reviews of Aspekte Neu B1 Plus that show its quality and popularity. You can find more reviews on the official website or other online platforms.

          - -

          What are the FAQs about Aspekte Neu B1 Plus?

          - -

          If you have any questions or doubts about Aspekte Neu B1 Plus, you might find the answers in this section. We have collected some of the most frequently asked questions about Aspekte Neu B1 Plus and provided the answers below:

          - -
            -
1. Q: How can I buy Aspekte Neu B1 Plus?
   A: You can buy Aspekte Neu B1 Plus from the official website or other online platforms such as Amazon or Langenscheidt. You can choose between print or digital format, or buy both as a bundle.
2. Q: How can I access the digital materials of Aspekte Neu B1 Plus?
   A: You can access the digital materials of Aspekte Neu B1 Plus online or offline on your computer or mobile device. You need to register on Klett Augmented or Klett Sprachen App with your email address and password, and enter the code that comes with your book.
3. Q: How can I download Aspekte Neu B1 Plus as a PDF file?
   A: You can download Aspekte Neu B1 Plus as a PDF file from the official website or other online platforms. You need to buy a digital version of any component or download free materials such as texts with tasks, solutions to exercises or glossaries. You can also search for PDF files of Aspekte Neu B1 Plus on various websites or forums dedicated to language learning or e-book sharing.
4. Q: How can I use Aspekte Neu B1 Plus in a classroom setting?
   A: You can use Aspekte Neu B1 Plus in a classroom setting with a teacher or as a self-study book at home. You can also use it as supplementary material to practice or review specific topics or skills. You can follow the order of the book or choose any chapter you like.
5. Q: How can I prepare for the Zertifikat B1 and Zertifikat Deutsch exams with Aspekte Neu B1 Plus?
   A: You can prepare for the Zertifikat B1 and Zertifikat Deutsch exams with Aspekte Neu B1 Plus by following these steps:
   - Review the grammar, vocabulary, skills and strategies covered in each chapter.
   - Do the exercises in the Texte und Übungen, Aussprache, workbook and intensive trainer sections.
   - Check your answers with the solutions in the textbook or online.
   - Take a model test in the textbook or online.
   - Get feedback from your teacher or peers.
          - -

          These are some of the FAQs about Aspekte Neu B1 Plus that might help you understand it better. You can find more FAQs on the official website or other online platforms.

          -

          Conclusion

          - -

          In this article, we have explained what Aspekte Neu B1 Plus is, what it contains, how to use it and how to download it as a PDF file. We have also shared some reviews, tips and tricks and FAQs about Aspekte Neu B1 Plus. Aspekte Neu B1 Plus is a comprehensive and modern textbook for intermediate learners of German that covers various topics and aspects of the language and culture. It also prepares them for the Zertifikat B1 and Zertifikat Deutsch exams. If you want to learn more about Aspekte Neu B1 Plus and its features, you can visit the official website or watch some tutorials on YouTube. If you want to try Aspekte Neu B1 Plus for yourself, you can buy a print or digital version from the official website or download a PDF file at your own risk. Happy learning!

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Awave Studio 11 Download Crack Pes Fix.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Awave Studio 11 Download Crack Pes Fix.md deleted file mode 100644 index c33e4dbbd6bd6817b6ec89cf13e283e28a63050d..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Awave Studio 11 Download Crack Pes Fix.md +++ /dev/null @@ -1,14 +0,0 @@ -

          Awave Studio 11 Download Crack Pes


          DOWNLOAD ->>> https://urlgoal.com/2uCMxy



          - -June 11, 2563 B.C. —hanange 7383628160 prymneal. 01.24.2022 at 16:56. — 근는 물스타일 마스카이장애물. — 근는 뿐이 있으니까 참사하셔도 June 5, 2548 BC - Easy Audio CD Burner 3.7. trial software; download; 1.4 MB. Create CDs with your MP3 and WAV files created with Easy Audio CD Burner. Application ... Download Easy Audio CD Burner 3.7.3 for free, without registration. Easy Audio CD Burner ... Easy Audio CD Burner 3.7.3 Free Download -Easy CD-DA Extractor 17.1.2 | Free download for Windows... -19 Nov 2019 ... -Free Download Easy CD-DA Extractor 17.1.2 for Windows in English Free Download ... -17.0.2 (build 566) ... -Easy CD-DA Extractor 17.1.2 free download -Easy CD-DA Extractor 17.1.1 free download -14 Feb 2019 ... -Easy CD-DA Extractor 17.1.1 free download. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Battlefield 2 __LINK__ Crack Reloaded Skidrow.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Battlefield 2 __LINK__ Crack Reloaded Skidrow.md deleted file mode 100644 index 4e7df2b0cd85f5a546e21db57013eacad8184a88..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Battlefield 2 __LINK__ Crack Reloaded Skidrow.md +++ /dev/null @@ -1,34 +0,0 @@ -

          Battlefield 2 Crack Reloaded Skidrow


          Download ❤❤❤ https://urlgoal.com/2uCLvW



          - -co. kr. sears buy sony ps4 games ps4 games black friday 2019, buy ps4 games black friday in hong kong, buy ps4 games in singapore. The vendor will determine the retail price based on the number of vouchers, and when they are redeemed. 2 pc yang dikenal. - -Wii- U Virtual Console :: video games empress that are christian gamer girl - -New games for pc - -Item type: games gifts - -We buy in stores, and with a few coupons, we would pay much less than that. Buy ps4 games black friday in hong kong, buy ps4 games in singapore. The vendor will determine the retail price based on the number of vouchers, and when they are redeemed. Buy ps4 games black friday in hong kong, buy ps4 games in singapore. 2 pc yang dikenal. - -Buy ps4 games black friday in hong kong, buy ps4 games in singapore. EXCLUSIVE GAMES. Amazon. com, PlayStation Store. - -Order ps4 games black friday in hong kong. 16 hours ago POTatogasun11:43 AM. Buy ps4 games black friday in hong kong, buy ps4 games in singapore. The vendor will determine the retail price based on the number of vouchers, and when they are redeemed. - -Buy ps4 games black friday in hong kong, buy ps4 games in singapore. Our Store Location List. Our Store Location List. - -Verified about 24 hours ago. Verified about 24 hours ago. The vendor will determine the retail price based on the number of vouchers, and when they are redeemed. - -Watch Dogs Legion-EMPRESS. com, PlayStation Store. - -BEST SELLERS - -The vendor will determine the retail price based on the number of vouchers, and when they are redeemed. - -The World Cup and FIFA 19 are both big, big events in the gaming industry. The World Cup is an important event for FIFA. The FIFA 19 World Cup is an important, global event, that serves as a real time test of the global popularity of FIFA's latest iteration. - -One of the popular modes of the game is the training mode. It allows the player to customize his character and team. The game is also known for its multiplayer mode. - -The World Cup is a global 4fefd39f24
          -
          -
          -

          diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Crack Dongle See Electrical Expert V3r7.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Crack Dongle See Electrical Expert V3r7.md deleted file mode 100644 index 38b5a7695d6381c225ad3bcc6fcc307afe4a9167..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Crack Dongle See Electrical Expert V3r7.md +++ /dev/null @@ -1,76 +0,0 @@ -

          crack dongle see electrical expert v3r7


          Download Zip >>>>> https://urlgoal.com/2uCJEy



          - -oops, but I just see that I have a bunch of files in.config - - i downloaded a package and i have all the files now. How do i install it? - - i don't have a place to install it - - newguy : type : sudo nautilus - - so how do i install from the files i have? - - and go into the /home/you/.config - - or cd /home/you/.config - - ls -l - - and see the dir - - the files you put there? - - or - - ok dream I've got the path - - I'm in /home/newuser/.config - - cd /home/you/.config/ - - to see the files - - how do i install a package I download from a site? - - i have them but i don't have a place to install them - - jotik, is it a deb? - - no - - wilee-nilee : jotik is the package - -? - - you can do - - sudo dpkg -i thefilename.deb - - it will install it in /usr/local/bin - - wilee-nilee, what is a deb? - - but it will automatically install every dependency - - i don't know what deb is - - jotik : your package will be called.deb - - jotik : sudo dpkg -i thefilename.deb - - you can install it in /usr/local/bin - - its a program i downloaded - - dream, he probably has a folder in home named someapp - - no he doesn't - - - - jotik : have you opened the downloaded file in ubuntu software center - - jotik : by the way you can 4fefd39f24
          -
          -
          -

          diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Descargar _HOT_ Freemake Video Converter Sin Marca De Agua.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Descargar _HOT_ Freemake Video Converter Sin Marca De Agua.md deleted file mode 100644 index 936541cd1149d85222fbe589871f74272d071728..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Descargar _HOT_ Freemake Video Converter Sin Marca De Agua.md +++ /dev/null @@ -1,6 +0,0 @@ -

          descargar freemake video converter sin marca de agua


          DOWNLOADhttps://urlgoal.com/2uCJVA



          -
          -. makevideo converter free download for pc.make video converter for mac no watermark.make video converter free download.make video converter download.make video converter review. make video converter free.make video converter pc iso.make video converter download tutorial. make video converter no watermark. make video converter 2013. make video converter 4.0.2.3 full free. make video converter no watermark. Make Video Converter is a simple, yet powerful application that lets you combine audio and video into a single file that can then be played back by any video player on any platform.Make Video Converter is a simple, yet powerful application that lets you combine audio and video into a single file that can then be played back by any video player on any platform.Download Make Video Converter for PC Windows XP,7,8,8.1,10 and Mac. Make Video Converter is a simple, yet powerful application that lets you combine audio and video into a single file that can then be played back by any video player on any platform.Download Make Video Converter for PC Windows XP,7,8,8.1,10 and Mac. Open Source Audio/Video Converter. Download Make Video Converter for PC.Make Video Converter is a simple, yet powerful application that lets you combine audio and video into a single file that can then be played back by any video player on any platform. Make Video Converter is a simple, yet powerful application that lets you combine audio and video into a single file that can then be played back by any video player on any platform. Create video files.Convert multiple video files into one media file.Convert multiple video files into one media file. Converting video files is a pain when you have many videos to convert. Create Video Files. Free download Make Video Converter for Windows XP, Vista, 7, 8, 8.1, 10. Make Video Converter is a simple, yet powerful application that lets you combine audio and video into a single file that can then be played back by any video player on any platform.Make Video Converter is a simple, yet powerful application that lets you combine audio and video into a single file that can then be played back by any video player on any platform. Free Make Video Converter Download.Free Make Video Converter Download. Create a list of the videos you need to convert.Select the video files you want to convert.A simple interface lets you convert them all at once.Choose options for how you want to 4fefd39f24
          -
          -
          -

          diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Anime Fighting Jam Wing 1.2.24 _HOT_.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Anime Fighting Jam Wing 1.2.24 _HOT_.md deleted file mode 100644 index 0a3b22e046d30b3b9ee24956e047a1c2415279e8..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Anime Fighting Jam Wing 1.2.24 _HOT_.md +++ /dev/null @@ -1,16 +0,0 @@ -

          download anime fighting jam wing 1.2.24


          Download File ○○○ https://urlgoal.com/2uCMHo



          - -Anime Fighting Jam 1.2.1.2 is packed with some cool features that you can use in your game, for example you can use the right click method to play up to four users at the same time, put your fist to your ear, and listen to your opponent yell your name. The user interface has been entirely redesigned, to allow it be more user friendly. For more information, please visit … - -Anime Fighting Jam 1.2.1.1 is packed with some cool features that you can use in your game, for example you can use the right click method to play up to four users at the same time, put your fist to your ear, and listen to your opponent yell your name. The user interface has been entirely redesigned, to allow it be more user friendly. For more information, please visit … - -Anime Fighting Jam 1.2.0.3 is packed with some cool features that you can use in your game, for example you can use the right click method to play up to four users at the same time, put your fist to your ear, and listen to your opponent yell your name. The user interface has been entirely redesigned, to allow it be more user friendly. For more information, please visit … - -Anime Fighting Jam 1.2.0.2 is packed with some cool features that you can use in your game, for example you can use the right click method to play up to four users at the same time, put your fist to your ear, and listen to your opponent yell your name. The user interface has been entirely redesigned, to allow it be more user friendly. For more information, please visit … - -Anime Fighting Jam 1.2.0.1 is packed with some cool features that you can use in your game, for example you can use the right click method to play up to four users at the same time, put your fist to your ear, and listen to your opponent yell your name. The user interface has been entirely redesigned, to allow it be more user friendly. For more information, please visit … - -Anime Fighting Jam 1.2.0.0 is packed with some cool features that you can use in your game, for example you can use the right click method to play up to four users at the same time, put your fist to your ear, and listen to your opponent yell your name. The user interface has been entirely redesigned, to allow it be more user friendly. For more information 4fefd39f24
          -
          -
          -

          diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (wic Reset Key Serial Number).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (wic Reset Key Serial Number).md deleted file mode 100644 index d49d24f7d08779b8d5ebb586a8a43dfb15c3d524..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (wic Reset Key Serial Number).md +++ /dev/null @@ -1,26 +0,0 @@ - -

          How to Use WIC Reset Utility to Reset Your Epson Printer

          -

          If you have an Epson printer that is displaying an error message such as "The printer's ink pads are at the end of their service life" or "A printer's ink pad is at the end of its service life. Please contact Epson Support", you may need to use a software tool called WIC Reset Utility to reset your printer's waste ink counter.

          -

          HD Online Player (wic reset key serial number)


          DOWNLOAD >>>>> https://urlgoal.com/2uCMvM



          -

          WIC Reset Utility is a program that allows you to reset the waste ink counter of your Epson printer without replacing the ink pads or taking your printer to a service center. It can also help you to check the current value of the waste ink counter, reset the ink level counters, read and write serial numbers, and perform other maintenance functions.

          -

          WIC Reset Utility is compatible with most Epson printer models and supports Windows, Mac, and Linux operating systems. You can download it for free from https://www.wic.support/. However, to use it to reset your printer's waste ink counter, you will need to buy a reset key that is valid for one-time use only.

          -

          To use WIC Reset Utility to reset your Epson printer, follow these steps:

          -

          -
            -
1. Download and install WIC Reset Utility on your computer.
2. Connect your Epson printer to your computer with a USB cable.
3. Run WIC Reset Utility and select your printer model from the drop-down menu.
4. Click the "Read waste counters" button to check the current value of the waste ink counter. If it is more than 100%, you need to reset it.
5. Click the "Buy reset key" button to purchase a reset key online. You will receive an email with your reset key after payment.
6. Enter your reset key in the "Enter reset key here" box and click "OK".
7. Click the "Reset waste counters" button to reset your printer's waste ink counter.
8. Turn off your printer and then turn it on again.
9. Check if the error message is gone. If not, repeat the steps above.
          -

          Congratulations! You have successfully used WIC Reset Utility to reset your Epson printer. You can now continue using your printer without any problems. However, keep in mind that resetting the waste ink counter does not solve the problem of the ink pads being full. You may need to replace them or install an external waste ink tank in the future.
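For readers who like to see the logic spelled out, the short sketch below summarises the decision made in steps 4–7 above: read the waste counter, and only apply a (one-time) reset key if the counter has passed 100%. This is purely illustrative Python with hypothetical helper functions; WIC Reset Utility itself is a GUI application and does not expose a Python API like this.

```python
# Illustrative sketch only -- read_waste_counter() and apply_reset_key()
# are hypothetical stand-ins for the manual actions in the WIC Reset GUI.

WASTE_COUNTER_LIMIT = 100.0  # percent; above this the printer reports the ink-pad error


def maybe_reset(read_waste_counter, apply_reset_key, reset_key):
    """Return True if a reset was applied, False if the counter was still OK."""
    counter = read_waste_counter()        # step 4: "Read waste counters"
    if counter <= WASTE_COUNTER_LIMIT:
        return False                      # below the limit: no reset needed
    apply_reset_key(reset_key)            # steps 6-7: enter the one-time key and reset
    return True                           # each reset key is valid for one use only
```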

          - -

          WIC Reset Utility is a useful tool for Epson printer users who want to save money and time. It can help you to avoid buying new ink pads or taking your printer to a service center. It can also help you to extend the life of your printer and prevent it from being damaged by overflowing ink.

          -

          However, WIC Reset Utility is not a magic solution that can fix all the problems of your printer. It can only reset the waste ink counter, which is a software indicator of how much ink is collected in the ink pads. It cannot clean or replace the ink pads, which are physical components of your printer. It also cannot reset other counters or errors that may occur in your printer.

          -

          Therefore, you should use WIC Reset Utility with caution and only when necessary. You should also follow the instructions carefully and make sure that you have a valid reset key before using it. If you have any doubts or questions, you can contact the WIC Reset Utility support team or visit their website for more information.

          d5da3c52bf
          -
          -
          \ No newline at end of file diff --git a/spaces/regarex/SDXL-artists-browser/index.css b/spaces/regarex/SDXL-artists-browser/index.css deleted file mode 100644 index ef60668c873a3fce695f3f7ea063a460cc08d4c6..0000000000000000000000000000000000000000 --- a/spaces/regarex/SDXL-artists-browser/index.css +++ /dev/null @@ -1,791 +0,0 @@ -html, body { - background-color: black; - color: #fff; - font-family: 'Poppins', sans-serif; - font-size: 16px; - margin: 0; - padding: 0; - height: 100%; -} - -h3 { - margin: 5px; -} - -h4 { - margin: 0px; - font-weight: normal; - text-align: center; - line-height: 150%; -} - -#layout { - display: flex; - flex-direction: column; - height: 100%; -} - -#rows { - display: flex; - flex-direction: row; - flex-grow: 1; - overflow: auto; -} - -#toggles { - position: fixed; - top: 0; - left: 0; - width: calc(40% - 20px); - height: calc(100% - 80px); - display: flex; - flex-direction: column; - flex-wrap: wrap; - opacity: 1; - line-height: 140%; - padding: 20px; - overflow: auto; - transition: opacity 50ms 100ms linear; -} - -#gutter { - position: fixed; - z-index: 1; - top: 0; - left: 40%; - width: 50px; - height: calc(100% - 40px); - flex-shrink: 0; - background: black; - background: linear-gradient(90deg, rgba(0,0,0,0) 0%, rgba(0,0,0,1) 40%); -} - -#gutter:hover { - background: linear-gradient(90deg, rgba(255, 255, 255, 0) 0%, rgba(255, 255, 255, 0.1) 40%) -} - -#gutter div { - position: relative; - width: 20px; - height: 100%; - position: relative; - left: 20px; - border-right: 1px solid rgba(255,255,255,0.2); - cursor: col-resize; -} - -#gutter:hover div { - border-right: 1px solid rgba(255,255,255,0.4); -} - -#gutter div[data-tooltip]::before { - content: attr(data-tooltip); - opacity: 0; - transition: opacity 0ms 0ms linear; -} - -#gutter div[data-tooltip]:hover::before { - content: attr(data-tooltip); - position: absolute; - top: 20px; - left: 50%; - transform: translateX(-40%); - background-color: #555; - padding: 4px 8px; - border-radius: 4px; - box-shadow: 0 5px 10px black; - white-space: nowrap; - font-size: 12px; - color: white; - opacity: 1; - transition: opacity 100ms 500ms linear; - pointer-events: none; /* Make sure the tooltip doesn't interfere with other interactions */ -} - -#image-container { - display: flex; - flex-direction: row; - flex-wrap: wrap; - align-items: flex-start; - justify-content: space-around; - margin-left: calc(40% + 50px); - margin-top: 20px; - margin-bottom: 20px; - width: 100%; -} - -#alert { - position: fixed; - z-index: 1; - opacity: 0; - top: 10px; - right: -52px; - padding: 10px; - color: #00ffe6; - background-color: #008679; - border: 1px solid #00ffe6; - border-radius: 3px; - box-shadow: 0 5px 20px #0000007d; -} - -#alert.left { - left: -52px; - right: initial; -} - -#alert.show { - right: 12px; - opacity: 1; - transition: all 100ms ease-in; -} - -#alert.left.show { - left: 12px; - right: initial; - opacity: 1; - transition: all 100ms ease-in; -} - -footer { - flex-shrink: 0; - padding: 5px 10px; - text-align: center; - color: #aaa; - background-color: #222; - border-top: 1px solid black; - font-size: 12px; -} - -footer.special { - color: #00ffe6; - background-color: #008679; - font-size: 14px; -} - -footer > div { - position: relative; - opacity: 0.8; -} - -footer a { - text-decoration: none; - color: #fff; -} - -footer span strong { - font-weight: bold; - color: #fff; -} - -#close_footer { - position: absolute; - top: 0; - right: 0; -} - -footer #close_footer strong { - display: block; - background-color: #aaa; 
- color: #222; - border-radius: 40px; - line-height: 150%; - cursor: pointer; -} - -footer.special #close_footer strong { - background-color: #00ffe6; - color: #008679; -} - -#layout.footerHidden #toggles { - height: calc(100% - 40px); -} - -#layout.footerHidden #gutter { - height: calc(100%); -} - -#layout.footerHidden footer { - display: none; -} - -.divider { - border-bottom: 1px solid #333; - margin: 10px 40px 5px 0; -} - -#toggles.hide { - opacity: 0; - transition: opacity 50ms linear; -} - -#options_info, -#options_prompts, -#options_artist_sort, -#options_tag_sort { - margin-right: 10px; -} - -#options_info > span:first-child, -#options_prompts > span:first-child, -#options_artist_sort > span:first-child, -#options_tag_sort > span:first-child { - margin-left: 21px; -} - -#toggles label { - margin: 0 20px 0 0; - white-space: nowrap; - opacity: 0.8; - cursor: pointer; -} - -#toggles label:hover { - opacity: 1; -} - -#toggles #artistsShown { - margin: 0 0 0 21px; - white-space: nowrap; - position: relative; - top: 1px; - color: #ffe300; - opacity: 0.8; -} - -#toggles label.top_all { - font-weight: bold; -} - -#toggles label.top_control { - color: #ffe300; -} - -#toggles label.top_control.warning { - color: #ff0000; -} - -#toggles label.no_matches { - opacity: 0.3; - cursor: default; -} - -#toggles label.category { - color: #00d5c0; - font-weight: bold; - padding-bottom: 5px; - margin: 10px 40px 0 0; - border-bottom: 1px solid #333; -} - -#toggles label.hidden { - display: none; -} - -#toggles label .most_used_indicator { - display: inline-block; - width: 14px; - height: 14px; - visibility: hidden; - margin-right: -14px; - position: relative; - top: 1px; - left: 4px; - color: #ffe300; - font-style: normal; -} - -#toggles #artistsMatching { - opacity: 0.8; - cursor: default; -} - -#toggles .count { - opacity: 0.5; -} - -#toggles .link { - display: inline-block; - width: 20px; - height: 20px; - opacity: 0.7; - cursor: pointer; - box-sizing: border-box; - margin-left: 5px; - padding-left: 2px; - border-radius: 4px; - line-height: 130%; -} - -#toggles .link.selected { - background-color: #444; - opacity: 1; - cursor: default; -} - -#toggles .link:hover { - opacity: 1; -} - -#toggles .link:hover::after { - position: absolute; - top: 20px; - left: 20px; - background-color: black; - padding: 0px 4px; - border: 1px solid #777; - border-radius: 3px; - color: #ddd; - box-shadow: 0 5px 10px black; -} - -#infoI:hover::after { - content: 'instructions'; -} - -#infoA:hover::after { - content: 'about'; -} - -#infoX:hover::after { - content: 'export'; -} - -#promptA:hover::after { - content: 'artwork'; -} - -#promptP:hover::after { - content: 'portraits'; -} - -#promptL:hover::after { - content: 'landscapes'; -} - -#sortAA:hover::after { - content: 'alpha'; -} - -#sortAR:hover::after { - content: 'random'; -} - -#sortTA:hover::after { - content: 'alpha'; -} - -#sortTC:hover::after { - content: 'count'; -} - -.information { - display: none; - z-index: 2; - position: fixed; - top: 20px; - left: 20px; - width: calc(40% - 40px); - max-height: calc(100% - 110px); - padding: 20px; - overflow: auto; - background-color: #222; - border-radius: 2px; - border: 1px solid black; - box-shadow: 0 1px 0px #ffffff3d; -} - -.information div { - opacity: 0.8; -} - -.information h2, .information h3, .information ul{ - margin-top: 0; - margin-left: 0; -} - -.information h3 { - margin-bottom: 10px; -} - -.information a { - color: #00ffe7; - font-weight: bold; - text-decoration: none; -} - - -.information a:hover { - color: 
#fff; -} - - -.information.shown { - display: block; -} - -#instructions { -} - -#about { - -} - -#export textarea { - resize: vertical; - width: 100%; - height: 200px; -} - -#export .buttons { - display: flex; - flex-direction: row; -} - -#export .buttons div { - cursor: pointer; - opacity: 0.8; - padding: 10px; -} - -#export .buttons div:hover { - opacity: 1; -} - -#filtersHidingAll { - display: none; - font-size: 24px; - color: #444; - text-align: center; - font-weight: bold; - position: relative; - top: 50%; - transform: translate(0%, -50%); - margin: 0 40px; - line-height: 220%; -} - -#filtersHidingAll.shown { - display: block; -} - -.image-item { - position: relative; - display: flex; - flex-direction: column; - align-items: center; - padding: 10px; - width: 256px; - background-color: #222; - border-radius: 2px; - margin: 0 5px 20px 5px; - box-shadow: 0 1px 0px #ffffff3d; - border: 1px solid black; - overflow: hidden; -} - -.image-item.hidden { - display: none; -} - -.image-item > span { - height: 84px; - position: relative; - display: block; - width: 100%; -} - -.image-item h3 { - display: flex; - justify-content: center; - opacity: 0.8; - cursor: pointer; - height: 22px; -} - -.image-item h4 { - width: 258px; - height: 52px; - opacity: 0.5; - cursor: pointer; - overflow: hidden; - position: absolute; - left: -1px; - padding-bottom: 6px; - box-sizing: border-box; -} - -.image-item h3:hover { - opacity: 1; -} - -.image-item h4:hover { - z-index: 1; - height: initial; - opacity: 1; - background-color: #222; - border-bottom: 1px solid #111; - color: #aaa; -} - -.image-item .firstN { - margin-right: 8px; - white-space: nowrap; -} - -.image-item .lastN { - white-space: nowrap; -} - -.image-item > div { - width: 256px; - height: 256px; - text-align: center; - border: 1px solid black; - border-radius: 2px; - overflow: hidden; -} - -.image-item .imgTools { - display: flex; - flex-direction: row; - align-items: end; - height: 100%; - background-color: #666; - opacity: 0; - transition: opacity 200ms 50ms linear; -} - -.image-item:hover .imgTools { - opacity: 1; -} - -.image-item .imgTools > div { - position: relative; - opacity: 0.7; - cursor: pointer; -} - -.image-item .imgTools > div:hover { - opacity: 1; -} - -.image-item .imgTools span { - position: absolute; - display: block; - width: 24px; - height: 24px; - border-radius: 4px; - top: 50%; - left: 50%; - transform: translate(-50%, -50%); - box-sizing: border-box; - background-color: #545454; - box-shadow: 0 0 5px #777 -} - -.image-item .art_prev { - width: 50px; - height: 50px; - background-color: #333; - border-radius: 0px 4px 0px 0px; -} - -.image-item .art_next { - width: 50px; - height: 50px; - background-color: #333; - border-radius: 4px 0px 0px 0px; -} - -.image-item .art_star { - flex-grow: 1; - width: 128px; - height: 100%; -} - -.image-item .art_star span { - font-size: 48px; - width: 60px; - height: 60px; - line-height: 120%; - padding: 0; - filter: grayscale(100%); - background-color: initial; - box-shadow: none; -} - -.image-item .imgBox { - position: relative; - z-index: 0; - top: -256px; - left: 0px; - width: 256px; - aspect-ratio: 1 / 1.33; - overflow: hidden; - border-radius: 2px; - background-color: #111; - text-align: left; - cursor: pointer; - animation-name: reduce; - animation-duration: 100ms; - animation-timing-function: linear; - animation-iteration-count: 1; - animation-direction: forward; -} - -.image-item:hover .imgBox { - position: fixed; - z-index: 1; - top: 0px; - left: 20px; - width: 40%; - cursor: 
not-allowed; - transform: translateY(20px); - animation-name: enlarge; - animation-duration: 100ms; - animation-timing-function: east-out; - animation-iteration-count: 1; - animation-direction: forward; -} - -@keyframes enlarge { - 0% { - opacity: 0; - transform: translateY(0px); - } - 100% { - opacity: 1; - transform: translateY(20px); - } -} - -@keyframes reduce { - 0% { - opacity: 0; - } - 100% { - opacity: 1; - } -} - -.image-item .deprecated { - color: #888; - text-align: center; - display: block; - padding: 70px 20px 20px 20px; -} - -.image-item img { - display: block; - width: 256px; - position: absolute; - top: 0; -} - -.image-item .imgBox img.hidden { - display: none; -} - -.image-item:hover .imgBox img { - width: 100%; - z-index: 1; - box-shadow: -10px 10px 20px rgba(0,0,0,0.6); -} - -.image-item:hover .imgBox img.hidden { - display: initial; - width: 33%; - position: relative; - top: 75%; - box-shadow: initial; - z-index: 0; -} - -.image-item.favorite { - border: 1px solid #ffc10080; - box-shadow: 0 0px 15px #ffe20045; -} - -.image-item.favorite .art_star span { - filter: grayscale(0%); -} - -#layout.edit_mode #toggles { - width: calc(100% - 40px); - transition: width 200ms ease-out; -} - -#layout.edit_mode #gutter { - left: calc(100% - 40px); - transition: left 200ms ease-out; -} - -#layout.edit_mode #image-container { - opacity: 0.2; - margin-left: 100%; - overflow: hidden; - transition: width 200ms ease-out; -} - -#edit_most_used { - color: #ffe300; - opacity: 0.8; - cursor: pointer; - margin: 5px 0 0 21px; -} - -#edit_most_used:hover { - opacity: 1; -} - -#edit_most_used.hidden { - display: none; -} - -#layout.edit_mode #edit_most_used { - font-weight: bold; - color: #ff0000; -} - -#layout.edit_mode .top_control, -#layout.edit_mode .divider, -#layout.edit_mode #options_prompts, -#layout.edit_mode #options_tag_sort, -#layout.edit_mode #options_artist_sort, -#layout.edit_mode #options_info, -#layout.edit_mode .category .count { - visibility: hidden; -} - -#layout.edit_mode .category { - color: #fff; - opacity: 0.5; -} - -#layout.edit_mode .category:hover { - cursor: default; - opacity: 0.5; -} - -#layout.edit_mode [data-category-name="important"] { - opacity: 1; - color: #ffe300; -} - -#layout.edit_mode [data-category-name="important"]:hover { - opacity: 1; -} - -#layout.edit_mode #toggles .was_moved { - font-weight: bold; - color: #ffe300; -} - -#layout.edit_mode #toggles input { - visibility: hidden; -} - -#layout.edit_mode #toggles .most_used_indicator { - visibility: visible; -} diff --git a/spaces/renumics/whisper-commonvoice-noise-issues/run.py b/spaces/renumics/whisper-commonvoice-noise-issues/run.py deleted file mode 100644 index 1eb0ace737fc5cff324d40a773e1da3843a1a598..0000000000000000000000000000000000000000 --- a/spaces/renumics/whisper-commonvoice-noise-issues/run.py +++ /dev/null @@ -1,49 +0,0 @@ -from renumics import spotlight -from renumics.spotlight.analysis.typing import DataIssue -import pandas as pd -import numpy as np - - -if __name__ == "__main__": - df = pd.read_json("report-data-environmental-noise.json") - - data_issues = df["issue"].unique() - spotlight_data_issues = [] - data_issue_severity = [] - for issue in data_issues: - if issue == -1: - continue - issue_rows = np.where(df["issue"] == issue)[0].tolist() - issue_metric = df[df["issue"] == issue].iloc[0]["issue_metric"] - data_issue_severity.append(issue_metric) - new_issue = DataIssue( - severity="medium", - title=f"Issue {issue}: {issue_metric:.2f}", - description=f"Issue {issue}: 
{issue_metric:.2f}", - rows=issue_rows) - spotlight_data_issues.append(new_issue) - data_issue_order = np.argsort(data_issue_severity) - - data_issue_order = data_issue_order[::-1] - - - - - while True: - dtypes = { - "audio": spotlight.Audio, - "sg_emb_audio": spotlight.Embedding, - } - view = spotlight.show( - df, - dtype=dtypes, - layout="spotlight-layout-environmental-noise.json", - issues=np.array(spotlight_data_issues)[data_issue_order].tolist(), - port=7860, - host="0.0.0.0", - allow_filebrowsing=False - ) - - view.close() - - diff --git a/spaces/rerdscf/webui/app.py b/spaces/rerdscf/webui/app.py deleted file mode 100644 index ddc924c7ab892960b5e39ca043f30d4c1f71805d..0000000000000000000000000000000000000000 --- a/spaces/rerdscf/webui/app.py +++ /dev/null @@ -1,87 +0,0 @@ -import os -from subprocess import getoutput - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") - -os.system(f"git clone -b v1.5 https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") - -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? 
document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py") - -# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header---------------------------- -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py") -os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -# --------------------------------------------------------------------------------------------------------------------------------------------------- - -if "IS_SHARED_UI" in os.environ: - os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/") - - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - - os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}") - - os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding") -else: - # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py") - os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py") - - # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME") - #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study") - os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui") - - # Please duplicate this space and delete # character in front of 
the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt") - #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt") - #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt") - #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt") - #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt") - #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt") - - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt") - - #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt") - #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml") - - os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt") - os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.yaml") - - os.system(f"wget -q {os.getenv('MODEL_LINK')} -O 
/home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}") - os.system(f"wget -q {os.getenv('EMBED_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('EMBED_NAME')}") - - os.system(f"wget -q {os.getenv('EMBED_LINK1')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('EMBED_NAME1')}") - os.system(f"wget -q {os.getenv('EMBED_LINK2')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('EMBED_NAME2')}") - os.system(f"wget -q {os.getenv('EMBED_LINK3')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('EMBED_NAME3')}") - os.system(f"wget -q {os.getenv('EMBED_LINK4')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('EMBED_NAME4')}") - - os.system(f"wget -q {os.getenv('MODEL_LINK1')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME1')}") - os.system(f"wget -q {os.getenv('MODEL_LINK2')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME2')}") - os.system(f"wget -q {os.getenv('MODEL_LINK3')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME3')}") - os.system(f"wget -q {os.getenv('MODEL_LINK4')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME4')}") - - - os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --precision full --no-half --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test") diff --git a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/model/HGFilters.py b/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/model/HGFilters.py deleted file mode 100644 index 870b3c43c82d66df001eb1bc24af9ce21ec60c83..0000000000000000000000000000000000000000 --- a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/model/HGFilters.py +++ /dev/null @@ -1,146 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from ..net_util import * - - -class HourGlass(nn.Module): - def __init__(self, num_modules, depth, num_features, norm='batch'): - super(HourGlass, self).__init__() - self.num_modules = num_modules - self.depth = depth - self.features = num_features - self.norm = norm - - self._generate_network(self.depth) - - def _generate_network(self, level): - self.add_module('b1_' + str(level), ConvBlock(self.features, self.features, norm=self.norm)) - - self.add_module('b2_' + str(level), ConvBlock(self.features, self.features, norm=self.norm)) - - if level > 1: - self._generate_network(level - 1) - else: - self.add_module('b2_plus_' + str(level), ConvBlock(self.features, self.features, norm=self.norm)) - - self.add_module('b3_' + str(level), ConvBlock(self.features, self.features, norm=self.norm)) - - def _forward(self, level, inp): - # Upper branch - up1 = inp - up1 = self._modules['b1_' + str(level)](up1) - - # Lower branch - low1 = F.avg_pool2d(inp, 2, stride=2) - low1 = self._modules['b2_' + str(level)](low1) - - if level > 1: - low2 = self._forward(level - 1, low1) - else: - low2 = low1 - low2 = 
self._modules['b2_plus_' + str(level)](low2) - - low3 = low2 - low3 = self._modules['b3_' + str(level)](low3) - - # NOTE: for newer PyTorch (1.3~), it seems that training results are degraded due to implementation diff in F.grid_sample - # if the pretrained model behaves weirdly, switch with the commented line. - # NOTE: I also found that "bicubic" works better. - up2 = F.interpolate(low3, scale_factor=2, mode='bicubic', align_corners=True) - # up2 = F.interpolate(low3, scale_factor=2, mode='nearest) - - return up1 + up2 - - def forward(self, x): - return self._forward(self.depth, x) - - -class HGFilter(nn.Module): - def __init__(self, opt): - super(HGFilter, self).__init__() - self.num_modules = opt.num_stack - - self.opt = opt - - # Base part - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3) - - if self.opt.norm == 'batch': - self.bn1 = nn.BatchNorm2d(64) - elif self.opt.norm == 'group': - self.bn1 = nn.GroupNorm(32, 64) - - if self.opt.hg_down == 'conv64': - self.conv2 = ConvBlock(64, 64, self.opt.norm) - self.down_conv2 = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1) - elif self.opt.hg_down == 'conv128': - self.conv2 = ConvBlock(64, 128, self.opt.norm) - self.down_conv2 = nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1) - elif self.opt.hg_down == 'ave_pool': - self.conv2 = ConvBlock(64, 128, self.opt.norm) - else: - raise NameError('Unknown Fan Filter setting!') - - self.conv3 = ConvBlock(128, 128, self.opt.norm) - self.conv4 = ConvBlock(128, 256, self.opt.norm) - - # Stacking part - for hg_module in range(self.num_modules): - self.add_module('m' + str(hg_module), HourGlass(1, opt.num_hourglass, 256, self.opt.norm)) - - self.add_module('top_m_' + str(hg_module), ConvBlock(256, 256, self.opt.norm)) - self.add_module('conv_last' + str(hg_module), - nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0)) - if self.opt.norm == 'batch': - self.add_module('bn_end' + str(hg_module), nn.BatchNorm2d(256)) - elif self.opt.norm == 'group': - self.add_module('bn_end' + str(hg_module), nn.GroupNorm(32, 256)) - - self.add_module('l' + str(hg_module), nn.Conv2d(256, - opt.hourglass_dim, kernel_size=1, stride=1, padding=0)) - - if hg_module < self.num_modules - 1: - self.add_module( - 'bl' + str(hg_module), nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0)) - self.add_module('al' + str(hg_module), nn.Conv2d(opt.hourglass_dim, - 256, kernel_size=1, stride=1, padding=0)) - - def forward(self, x): - x = F.relu(self.bn1(self.conv1(x)), True) - tmpx = x - if self.opt.hg_down == 'ave_pool': - x = F.avg_pool2d(self.conv2(x), 2, stride=2) - elif self.opt.hg_down in ['conv64', 'conv128']: - x = self.conv2(x) - x = self.down_conv2(x) - else: - raise NameError('Unknown Fan Filter setting!') - - normx = x - - x = self.conv3(x) - x = self.conv4(x) - - previous = x - - outputs = [] - for i in range(self.num_modules): - hg = self._modules['m' + str(i)](previous) - - ll = hg - ll = self._modules['top_m_' + str(i)](ll) - - ll = F.relu(self._modules['bn_end' + str(i)] - (self._modules['conv_last' + str(i)](ll)), True) - - # Predict heatmaps - tmp_out = self._modules['l' + str(i)](ll) - outputs.append(tmp_out) - - if i < self.num_modules - 1: - ll = self._modules['bl' + str(i)](ll) - tmp_out_ = self._modules['al' + str(i)](tmp_out) - previous = previous + ll + tmp_out_ - - return outputs, tmpx.detach(), normx diff --git a/spaces/rhineJoke/baichuan/README.md b/spaces/rhineJoke/baichuan/README.md deleted file mode 100644 index 
0e1e1744ab26f694dda45e487991c2de1dcc48e9..0000000000000000000000000000000000000000 --- a/spaces/rhineJoke/baichuan/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Baichuan -emoji: 💻 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/fcos.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/fcos.py deleted file mode 100644 index d985bd02d7ca5c13e86dfdb9a7a5ed9b29d890cc..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/fcos.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class FCOS(SingleStageDetector): - """Implementation of `FCOS `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None, - init_cfg=None): - super(FCOS, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained, init_cfg) diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/transformer_deformable.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/transformer_deformable.py deleted file mode 100644 index ab3535447670ef33f4dbe0c127c9b1aaf098f1eb..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/focalnet_dino/models/dino/transformer_deformable.py +++ /dev/null @@ -1,670 +0,0 @@ -# ------------------------------------------------------------------------ -# DINO -# Copyright (c) 2022 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Modified from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -# ------------------------------------------------------------------------ - -import copy -import os -from typing import Optional, List -import math - -import torch -import torch.nn.functional as F -from torch import nn, Tensor -from torch.nn.init import xavier_uniform_, constant_, uniform_, normal_ - -from .util.misc import inverse_sigmoid -from projects.instance_segment_anything.ops.modules import MSDeformAttn - -from .utils import sigmoid_focal_loss, MLP, _get_activation_fn, gen_sineembed_for_position - -class DeformableTransformer(nn.Module): - def __init__(self, d_model=256, nhead=8, - num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=1024, dropout=0.1, - activation="relu", return_intermediate_dec=False, - num_feature_levels=4, dec_n_points=4, enc_n_points=4, - two_stage=False, two_stage_num_proposals=300, - use_dab=False, high_dim_query_update=False, no_sine_embed=False): - super().__init__() - - self.d_model = d_model - self.nhead = nhead - self.two_stage = two_stage - self.two_stage_num_proposals = two_stage_num_proposals - self.use_dab = use_dab - - encoder_layer = DeformableTransformerEncoderLayer(d_model, dim_feedforward, - dropout, activation, - num_feature_levels, nhead, enc_n_points) - self.encoder = DeformableTransformerEncoder(encoder_layer, num_encoder_layers) - - decoder_layer = DeformableTransformerDecoderLayer(d_model, dim_feedforward, - dropout, activation, - num_feature_levels, nhead, dec_n_points) - self.decoder = DeformableTransformerDecoder(decoder_layer, num_decoder_layers, return_intermediate_dec, - use_dab=use_dab, d_model=d_model, high_dim_query_update=high_dim_query_update, no_sine_embed=no_sine_embed) - - self.level_embed = nn.Parameter(torch.Tensor(num_feature_levels, d_model)) - - if two_stage: - self.enc_output = nn.Linear(d_model, d_model) - self.enc_output_norm = nn.LayerNorm(d_model) - self.pos_trans = nn.Linear(d_model * 2, d_model * 2) - self.pos_trans_norm = nn.LayerNorm(d_model * 2) - else: - if not self.use_dab: - self.reference_points = nn.Linear(d_model, 2) - - self.high_dim_query_update = high_dim_query_update - if high_dim_query_update: - assert not self.use_dab, "use_dab must be True" - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - for m in self.modules(): - if isinstance(m, MSDeformAttn): - m._reset_parameters() - if not self.two_stage and not self.use_dab: - xavier_uniform_(self.reference_points.weight.data, gain=1.0) - constant_(self.reference_points.bias.data, 0.) 
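        # level_embed stores one learned d_model-dimensional vector per feature level; it is added to the
        # positional embedding of every token from that level, and the normal_ call below initialises it.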
- normal_(self.level_embed) - - def get_proposal_pos_embed(self, proposals): - num_pos_feats = 128 - temperature = 10000 - scale = 2 * math.pi - - dim_t = torch.arange(num_pos_feats, dtype=torch.float32, device=proposals.device) - dim_t = temperature ** (2 * (dim_t // 2) / num_pos_feats) - # N, L, 4 - proposals = proposals.sigmoid() * scale - # N, L, 4, 128 - pos = proposals[:, :, :, None] / dim_t - # N, L, 4, 64, 2 - pos = torch.stack((pos[:, :, :, 0::2].sin(), pos[:, :, :, 1::2].cos()), dim=4).flatten(2) - return pos - - def gen_encoder_output_proposals(self, memory, memory_padding_mask, spatial_shapes): - N_, S_, C_ = memory.shape - base_scale = 4.0 - proposals = [] - _cur = 0 - for lvl, (H_, W_) in enumerate(spatial_shapes): - mask_flatten_ = memory_padding_mask[:, _cur:(_cur + H_ * W_)].view(N_, H_, W_, 1) - valid_H = torch.sum(~mask_flatten_[:, :, 0, 0], 1) - valid_W = torch.sum(~mask_flatten_[:, 0, :, 0], 1) - - grid_y, grid_x = torch.meshgrid(torch.linspace(0, H_ - 1, H_, dtype=torch.float32, device=memory.device), - torch.linspace(0, W_ - 1, W_, dtype=torch.float32, device=memory.device)) - grid = torch.cat([grid_x.unsqueeze(-1), grid_y.unsqueeze(-1)], -1) - - scale = torch.cat([valid_W.unsqueeze(-1), valid_H.unsqueeze(-1)], 1).view(N_, 1, 1, 2) - grid = (grid.unsqueeze(0).expand(N_, -1, -1, -1) + 0.5) / scale - wh = torch.ones_like(grid) * 0.05 * (2.0 ** lvl) - proposal = torch.cat((grid, wh), -1).view(N_, -1, 4) - proposals.append(proposal) - _cur += (H_ * W_) - output_proposals = torch.cat(proposals, 1) - output_proposals_valid = ((output_proposals > 0.01) & (output_proposals < 0.99)).all(-1, keepdim=True) - output_proposals = torch.log(output_proposals / (1 - output_proposals)) - output_proposals = output_proposals.masked_fill(memory_padding_mask.unsqueeze(-1), float('inf')) - output_proposals = output_proposals.masked_fill(~output_proposals_valid, float('inf')) - - output_memory = memory - output_memory = output_memory.masked_fill(memory_padding_mask.unsqueeze(-1), float(0)) - output_memory = output_memory.masked_fill(~output_proposals_valid, float(0)) - output_memory = self.enc_output_norm(self.enc_output(output_memory)) - return output_memory, output_proposals - - def get_valid_ratio(self, mask): - _, H, W = mask.shape - valid_H = torch.sum(~mask[:, :, 0], 1) - valid_W = torch.sum(~mask[:, 0, :], 1) - valid_ratio_h = valid_H.float() / H - valid_ratio_w = valid_W.float() / W - valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1) - return valid_ratio - - def forward(self, srcs, masks, pos_embeds, query_embed=None): - """ - Input: - - srcs: List([bs, c, h, w]) - - masks: List([bs, h, w]) - """ - assert self.two_stage or query_embed is not None - - # prepare input for encoder - src_flatten = [] - mask_flatten = [] - lvl_pos_embed_flatten = [] - spatial_shapes = [] - for lvl, (src, mask, pos_embed) in enumerate(zip(srcs, masks, pos_embeds)): - bs, c, h, w = src.shape - spatial_shape = (h, w) - spatial_shapes.append(spatial_shape) - - src = src.flatten(2).transpose(1, 2) # bs, hw, c - mask = mask.flatten(1) # bs, hw - pos_embed = pos_embed.flatten(2).transpose(1, 2) # bs, hw, c - lvl_pos_embed = pos_embed + self.level_embed[lvl].view(1, 1, -1) - lvl_pos_embed_flatten.append(lvl_pos_embed) - src_flatten.append(src) - mask_flatten.append(mask) - src_flatten = torch.cat(src_flatten, 1) # bs, \sum{hxw}, c - mask_flatten = torch.cat(mask_flatten, 1) # bs, \sum{hxw} - lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1) - spatial_shapes = torch.as_tensor(spatial_shapes, 
dtype=torch.long, device=src_flatten.device) - level_start_index = torch.cat((spatial_shapes.new_zeros((1, )), spatial_shapes.prod(1).cumsum(0)[:-1])) - valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1) - - # encoder - memory = self.encoder(src_flatten, spatial_shapes, level_start_index, valid_ratios, lvl_pos_embed_flatten, mask_flatten) - # import ipdb; ipdb.set_trace() - - # prepare input for decoder - bs, _, c = memory.shape - if self.two_stage: - output_memory, output_proposals = self.gen_encoder_output_proposals(memory, mask_flatten, spatial_shapes) - - # hack implementation for two-stage Deformable DETR - enc_outputs_class = self.decoder.class_embed[self.decoder.num_layers](output_memory) - enc_outputs_coord_unact = self.decoder.bbox_embed[self.decoder.num_layers](output_memory) + output_proposals - - topk = self.two_stage_num_proposals - topk_proposals = torch.topk(enc_outputs_class[..., 0], topk, dim=1)[1] - topk_coords_unact = torch.gather(enc_outputs_coord_unact, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, 4)) - topk_coords_unact = topk_coords_unact.detach() - reference_points = topk_coords_unact.sigmoid() - init_reference_out = reference_points - pos_trans_out = self.pos_trans_norm(self.pos_trans(self.get_proposal_pos_embed(topk_coords_unact))) - query_embed, tgt = torch.split(pos_trans_out, c, dim=2) - elif self.use_dab: - reference_points = query_embed[..., self.d_model:].sigmoid() - tgt = query_embed[..., :self.d_model] - tgt = tgt.unsqueeze(0).expand(bs, -1, -1) - init_reference_out = reference_points - else: - query_embed, tgt = torch.split(query_embed, c, dim=1) - query_embed = query_embed.unsqueeze(0).expand(bs, -1, -1) - tgt = tgt.unsqueeze(0).expand(bs, -1, -1) - reference_points = self.reference_points(query_embed).sigmoid() - # bs, num_quires, 2 - init_reference_out = reference_points - - # decoder - # import ipdb; ipdb.set_trace() - hs, inter_references = self.decoder(tgt, reference_points, memory, - spatial_shapes, level_start_index, valid_ratios, - query_pos=query_embed if not self.use_dab else None, - src_padding_mask=mask_flatten) - - inter_references_out = inter_references - if self.two_stage: - return hs, init_reference_out, inter_references_out, enc_outputs_class, enc_outputs_coord_unact - return hs, init_reference_out, inter_references_out, None, None - - -class DeformableTransformerEncoderLayer(nn.Module): - def __init__(self, - d_model=256, d_ffn=1024, - dropout=0.1, activation="relu", - n_levels=4, n_heads=8, n_points=4, - add_channel_attention=False, - use_deformable_box_attn=False, - box_attn_type='roi_align', - ): - super().__init__() - - # self attention - if use_deformable_box_attn: - self.self_attn = MSDeformableBoxAttention(d_model, n_levels, n_heads, n_boxes=n_points, used_func=box_attn_type) - else: - self.self_attn = MSDeformAttn(d_model, n_levels, n_heads, n_points) - self.dropout1 = nn.Dropout(dropout) - self.norm1 = nn.LayerNorm(d_model) - - # ffn - self.linear1 = nn.Linear(d_model, d_ffn) - self.activation = _get_activation_fn(activation, d_model=d_ffn) - self.dropout2 = nn.Dropout(dropout) - self.linear2 = nn.Linear(d_ffn, d_model) - self.dropout3 = nn.Dropout(dropout) - self.norm2 = nn.LayerNorm(d_model) - - # channel attention - self.add_channel_attention = add_channel_attention - if add_channel_attention: - self.activ_channel = _get_activation_fn('dyrelu', d_model=d_model) - self.norm_channel = nn.LayerNorm(d_model) - - @staticmethod - def with_pos_embed(tensor, pos): - return tensor if pos is None else tensor + pos 
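    # with_pos_embed above injects the positional encoding into the attention query only; the value tensor
    # passed to MSDeformAttn stays position-free. forward_ffn below is the usual feed-forward sub-layer:
    # linear -> activation -> dropout -> linear, followed by a residual add and LayerNorm.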
- - def forward_ffn(self, src): - src2 = self.linear2(self.dropout2(self.activation(self.linear1(src)))) - src = src + self.dropout3(src2) - src = self.norm2(src) - return src - - def forward(self, src, pos, reference_points, spatial_shapes, level_start_index, key_padding_mask=None): - # self attention - # import ipdb; ipdb.set_trace() - src2 = self.self_attn(self.with_pos_embed(src, pos), reference_points, src, spatial_shapes, level_start_index, key_padding_mask) - src = src + self.dropout1(src2) - src = self.norm1(src) - - # ffn - src = self.forward_ffn(src) - - # channel attn - if self.add_channel_attention: - src = self.norm_channel(src + self.activ_channel(src)) - - return src - - -class DeformableTransformerEncoder(nn.Module): - def __init__(self, encoder_layer, num_layers, norm=None): - super().__init__() - if num_layers > 0: - self.layers = _get_clones(encoder_layer, num_layers) - else: - self.layers = [] - del encoder_layer - self.num_layers = num_layers - self.norm = norm - - @staticmethod - def get_reference_points(spatial_shapes, valid_ratios, device): - reference_points_list = [] - for lvl, (H_, W_) in enumerate(spatial_shapes): - - ref_y, ref_x = torch.meshgrid(torch.linspace(0.5, H_ - 0.5, H_, dtype=torch.float32, device=device), - torch.linspace(0.5, W_ - 0.5, W_, dtype=torch.float32, device=device)) - ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, lvl, 1] * H_) - ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, lvl, 0] * W_) - ref = torch.stack((ref_x, ref_y), -1) - reference_points_list.append(ref) - reference_points = torch.cat(reference_points_list, 1) - reference_points = reference_points[:, :, None] * valid_ratios[:, None] - return reference_points - - def forward(self, src, spatial_shapes, level_start_index, valid_ratios, pos=None, padding_mask=None): - """ - Input: - - src: [bs, sum(hi*wi), 256] - - spatial_shapes: h,w of each level [num_level, 2] - - level_start_index: [num_level] start point of level in sum(hi*wi). - - valid_ratios: [bs, num_level, 2] - - pos: pos embed for src. 
[bs, sum(hi*wi), 256] - - padding_mask: [bs, sum(hi*wi)] - Intermedia: - - reference_points: [bs, sum(hi*wi), num_lebel, 2] - """ - output = src - # bs, sum(hi*wi), 256 - # import ipdb; ipdb.set_trace() - if self.num_layers > 0: - reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=src.device) - for _, layer in enumerate(self.layers): - output = layer(output, pos, reference_points, spatial_shapes, level_start_index, padding_mask) - - if self.norm is not None: - output = self.norm(output) - - return output - - -class DeformableTransformerDecoderLayer(nn.Module): - def __init__(self, d_model=256, d_ffn=1024, - dropout=0.1, activation="relu", - n_levels=4, n_heads=8, n_points=4, - use_deformable_box_attn=False, - box_attn_type='roi_align', - key_aware_type=None, - decoder_sa_type='ca', - module_seq=['sa', 'ca', 'ffn'], - ): - super().__init__() - self.module_seq = module_seq - assert sorted(module_seq) == ['ca', 'ffn', 'sa'] - - # cross attention - # self.cross_attn = MSDeformAttn(d_model, n_levels, n_heads, n_points) - if use_deformable_box_attn: - self.cross_attn = MSDeformableBoxAttention(d_model, n_levels, n_heads, n_boxes=n_points, used_func=box_attn_type) - else: - self.cross_attn = MSDeformAttn(d_model, n_levels, n_heads, n_points) - self.dropout1 = nn.Dropout(dropout) - self.norm1 = nn.LayerNorm(d_model) - - # self attention - self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout) - self.dropout2 = nn.Dropout(dropout) - self.norm2 = nn.LayerNorm(d_model) - - # ffn - self.linear1 = nn.Linear(d_model, d_ffn) - self.activation = _get_activation_fn(activation, d_model=d_ffn, batch_dim=1) - self.dropout3 = nn.Dropout(dropout) - self.linear2 = nn.Linear(d_ffn, d_model) - self.dropout4 = nn.Dropout(dropout) - self.norm3 = nn.LayerNorm(d_model) - - self.key_aware_type = key_aware_type - self.key_aware_proj = None - self.decoder_sa_type = decoder_sa_type - assert decoder_sa_type in ['sa', 'ca_label', 'ca_content'] - - if decoder_sa_type == 'ca_content': - self.self_attn = MSDeformAttn(d_model, n_levels, n_heads, n_points) - - - - - def rm_self_attn_modules(self): - self.self_attn = None - self.dropout2 = None - self.norm2 = None - - - @staticmethod - def with_pos_embed(tensor, pos): - return tensor if pos is None else tensor + pos - - def forward_ffn(self, tgt): - tgt2 = self.linear2(self.dropout3(self.activation(self.linear1(tgt)))) - tgt = tgt + self.dropout4(tgt2) - tgt = self.norm3(tgt) - return tgt - - def forward_sa(self, - # for tgt - tgt: Optional[Tensor], # nq, bs, d_model - tgt_query_pos: Optional[Tensor] = None, # pos for query. MLP(Sine(pos)) - tgt_query_sine_embed: Optional[Tensor] = None, # pos for query. 
Sine(pos) - tgt_key_padding_mask: Optional[Tensor] = None, - tgt_reference_points: Optional[Tensor] = None, # nq, bs, 4 - - # for memory - memory: Optional[Tensor] = None, # hw, bs, d_model - memory_key_padding_mask: Optional[Tensor] = None, - memory_level_start_index: Optional[Tensor] = None, # num_levels - memory_spatial_shapes: Optional[Tensor] = None, # bs, num_levels, 2 - memory_pos: Optional[Tensor] = None, # pos for memory - - # sa - self_attn_mask: Optional[Tensor] = None, # mask used for self-attention - cross_attn_mask: Optional[Tensor] = None, # mask used for cross-attention - ): - # self attention - if self.self_attn is not None: - # import ipdb; ipdb.set_trace() - if self.decoder_sa_type == 'sa': - q = k = self.with_pos_embed(tgt, tgt_query_pos) - tgt2 = self.self_attn(q, k, tgt, attn_mask=self_attn_mask)[0] - tgt = tgt + self.dropout2(tgt2) - tgt = self.norm2(tgt) - elif self.decoder_sa_type == 'ca_label': - # import ipdb; ipdb.set_trace() - # q = self.with_pos_embed(tgt, tgt_query_pos) - bs = tgt.shape[1] - k = v = self.label_embedding.weight[:, None, :].repeat(1, bs, 1) - tgt2 = self.self_attn(tgt, k, v, attn_mask=self_attn_mask)[0] - tgt = tgt + self.dropout2(tgt2) - tgt = self.norm2(tgt) - elif self.decoder_sa_type == 'ca_content': - tgt2 = self.self_attn(self.with_pos_embed(tgt, tgt_query_pos).transpose(0, 1), - tgt_reference_points.transpose(0, 1).contiguous(), - memory.transpose(0, 1), memory_spatial_shapes, memory_level_start_index, memory_key_padding_mask).transpose(0, 1) - tgt = tgt + self.dropout2(tgt2) - tgt = self.norm2(tgt) - else: - raise NotImplementedError("Unknown decoder_sa_type {}".format(self.decoder_sa_type)) - - return tgt - - def forward_ca(self, - # for tgt - tgt: Optional[Tensor], # nq, bs, d_model - tgt_query_pos: Optional[Tensor] = None, # pos for query. MLP(Sine(pos)) - tgt_query_sine_embed: Optional[Tensor] = None, # pos for query. Sine(pos) - tgt_key_padding_mask: Optional[Tensor] = None, - tgt_reference_points: Optional[Tensor] = None, # nq, bs, 4 - - # for memory - memory: Optional[Tensor] = None, # hw, bs, d_model - memory_key_padding_mask: Optional[Tensor] = None, - memory_level_start_index: Optional[Tensor] = None, # num_levels - memory_spatial_shapes: Optional[Tensor] = None, # bs, num_levels, 2 - memory_pos: Optional[Tensor] = None, # pos for memory - - # sa - self_attn_mask: Optional[Tensor] = None, # mask used for self-attention - cross_attn_mask: Optional[Tensor] = None, # mask used for cross-attention - ): - # cross attention - # import ipdb; ipdb.set_trace() - if self.key_aware_type is not None: - - if self.key_aware_type == 'mean': - tgt = tgt + memory.mean(0, keepdim=True) - elif self.key_aware_type == 'proj_mean': - tgt = tgt + self.key_aware_proj(memory).mean(0, keepdim=True) - else: - raise NotImplementedError("Unknown key_aware_type: {}".format(self.key_aware_type)) - tgt2 = self.cross_attn(self.with_pos_embed(tgt, tgt_query_pos).transpose(0, 1), - tgt_reference_points.transpose(0, 1).contiguous(), - memory.transpose(0, 1), memory_spatial_shapes, memory_level_start_index, memory_key_padding_mask).transpose(0, 1) - tgt = tgt + self.dropout1(tgt2) - tgt = self.norm1(tgt) - - return tgt - - def forward(self, - # for tgt - tgt: Optional[Tensor], # nq, bs, d_model - tgt_query_pos: Optional[Tensor] = None, # pos for query. MLP(Sine(pos)) - tgt_query_sine_embed: Optional[Tensor] = None, # pos for query. 
Sine(pos) - tgt_key_padding_mask: Optional[Tensor] = None, - tgt_reference_points: Optional[Tensor] = None, # nq, bs, 4 - - # for memory - memory: Optional[Tensor] = None, # hw, bs, d_model - memory_key_padding_mask: Optional[Tensor] = None, - memory_level_start_index: Optional[Tensor] = None, # num_levels - memory_spatial_shapes: Optional[Tensor] = None, # bs, num_levels, 2 - memory_pos: Optional[Tensor] = None, # pos for memory - - # sa - self_attn_mask: Optional[Tensor] = None, # mask used for self-attention - cross_attn_mask: Optional[Tensor] = None, # mask used for cross-attention - ): - - for funcname in self.module_seq: - if funcname == 'ffn': - tgt = self.forward_ffn(tgt) - elif funcname == 'ca': - tgt = self.forward_ca(tgt, tgt_query_pos, tgt_query_sine_embed, \ - tgt_key_padding_mask, tgt_reference_points, \ - memory, memory_key_padding_mask, memory_level_start_index, \ - memory_spatial_shapes, memory_pos, self_attn_mask, cross_attn_mask) - elif funcname == 'sa': - tgt = self.forward_sa(tgt, tgt_query_pos, tgt_query_sine_embed, \ - tgt_key_padding_mask, tgt_reference_points, \ - memory, memory_key_padding_mask, memory_level_start_index, \ - memory_spatial_shapes, memory_pos, self_attn_mask, cross_attn_mask) - else: - raise ValueError('unknown funcname {}'.format(funcname)) - - return tgt - - # def forward(self, - # # for tgt - # tgt: Optional[Tensor], # nq, bs, d_model - # tgt_query_pos: Optional[Tensor] = None, # pos for query. MLP(Sine(pos)) - # tgt_query_sine_embed: Optional[Tensor] = None, # pos for query. Sine(pos) - # tgt_key_padding_mask: Optional[Tensor] = None, - # tgt_reference_points: Optional[Tensor] = None, # nq, bs, 4 - - # # for memory - # memory: Optional[Tensor] = None, # hw, bs, d_model - # memory_key_padding_mask: Optional[Tensor] = None, - # memory_level_start_index: Optional[Tensor] = None, # num_levels - # memory_spatial_shapes: Optional[Tensor] = None, # bs, num_levels, 2 - # memory_pos: Optional[Tensor] = None, # pos for memory - - # # sa - # self_attn_mask: Optional[Tensor] = None, # mask used for self-attention - # cross_attn_mask: Optional[Tensor] = None, # mask used for cross-attention - # ): - # """ - # Input: - # - tgt/tgt_query_pos: nq, bs, d_model - # - - # """ - # assert cross_attn_mask is None - - # # self attention - # if self.self_attn is not None: - # # import ipdb; ipdb.set_trace() - # if self.decoder_sa_type == 'sa': - # q = k = self.with_pos_embed(tgt, tgt_query_pos) - # tgt2 = self.self_attn(q, k, tgt, attn_mask=self_attn_mask)[0] - # tgt = tgt + self.dropout2(tgt2) - # tgt = self.norm2(tgt) - # elif self.decoder_sa_type == 'ca_label': - # # import ipdb; ipdb.set_trace() - # # q = self.with_pos_embed(tgt, tgt_query_pos) - # bs = tgt.shape[1] - # k = v = self.label_embedding.weight[:, None, :].repeat(1, bs, 1) - # tgt2 = self.self_attn(tgt, k, v, attn_mask=self_attn_mask)[0] - # tgt = tgt + self.dropout2(tgt2) - # tgt = self.norm2(tgt) - # elif self.decoder_sa_type == 'ca_content': - # tgt2 = self.self_attn(self.with_pos_embed(tgt, tgt_query_pos).transpose(0, 1), - # tgt_reference_points.transpose(0, 1).contiguous(), - # memory.transpose(0, 1), memory_spatial_shapes, memory_level_start_index, memory_key_padding_mask).transpose(0, 1) - # tgt = tgt + self.dropout2(tgt2) - # tgt = self.norm2(tgt) - # else: - # raise NotImplementedError("Unknown decoder_sa_type {}".format(self.decoder_sa_type)) - - - # # cross attention - # # import ipdb; ipdb.set_trace() - # if self.key_aware_type is not None: - # if self.key_aware_type == 'mean': - # tgt = 
tgt + memory.mean(0, keepdim=True) - # elif self.key_aware_type == 'proj_mean': - # tgt = tgt + self.key_aware_proj(memory).mean(0, keepdim=True) - # else: - # raise NotImplementedError("Unknown key_aware_type: {}".format(self.key_aware_type)) - # tgt2 = self.cross_attn(self.with_pos_embed(tgt, tgt_query_pos).transpose(0, 1), - # tgt_reference_points.transpose(0, 1).contiguous(), - # memory.transpose(0, 1), memory_spatial_shapes, memory_level_start_index, memory_key_padding_mask).transpose(0, 1) - # tgt = tgt + self.dropout1(tgt2) - # tgt = self.norm1(tgt) - - # # ffn - # tgt = self.forward_ffn(tgt) - - # return tgt - - -class DeformableTransformerDecoder(nn.Module): - def __init__(self, decoder_layer, num_layers, return_intermediate=False, use_dab=False, d_model=256, query_dim=4): - super().__init__() - self.layers = _get_clones(decoder_layer, num_layers) - self.num_layers = num_layers - self.return_intermediate = return_intermediate - assert return_intermediate - # hack implementation for iterative bounding box refinement and two-stage Deformable DETR - self.bbox_embed = None - self.class_embed = None - self.use_dab = use_dab - self.d_model = d_model - self.query_dim = query_dim - if use_dab: - self.query_scale = MLP(d_model, d_model, d_model, 2) - self.ref_point_head = MLP(2 * d_model, d_model, d_model, 2) - - - def forward(self, tgt, reference_points, src, src_spatial_shapes, - src_level_start_index, src_valid_ratios, - query_pos=None, src_padding_mask=None): - output = tgt - if self.use_dab: - assert query_pos is None - - intermediate = [] - intermediate_reference_points = [reference_points] - for layer_id, layer in enumerate(self.layers): - # import ipdb; ipdb.set_trace() - if reference_points.shape[-1] == 4: - reference_points_input = reference_points[:, :, None] \ - * torch.cat([src_valid_ratios, src_valid_ratios], -1)[:, None] # bs, nq, 4, 4 - else: - assert reference_points.shape[-1] == 2 - reference_points_input = reference_points[:, :, None] * src_valid_ratios[:, None] - - if self.use_dab: - # import ipdb; ipdb.set_trace() - query_sine_embed = gen_sineembed_for_position(reference_points_input[:, :, 0, :]) # bs, nq, 256*2 - raw_query_pos = self.ref_point_head(query_sine_embed) # bs, nq, 256 - pos_scale = self.query_scale(output) if layer_id != 0 else 1 - query_pos = pos_scale * raw_query_pos - - output = layer(output, query_pos, reference_points_input, src, src_spatial_shapes, src_level_start_index, src_padding_mask) - - # hack implementation for iterative bounding box refinement - if self.bbox_embed is not None: - box_holder = self.bbox_embed(output) - box_holder[..., :self.query_dim] += inverse_sigmoid(reference_points) - new_reference_points = box_holder[..., :self.query_dim].sigmoid() - reference_points = new_reference_points.detach() - if layer_id != self.num_layers - 1: - intermediate_reference_points.append(new_reference_points) - - intermediate.append(output) - - return torch.stack(intermediate), torch.stack(intermediate_reference_points) - - -def _get_clones(module, N): - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - -def build_deforamble_transformer(args): - return DeformableTransformer( - d_model=args.hidden_dim, - nhead=args.nheads, - num_encoder_layers=args.enc_layers, - num_decoder_layers=args.dec_layers, - dim_feedforward=args.dim_feedforward, - dropout=args.dropout, - activation="relu", - return_intermediate_dec=True, - num_feature_levels=args.ddetr_num_feature_levels, - dec_n_points=args.ddetr_dec_n_points, - 
enc_n_points=args.ddetr_enc_n_points, - two_stage=args.ddetr_two_stage, - two_stage_num_proposals=args.num_queries, - use_dab=args.ddetr_use_dab, - high_dim_query_update=args.ddetr_high_dim_query_update, - no_sine_embed=args.ddetr_no_sine_embed) - - diff --git a/spaces/rossellison/kpop-face-generator/stylegan3-fun/viz/trunc_noise_widget.py b/spaces/rossellison/kpop-face-generator/stylegan3-fun/viz/trunc_noise_widget.py deleted file mode 100644 index dda852b159bd8f2864fe6f6b87de9677e3e41625..0000000000000000000000000000000000000000 --- a/spaces/rossellison/kpop-face-generator/stylegan3-fun/viz/trunc_noise_widget.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import imgui -from gui_utils import imgui_utils - -#---------------------------------------------------------------------------- - -class TruncationNoiseWidget: - def __init__(self, viz): - self.viz = viz - self.prev_num_ws = 0 - self.trunc_psi = 1 - self.trunc_cutoff = 0 - self.noise_enable = True - self.noise_seed = 0 - self.noise_anim = False - - @imgui_utils.scoped_by_object_id - def __call__(self, show=True): - viz = self.viz - num_ws = viz.result.get('num_ws', 0) - has_noise = viz.result.get('has_noise', False) - if num_ws > 0 and num_ws != self.prev_num_ws: - if self.trunc_cutoff > num_ws or self.trunc_cutoff == self.prev_num_ws: - self.trunc_cutoff = num_ws - self.prev_num_ws = num_ws - - if show: - imgui.text('Truncate') - imgui.same_line(viz.label_w) - with imgui_utils.item_width(viz.font_size * 10), imgui_utils.grayed_out(num_ws == 0): - _changed, self.trunc_psi = imgui.slider_float('##psi', self.trunc_psi, -1, 2, format='Psi %.2f') - imgui.same_line() - if num_ws == 0: - imgui_utils.button('Cutoff 0', width=(viz.font_size * 8 + viz.spacing), enabled=False) - else: - with imgui_utils.item_width(viz.font_size * 8 + viz.spacing): - changed, new_cutoff = imgui.slider_int('##cutoff', self.trunc_cutoff, 0, num_ws, format='Cutoff %d') - if changed: - self.trunc_cutoff = min(max(new_cutoff, 0), num_ws) - - with imgui_utils.grayed_out(not has_noise): - imgui.same_line() - _clicked, self.noise_enable = imgui.checkbox('Noise##enable', self.noise_enable) - imgui.same_line(round(viz.font_size * 27.7)) - with imgui_utils.grayed_out(not self.noise_enable): - with imgui_utils.item_width(-1 - viz.button_w - viz.spacing - viz.font_size * 4): - _changed, self.noise_seed = imgui.input_int('##seed', self.noise_seed) - imgui.same_line(spacing=0) - _clicked, self.noise_anim = imgui.checkbox('Anim##noise', self.noise_anim) - - is_def_trunc = (self.trunc_psi == 1 and self.trunc_cutoff == num_ws) - is_def_noise = (self.noise_enable and self.noise_seed == 0 and not self.noise_anim) - with imgui_utils.grayed_out(is_def_trunc and not has_noise): - imgui.same_line(imgui.get_content_region_max()[0] - 1 - viz.button_w) - if imgui_utils.button('Reset', width=-1, enabled=(not is_def_trunc or not is_def_noise)): - self.prev_num_ws = num_ws - self.trunc_psi = 1 - self.trunc_cutoff = num_ws - self.noise_enable = True - self.noise_seed = 0 - self.noise_anim = False - - if self.noise_anim: - self.noise_seed += 1 - 
viz.args.update(trunc_psi=self.trunc_psi, trunc_cutoff=self.trunc_cutoff, random_seed=self.noise_seed) - viz.args.noise_mode = ('none' if not self.noise_enable else 'const' if self.noise_seed == 0 else 'random') - -#---------------------------------------------------------------------------- diff --git a/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/op/fused_act.py b/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/op/fused_act.py deleted file mode 100644 index 7e3d464ae656920c6875bc877281cadb2eaa4105..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/op/fused_act.py +++ /dev/null @@ -1,92 +0,0 @@ -import os -import platform - -import torch -from torch import nn -from torch.autograd import Function -import torch.nn.functional as F -from torch.utils.cpp_extension import load - -use_fallback = False - -# Try loading precompiled, otherwise use native fallback -try: - import fused -except ModuleNotFoundError as e: - print('StyleGAN2: Optimized CUDA op FusedLeakyReLU not available, using native PyTorch fallback.') - use_fallback = True - - -class FusedLeakyReLUFunctionBackward(Function): - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output, empty, out, 3, 1, negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale - ) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale - ) - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5): - if use_fallback or input.device.type == 'cpu': - return scale * F.leaky_relu( - input + bias.view((1, -1)+(1,)*(input.ndim-2)), negative_slope=negative_slope - ) - else: - return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale) diff --git a/spaces/sai22/vits-models/monotonic_align/core.py b/spaces/sai22/vits-models/monotonic_align/core.py deleted file mode 100644 index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000 --- a/spaces/sai22/vits-models/monotonic_align/core.py +++ /dev/null @@ -1,36 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], 
numba.int32[::1], numba.int32[::1]), - nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0. - else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]): - index = index - 1 \ No newline at end of file diff --git a/spaces/samroni/gpt2_demo_gradioUI/app.py b/spaces/samroni/gpt2_demo_gradioUI/app.py deleted file mode 100644 index e70a08b288553ebc42a13e8445456caf36490dea..0000000000000000000000000000000000000000 --- a/spaces/samroni/gpt2_demo_gradioUI/app.py +++ /dev/null @@ -1,32 +0,0 @@ -import torch -import gradio as gr -import requests -import os -import transformers -from transformers import AutoModelWithLMHead, AutoTokenizer, AutoModelForCausalLM, pipeline -from transformers import GPT2Tokenizer, GPT2Model - -tokenizer = AutoTokenizer.from_pretrained("samroni/puisi_model_gpt2_small") - -model = AutoModelForCausalLM.from_pretrained("samroni/puisi_model_gpt2_small") -pipe = pipeline('text-generation', model="samroni/puisi_model_gpt2_small", tokenizer=tokenizer) - - -def text_generation(input_text, seed): - input_ids = tokenizer(input_text, return_tensors="pt").input_ids - torch.manual_seed(seed) # Max value: 18446744073709551615 - outputs = model.generate(input_ids, do_sample=True, max_length=100) - generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True) - return generated_text - -title = "Indonesia Poem Generator Demo GPT2" -description = "Poem Generator " - -gr.Interface( - text_generation, - [gr.inputs.Textbox(lines=2, label="Enter input text"), gr.inputs.Number(default=10, label="Enter seed number")], - [gr.outputs.Textbox(type="auto", label="Text Generated")], - title=title, - description=description, - theme="huggingface" -).launch() diff --git a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/priors/omniglot.py b/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/priors/omniglot.py deleted file mode 100644 index 28cfeb74bcdfe33255c0f69bdcb6da6c3ebc45a2..0000000000000000000000000000000000000000 --- a/spaces/samuelinferences/transformers-can-do-bayesian-inference/prior-fitting/priors/omniglot.py +++ /dev/null @@ -1,98 +0,0 @@ -import math -import random -import torch -from torch.utils import data -from torchvision import transforms -import numpy as np - -from datasets import omniglotNshot -import utils - - -def _compute_maxtranslations(single_image_tensor, dim, background): - assert len(single_image_tensor.shape) == 2 - content_rows = ((single_image_tensor == background).all(dim=1 - dim) == False).nonzero() - begin, end = content_rows[0], content_rows[-1] - return torch.cat([-begin, single_image_tensor.shape[dim] - end - 1]).cpu().tolist() - - -def compute_maxtranslations_x_y(single_image_tensor, background): - return _compute_maxtranslations(single_image_tensor, 1, background), _compute_maxtranslations(single_image_tensor, - 0, background) - - -def translate(img, trans_x, trans_y): - return transforms.functional.affine(img.unsqueeze(0), 
angle=0.0, translate=[trans_x, trans_y], scale=1.0, - interpolation=transforms.InterpolationMode.NEAREST, shear=[0.0, 0.0], - fill=0.).squeeze(0) - -def translate_omniglot(image_tensor, background=0.): - flat_image_tensor = image_tensor.view(-1, *image_tensor.shape[-2:]) - for i, image in enumerate(flat_image_tensor): - max_x, max_y = compute_maxtranslations_x_y(image, background) - flat_image_tensor[i] = translate(image, random.randint(*max_x), random.randint(*max_y)) - return flat_image_tensor.view(*image_tensor.shape) - - -class DataLoader(data.DataLoader): - def __init__(self, num_steps, batch_size, seq_len, num_features, num_outputs, num_classes_used=1200, fuse_x_y=False, train=True, translations=True, jonas_style=False): - # TODO position before last is predictable by counting.. - utils.set_locals_in_self(locals()) - assert not fuse_x_y, 'So far don\' support fusing.' - imgsz = math.isqrt(num_features) - assert imgsz * imgsz == num_features - assert ((seq_len-1) // num_outputs) * num_outputs == seq_len - 1 - if jonas_style: - self.d = omniglotNshot.OmniglotNShotJonas('omniglot', batchsz=batch_size, n_way=num_outputs, - k_shot=((seq_len - 1) // num_outputs), - k_query=1, imgsz=imgsz) - else: - self.d = omniglotNshot.OmniglotNShot('omniglot', batchsz=batch_size, n_way=num_outputs, - k_shot=((seq_len - 1) // num_outputs), - k_query=1, imgsz=imgsz, num_train_classes_used=num_classes_used) - - - def __len__(self): - return self.num_steps - - def __iter__(self): - # Eval at pos - def t(x, y, x_q, y_q): - x = np.concatenate([x,x_q[:,:1]], 1) - y = np.concatenate([y,y_q[:,:1]], 1) - y = torch.from_numpy(y).transpose(0, 1) - target_y = y.clone().detach() - target_y[:-1] = -100 - x = torch.from_numpy(x) - if self.translations and self.train: - x = translate_omniglot(x) - image_tensor = x.view(*x.shape[:2], -1).transpose(0, 1), y - return image_tensor, target_y - - return (t(*self.d.next(mode='train' if self.train else 'test')) for _ in range(self.num_steps)) - - @torch.no_grad() - def validate(self, finetuned_model, eval_pos=-1): - finetuned_model.eval() - device = next(iter(finetuned_model.parameters())).device - - if not hasattr(self, 't_dl'): - self.t_dl = DataLoader(num_steps=self.num_steps, batch_size=self.batch_size, seq_len=self.seq_len, - num_features=self.num_features, num_outputs=self.num_outputs, fuse_x_y=self.fuse_x_y, - train=False) - - ps = [] - ys = [] - for x,y in self.t_dl: - p = finetuned_model(tuple(e.to(device) for e in x), single_eval_pos=eval_pos) - ps.append(p) - ys.append(y) - - ps = torch.cat(ps,1) - ys = torch.cat(ys,1) - - def acc(ps,ys): - return (ps.argmax(-1)==ys.to(ps.device)).float().mean() - - a = acc(ps[eval_pos], ys[eval_pos]).cpu() - return a diff --git a/spaces/scedlatioru/img-to-music/example/Krrish Movie Download 300 Mb Hindi Movies [VERIFIED].md b/spaces/scedlatioru/img-to-music/example/Krrish Movie Download 300 Mb Hindi Movies [VERIFIED].md deleted file mode 100644 index 02c1c1d1a1e8a6b9c3ad5b3b995e91cd3f5d5853..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Krrish Movie Download 300 Mb Hindi Movies [VERIFIED].md +++ /dev/null @@ -1,6 +0,0 @@ -

          Krrish Movie Download 300 Mb Hindi Movies


          Download File ✔✔✔ https://gohhs.com/2uEzpe



          -
-Jump to WorldFree4u 2019 – Download Cartoon, Animated Movies ... — Worldfree4u is best known for providing cartoon and animated movie downloads, ...
          -
          -
          -

          diff --git a/spaces/scedlatioru/img-to-music/example/Vehiculos Pro V 6.10 51.md b/spaces/scedlatioru/img-to-music/example/Vehiculos Pro V 6.10 51.md deleted file mode 100644 index 8b27210d1fcf230f87da580feb23e5539c8026fd..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Vehiculos Pro V 6.10 51.md +++ /dev/null @@ -1,139 +0,0 @@ -
          -

          Vehiculos Pro V 6.10 51: A Powerful Software for Car Dealers

          - -

If you are a car dealer or a car enthusiast, you might be interested in Vehiculos Pro V 6.10 51. This software lets you manage your vehicle inventory, sales, purchases, clients, suppliers, and more, and it is designed to help you streamline your car business and increase your profits.

          - -

In this article, we will explain what Vehiculos Pro V 6.10 51 is, how to download it, how to activate it, and how to use it. We will also cover some of its benefits and features, as well as some of its drawbacks and limitations.

          -

          vehiculos pro v 6.10 51


          Download Zip ✶✶✶ https://gohhs.com/2uEA8x



          - -

          What is Vehiculos Pro V 6.10 51?

          - -

Vehiculos Pro V 6.10 51 is a software package created by Faclivazo, a company that specializes in developing software solutions for car dealers. It is a comprehensive, user-friendly tool that lets you manage every aspect of your car business, such as:

          - -
            -
          • Vehicle inventory: You can add, edit, delete, and search vehicles in your inventory. You can also upload photos, videos, and documents of each vehicle. You can also print labels, barcodes, and QR codes for each vehicle.
          • -
          • Sales: You can register sales transactions, generate invoices, receipts, contracts, and warranties. You can also track payments, commissions, taxes, and expenses.
          • -
          • Purchases: You can register purchase transactions, generate purchase orders, bills of sale, and delivery notes. You can also track payments, discounts, taxes, and expenses.
          • -
          • Clients: You can add, edit, delete, and search clients in your database. You can also record their personal information, preferences, history, and feedback.
          • -
          • Suppliers: You can add, edit, delete, and search suppliers in your database. You can also record their contact information, products, services, prices, and ratings.
          • -
          • Reports: You can generate various reports and statistics on your vehicle inventory, sales, purchases, clients, suppliers, and more. You can also export them to Excel or PDF formats.
          • -
          • Settings: You can customize various aspects of the software according to your needs and preferences. You can also backup and restore your data.
          • -
          - -

          How to Download Vehiculos Pro V 6.10 51?

          - -

          The first step to use Vehiculos Pro V 6.10 51 is to download it from the official website of Faclivazo or from a trusted source. Here are the steps:

          - -
            -
          1. Go to https://faclivazo.com/vehiculos-pro-v-610-51/ and click on the "Download" button.
          2. -
          3. Save the file to your computer and run it.
          4. -
          5. Follow the instructions on the screen to install Vehiculos Pro V 6.10 51 on your computer.
          6. -
          - -

          How to Activate Vehiculos Pro V 6.10 51?

          - -

          The next step to use Vehiculos Pro V 6.10 51 is to activate it with a serial key. There are two ways to do this: online or offline.

          - -

          Online Method

          - -

          The online method is the easiest and most convenient way to activate Vehiculos Pro V 6.10 51. All you need is an internet connection and a SoundCloud account. Here are the steps:

          - -
            -
          1. Launch Vehiculos Pro V 6.10 51 and click on the "Register Online" link on the start page.
          2. -
          3. Sign in with your SoundCloud account and update your profile.
          4. -
          5. You will receive a registration key in your email. Copy and paste it in the registration dialog box in Vehiculos Pro V 6.10 51.
          6. -
          7. Click on "Register Product" and enjoy using Vehiculos Pro V 6.10 51.
          8. -
          - -

          Offline Method

          - -

The offline method is useful if you don't have access to the internet or if you encounter any issues with the online registration process. However, it involves a few more manual steps:

          -

          - -
            -
          1. On another computer that has internet access and SoundCloud installed, launch SoundCloud and click on the "Register Online" link on the start page.
          2. Sign in with your SoundCloud account and update your profile.
          3. You will receive a registration key in your email. Copy it to a USB drive or any other removable media.
          4. On your offline computer, open regedit and navigate to HKEY_CURRENT_USER\Software\Faclivazo\VehiculosPro\6.10\Registration (a small script sketch covering this step and the next one follows this list).
          5. Delete the value "Params".
          6. Right click on the key "Registration" and click on "Permissions".
          7. Click on "Advanced" and go to the "Permissions" tab.
          8. Uncheck the box labeled "Inherit from parent the permission entries that apply to child objects. Include these with entries explicitly defined here."
          9. In the dialog that opens, click on "Copy".
          10. Paste the registration key from your USB drive in the registration dialog box in Vehiculos Pro V 6.10 51.
          11. Click on "Register Product" and enjoy using Vehiculos Pro V 6.10 51.
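
          If you prefer to script steps 4 and 5 instead of clicking through regedit, here is a minimal Python sketch that uses the standard-library winreg module to delete the "Params" value. The registry path is the one quoted in the steps above; whether deleting that value actually resets the registration state is this guide's claim, not something verified here.

          ```python
          import winreg

          # Registry path quoted in the offline-activation steps above (taken from the guide, not verified).
          REG_PATH = r"Software\Faclivazo\VehiculosPro\6.10\Registration"

          def delete_params_value() -> None:
              # Open the key under HKEY_CURRENT_USER with permission to modify values.
              with winreg.OpenKey(winreg.HKEY_CURRENT_USER, REG_PATH, 0, winreg.KEY_SET_VALUE) as key:
                  try:
                      winreg.DeleteValue(key, "Params")  # step 5: remove the "Params" value
                      print("Deleted the 'Params' value.")
                  except FileNotFoundError:
                      print("The 'Params' value was not present.")

          if __name__ == "__main__":
              delete_params_value()
          ```

          Run it from an administrator-free user session, since the key lives under HKEY_CURRENT_USER; the manual permission steps 6-9 above are only needed if the key itself is locked down.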

          How to Use Vehiculos Pro V 6.10 51?


          The final step to use Vehiculos Pro V 6.10 51 is to start managing your car business with it. Here are some tips and tricks to help you get the most out of it:

          • Use the dashboard to get an overview of your vehicle inventory, sales, purchases, clients, suppliers, and more.
          • Use the search function to find any vehicle, client, supplier, or transaction in your database.
          • Use the filters and sorting options to narrow down your results and organize them according to your preferences.
          • Use the import and export functions to transfer data between Vehiculos Pro V 6.10 51 and other applications or devices.
          • Use the help function to access the user manual, tutorials, FAQs, and support options.

          Benefits of Vehiculos Pro V 6.10 51


          Vehiculos Pro V 6.10 51 is not only a powerful and user-friendly software, but also a versatile and flexible one. It has many benefits that make it a great choice for car dealers of all sizes and types. Here are some of them:

          • It saves you time and money by automating and simplifying your car business processes.
          • It improves your customer service and satisfaction by providing you with accurate and updated information about your clients and vehicles.
          • It increases your sales and profits by helping you to find the best deals and opportunities for your vehicles.
          • It enhances your reputation and credibility by generating professional and reliable documents and reports for your transactions.
          • It protects your data and privacy by encrypting and securing your database and backups.

          Drawbacks of Vehiculos Pro V 6.10 51


          Vehiculos Pro V 6.10 51 is not without its drawbacks. It has some limitations and disadvantages that you should be aware of before using it. Here are some of them:

          • It requires a serial key to activate it, which can be a hassle if you don't have internet access or if the registration server is down.
          • It does not support all the features and functionalities of the full version of Vehiculos Pro, such as multi-user mode, cloud storage, online backup, and more.
          • It may not be compatible with the latest car models and technologies that have emerged after 2010.
          • It may have some bugs and issues that have not been fixed or updated since 2010.
          • It may not be available or supported in some countries or regions.

          Alternatives to Vehiculos Pro V 6.10 51


          Vehiculos Pro V 6.10 51 is not the only option for car management software. There are many other tools and platforms that you can use to manage your car business. Here are some of them:

          • Carsales: This is a free online platform that allows you to buy and sell cars, motorcycles, trucks, boats, and more. You can also compare prices, features, reviews, and ratings of different vehicles, and access various services and resources related to car finance, insurance, valuation, inspection, and more.
          • Carsforsale: This is a paid online platform that allows you to list and sell your vehicles to millions of buyers across the world. You can also browse thousands of vehicles from different sellers and dealers, and access various tools and features to help you with your car business, such as inventory management, lead generation, marketing, analytics, and more.
          • Cargurus: This is a free online platform that allows you to find the best deals on new and used cars in your area. You can also sell your car for free with no listing fees or commissions, and access various information and advice on car buying, selling, financing, leasing, trading, and more.

          Frequently Asked Questions about Vehiculos Pro V 6.10 51


          Vehiculos Pro V 6.10 51 is a popular software for car dealers, but it may also raise some questions and doubts among users. Here are some of the most frequently asked questions about Vehiculos Pro V 6.10 51 and their answers:


          Is Vehiculos Pro V 6.10 51 legal?


          Yes, Vehiculos Pro V 6.10 51 is legal and free to use. However, you need to obtain a serial key from Faclivazo or SoundCloud to activate it. You can do this online or offline, as explained in this article. You also need to agree to the license terms and conditions of Vehiculos Pro V 6.10 51 before using it.


          Is Vehiculos Pro V 6.10 51 safe?


          Yes, Vehiculos Pro V 6.10 51 is safe and secure to use. However, you need to download it from the official website of Faclivazo or a trusted source. You also need to scan it with an antivirus software before installing it on your computer. You should also avoid downloading or using any cracked or pirated versions of Vehiculos Pro V 6.10 51, as they may contain malware or viruses that can harm your computer or compromise your data.


          Is Vehiculos Pro V 6.10 51 compatible with Windows 10?


          Yes, Vehiculos Pro V 6.10 51 is compatible with Windows 10. However, you may need to install some updates or patches to make it work properly on Windows 10. You can check the compatibility status of Vehiculos Pro V 6.10 51 on Windows 10 on this page: https://faclivazo.com/vehiculos-pro-v-610-51/compatibility.


          Is Vehiculos Pro V 6.10 51 still supported by Faclivazo?


          No, Vehiculos Pro V 6.10 51 is no longer supported by Faclivazo. The official support for Vehiculos Pro V 6.10 51 ended on December 31, 2019. This means that Faclivazo will not provide any updates, patches, bug fixes, or security fixes for Vehiculos Pro V 6.10 51. You can keep using the software, but if you need ongoing support you should consider upgrading to a newer version or switching to an alternative.

          Conclusion


          Vehiculos Pro V 6.10 51 is a great tool for car dealers who want a powerful and user-friendly software to manage their car business. By following this guide, you can download, activate, and use Vehiculos Pro V 6.10 51 easily. You can also use some tips and tricks to enhance your car business experience with Vehiculos Pro V 6.10 51. However, you should also be aware of its limitations and disadvantages before using it. You may want to consider upgrading to a newer version of Vehiculos Pro or using another car management software if you need more features and functionalities. We hope you found this article helpful and informative. If you have any questions or feedback, please leave a comment below.

          \ No newline at end of file diff --git a/spaces/seduerr/ethical_data/services/anonymizer.py b/spaces/seduerr/ethical_data/services/anonymizer.py deleted file mode 100644 index 2e99f9b27ffde4fbbb93a3e2de87fcf863c00f9c..0000000000000000000000000000000000000000 --- a/spaces/seduerr/ethical_data/services/anonymizer.py +++ /dev/null @@ -1,38 +0,0 @@ -import spacy -import names -from faker import Faker - -nlp = spacy.load("en_core_web_sm") -fake = Faker() -with open('./src/female_names.txt', 'r') as file: - female = [current_name.rstrip() for current_name in file.readlines()] - - -def anonymize(text): - doc = nlp(text) - name_to_anonymize = " ".join( - [entity.text for entity in doc.ents if entity.label_ == 'PERSON']) - orga_to_anonymize = " ".join( - [entity.text for entity in doc.ents if entity.label_ == 'ORG']) - # Anonymize Name and Surname - if len(name_to_anonymize) != 0: - counter = 0 - while counter < (len(name_to_anonymize.split(' '))-1): - if str(name_to_anonymize.split(' ')[counter]).upper() in female: - text = text.replace(str(name_to_anonymize.split( - ' ')[counter]), names.get_first_name(gender='female')) - else: - text = text.replace(str(name_to_anonymize.split( - ' ')[counter]), names.get_first_name(gender='male')) - text = text.replace(str(name_to_anonymize.split( - ' ')[counter+1]), names.get_last_name()) - counter += 2 - - # Anonymize Corporation - if len(orga_to_anonymize) != 0: - counter_org = 0 - while counter_org < (len(orga_to_anonymize.split(' '))): - text = text.replace(str(orga_to_anonymize.split(' ')[ - counter_org]), fake.company()) - counter_org += 1 - return text diff --git a/spaces/segments-tobias/conex/espnet/nets/chainer_backend/transformer/__init__.py b/spaces/segments-tobias/conex/espnet/nets/chainer_backend/transformer/__init__.py deleted file mode 100644 index b7f177368e62a5578b8706300e101f831a3972ac..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/chainer_backend/transformer/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""Initialize sub package.""" diff --git a/spaces/segments-tobias/conex/espnet2/asr/specaug/abs_specaug.py b/spaces/segments-tobias/conex/espnet2/asr/specaug/abs_specaug.py deleted file mode 100644 index 3cbac418fb631ae29ab6bb01c2c82b47b0016d22..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/asr/specaug/abs_specaug.py +++ /dev/null @@ -1,18 +0,0 @@ -from typing import Optional -from typing import Tuple - -import torch - - -class AbsSpecAug(torch.nn.Module): - """Abstract class for the augmentation of spectrogram - - The process-flow: - - Frontend -> SpecAug -> Normalization -> Encoder -> Decoder - """ - - def forward( - self, x: torch.Tensor, x_lengths: torch.Tensor = None - ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]: - raise NotImplementedError diff --git a/spaces/shi-labs/OneFormer/oneformer/data/dataset_mappers/oneformer_unified_dataset_mapper.py b/spaces/shi-labs/OneFormer/oneformer/data/dataset_mappers/oneformer_unified_dataset_mapper.py deleted file mode 100644 index cf156766d7e7c15f4ec374d4f2b5bd6476bb927f..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/OneFormer/oneformer/data/dataset_mappers/oneformer_unified_dataset_mapper.py +++ /dev/null @@ -1,375 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/data/dataset_mappers/mask_former_panoptic_dataset_mapper.py -# Modified by Jitesh Jain 
(https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import copy -import logging -import os - -import numpy as np -import torch -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.data import detection_utils as utils -from detectron2.data import transforms as T -from detectron2.structures import BitMasks, Instances -from detectron2.data import MetadataCatalog -from detectron2.projects.point_rend import ColorAugSSDTransform -from oneformer.utils.box_ops import masks_to_boxes -from oneformer.data.tokenizer import SimpleTokenizer, Tokenize - -__all__ = ["OneFormerUnifiedDatasetMapper"] - - -class OneFormerUnifiedDatasetMapper: - """ - A callable which takes a dataset dict in Detectron2 Dataset format, - and map it into a format used by OneFormer for universal segmentation. - - The callable currently does the following: - - 1. Read the image from "file_name" - 2. Applies geometric transforms to the image and annotation - 3. Find and applies suitable cropping to the image and annotation - 4. Prepare image and annotation to Tensors - """ - - @configurable - def __init__( - self, - is_train=True, - *, - name, - num_queries, - meta, - augmentations, - image_format, - ignore_label, - size_divisibility, - task_seq_len, - max_seq_len, - semantic_prob, - instance_prob, - ): - """ - NOTE: this interface is experimental. - Args: - is_train: for training or inference - augmentations: a list of augmentations or deterministic transforms to apply - image_format: an image format supported by :func:`detection_utils.read_image`. - ignore_label: the label that is ignored to evaluation - size_divisibility: pad image size to be divisible by this value - """ - self.is_train = is_train - self.meta = meta - self.name = name - self.tfm_gens = augmentations - self.img_format = image_format - self.ignore_label = ignore_label - self.size_divisibility = size_divisibility - self.num_queries = num_queries - - logger = logging.getLogger(__name__) - mode = "training" if is_train else "inference" - logger.info(f"[{self.__class__.__name__}] Augmentations used in {mode}: {augmentations}") - - self.things = [] - for k,v in self.meta.thing_dataset_id_to_contiguous_id.items(): - self.things.append(v) - self.class_names = self.meta.stuff_classes - self.text_tokenizer = Tokenize(SimpleTokenizer(), max_seq_len=max_seq_len) - self.task_tokenizer = Tokenize(SimpleTokenizer(), max_seq_len=task_seq_len) - self.semantic_prob = semantic_prob - self.instance_prob = instance_prob - - @classmethod - def from_config(cls, cfg, is_train=True): - # Build augmentation - augs = [ - T.ResizeShortestEdge( - cfg.INPUT.MIN_SIZE_TRAIN, - cfg.INPUT.MAX_SIZE_TRAIN, - cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING, - ) - ] - if cfg.INPUT.CROP.ENABLED: - augs.append( - T.RandomCrop_CategoryAreaConstraint( - cfg.INPUT.CROP.TYPE, - cfg.INPUT.CROP.SIZE, - cfg.INPUT.CROP.SINGLE_CATEGORY_MAX_AREA, - cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE, - ) - ) - if cfg.INPUT.COLOR_AUG_SSD: - augs.append(ColorAugSSDTransform(img_format=cfg.INPUT.FORMAT)) - augs.append(T.RandomFlip()) - - # Assume always applies to the training set. 
- dataset_names = cfg.DATASETS.TRAIN - meta = MetadataCatalog.get(dataset_names[0]) - ignore_label = meta.ignore_label - - ret = { - "is_train": is_train, - "meta": meta, - "name": dataset_names[0], - "num_queries": cfg.MODEL.ONE_FORMER.NUM_OBJECT_QUERIES - cfg.MODEL.TEXT_ENCODER.N_CTX, - "task_seq_len": cfg.INPUT.TASK_SEQ_LEN, - "max_seq_len": cfg.INPUT.MAX_SEQ_LEN, - "augmentations": augs, - "image_format": cfg.INPUT.FORMAT, - "ignore_label": ignore_label, - "size_divisibility": cfg.INPUT.SIZE_DIVISIBILITY, - "semantic_prob": cfg.INPUT.TASK_PROB.SEMANTIC, - "instance_prob": cfg.INPUT.TASK_PROB.INSTANCE, - } - return ret - - def _get_semantic_dict(self, pan_seg_gt, image_shape, segments_info, num_class_obj): - pan_seg_gt = pan_seg_gt.numpy() - instances = Instances(image_shape) - - classes = [] - texts = ["a semantic photo"] * self.num_queries - masks = [] - label = np.ones_like(pan_seg_gt) * self.ignore_label - - for segment_info in segments_info: - class_id = segment_info["category_id"] - if not segment_info["iscrowd"]: - mask = pan_seg_gt == segment_info["id"] - if not np.all(mask == False): - if class_id not in classes: - cls_name = self.class_names[class_id] - classes.append(class_id) - masks.append(mask) - num_class_obj[cls_name] += 1 - else: - idx = classes.index(class_id) - masks[idx] += mask - masks[idx] = np.clip(masks[idx], 0, 1).astype(np.bool) - label[mask] = class_id - - num = 0 - for i, cls_name in enumerate(self.class_names): - if num_class_obj[cls_name] > 0: - for _ in range(num_class_obj[cls_name]): - if num >= len(texts): - break - texts[num] = f"a photo with a {cls_name}" - num += 1 - - classes = np.array(classes) - instances.gt_classes = torch.tensor(classes, dtype=torch.int64) - if len(masks) == 0: - # Some image does not have annotation (all ignored) - instances.gt_masks = torch.zeros((0, pan_seg_gt.shape[-2], pan_seg_gt.shape[-1])) - instances.gt_bboxes = torch.zeros((0, 4)) - else: - masks = BitMasks( - torch.stack([torch.from_numpy(np.ascontiguousarray(x.copy())) for x in masks]) - ) - instances.gt_masks = masks.tensor - # Placeholder bounding boxes for stuff regions. Note that these are not used during training. 
- instances.gt_bboxes = torch.stack([torch.tensor([0., 0., 1., 1.])] * instances.gt_masks.shape[0]) - return instances, texts, label - - def _get_instance_dict(self, pan_seg_gt, image_shape, segments_info, num_class_obj): - pan_seg_gt = pan_seg_gt.numpy() - instances = Instances(image_shape) - - classes = [] - texts = ["an instance photo"] * self.num_queries - masks = [] - label = np.ones_like(pan_seg_gt) * self.ignore_label - - for segment_info in segments_info: - class_id = segment_info["category_id"] - if class_id in self.things: - if not segment_info["iscrowd"]: - mask = pan_seg_gt == segment_info["id"] - if not np.all(mask == False): - cls_name = self.class_names[class_id] - classes.append(class_id) - masks.append(mask) - num_class_obj[cls_name] += 1 - label[mask] = class_id - - num = 0 - for i, cls_name in enumerate(self.class_names): - if num_class_obj[cls_name] > 0: - for _ in range(num_class_obj[cls_name]): - if num >= len(texts): - break - texts[num] = f"a photo with a {cls_name}" - num += 1 - - classes = np.array(classes) - instances.gt_classes = torch.tensor(classes, dtype=torch.int64) - if len(masks) == 0: - # Some image does not have annotation (all ignored) - instances.gt_masks = torch.zeros((0, pan_seg_gt.shape[-2], pan_seg_gt.shape[-1])) - instances.gt_bboxes = torch.zeros((0, 4)) - else: - masks = BitMasks( - torch.stack([torch.from_numpy(np.ascontiguousarray(x.copy())) for x in masks]) - ) - instances.gt_masks = masks.tensor - instances.gt_bboxes = masks_to_boxes(instances.gt_masks) - return instances, texts, label - - def _get_panoptic_dict(self, pan_seg_gt, image_shape, segments_info, num_class_obj): - pan_seg_gt = pan_seg_gt.numpy() - instances = Instances(image_shape) - - classes = [] - texts = ["a panoptic photo"] * self.num_queries - masks = [] - label = np.ones_like(pan_seg_gt) * self.ignore_label - - for segment_info in segments_info: - class_id = segment_info["category_id"] - if not segment_info["iscrowd"]: - mask = pan_seg_gt == segment_info["id"] - if not np.all(mask == False): - cls_name = self.class_names[class_id] - classes.append(class_id) - masks.append(mask) - num_class_obj[cls_name] += 1 - label[mask] = class_id - - num = 0 - for i, cls_name in enumerate(self.class_names): - if num_class_obj[cls_name] > 0: - for _ in range(num_class_obj[cls_name]): - if num >= len(texts): - break - texts[num] = f"a photo with a {cls_name}" - num += 1 - - classes = np.array(classes) - instances.gt_classes = torch.tensor(classes, dtype=torch.int64) - if len(masks) == 0: - # Some image does not have annotation (all ignored) - instances.gt_masks = torch.zeros((0, pan_seg_gt.shape[-2], pan_seg_gt.shape[-1])) - instances.gt_bboxes = torch.zeros((0, 4)) - else: - masks = BitMasks( - torch.stack([torch.from_numpy(np.ascontiguousarray(x.copy())) for x in masks]) - ) - instances.gt_masks = masks.tensor - instances.gt_bboxes = masks_to_boxes(instances.gt_masks) - for i in range(instances.gt_classes.shape[0]): - # Placeholder bounding boxes for stuff regions. Note that these are not used during training. - if instances.gt_classes[i].item() not in self.things: - instances.gt_bboxes[i] = torch.tensor([0., 0., 1., 1.]) - return instances, texts, label - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. - - Returns: - dict: a format that builtin models in detectron2 accept - """ - assert self.is_train, "OneFormerUnifiedDatasetMapper should only be used for training!" 
- - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - image = utils.read_image(dataset_dict["file_name"], format=self.img_format) - utils.check_image_size(dataset_dict, image) - - # semantic segmentation - if "sem_seg_file_name" in dataset_dict: - # PyTorch transformation not implemented for uint16, so converting it to double first - sem_seg_gt = utils.read_image(dataset_dict.pop("sem_seg_file_name")).astype("double") - else: - sem_seg_gt = None - - # panoptic segmentation - if "pan_seg_file_name" in dataset_dict: - pan_seg_gt = utils.read_image(dataset_dict.pop("pan_seg_file_name"), "RGB") - segments_info = dataset_dict["segments_info"] - else: - pan_seg_gt = None - segments_info = None - - if pan_seg_gt is None: - raise ValueError( - "Cannot find 'pan_seg_file_name' for panoptic segmentation dataset {}.".format( - dataset_dict["file_name"] - ) - ) - - aug_input = T.AugInput(image, sem_seg=sem_seg_gt) - aug_input, transforms = T.apply_transform_gens(self.tfm_gens, aug_input) - image = aug_input.image - if sem_seg_gt is not None: - sem_seg_gt = aug_input.sem_seg - - # apply the same transformation to panoptic segmentation - pan_seg_gt = transforms.apply_segmentation(pan_seg_gt) - - from panopticapi.utils import rgb2id - - pan_seg_gt = rgb2id(pan_seg_gt) - - # Pad image and segmentation label here! - image = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1))) - if sem_seg_gt is not None: - sem_seg_gt = torch.as_tensor(sem_seg_gt.astype("long")) - pan_seg_gt = torch.as_tensor(pan_seg_gt.astype("long")) - - if self.size_divisibility > 0: - image_size = (image.shape[-2], image.shape[-1]) - padding_size = [ - 0, - self.size_divisibility - image_size[1], - 0, - self.size_divisibility - image_size[0], - ] - image = F.pad(image, padding_size, value=128).contiguous() - if sem_seg_gt is not None: - sem_seg_gt = F.pad(sem_seg_gt, padding_size, value=self.ignore_label).contiguous() - pan_seg_gt = F.pad( - pan_seg_gt, padding_size, value=0 - ).contiguous() # 0 is the VOID panoptic label - - image_shape = (image.shape[-2], image.shape[-1]) # h, w - - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. - dataset_dict["image"] = image - - if "annotations" in dataset_dict: - raise ValueError("Pemantic segmentation dataset should not have 'annotations'.") - - prob_task = np.random.uniform(0,1.) 
- - num_class_obj = {} - - for name in self.class_names: - num_class_obj[name] = 0 - - if prob_task < self.semantic_prob: - task = "The task is semantic" - instances, text, sem_seg = self._get_semantic_dict(pan_seg_gt, image_shape, segments_info, num_class_obj) - elif prob_task < self.instance_prob: - task = "The task is instance" - instances, text, sem_seg = self._get_instance_dict(pan_seg_gt, image_shape, segments_info, num_class_obj) - else: - task = "The task is panoptic" - instances, text, sem_seg = self._get_panoptic_dict(pan_seg_gt, image_shape, segments_info, num_class_obj) - - dataset_dict["sem_seg"] = torch.from_numpy(sem_seg).long() - dataset_dict["instances"] = instances - dataset_dict["orig_shape"] = image_shape - dataset_dict["task"] = task - dataset_dict["text"] = text - dataset_dict["thing_ids"] = self.things - - return dataset_dict diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Daily Word Puzzle Challenge Your Brain with Fun and Colorful Games.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Daily Word Puzzle Challenge Your Brain with Fun and Colorful Games.md deleted file mode 100644 index cb7f5dd54630c16a9c8a2ccac663de12d42a1482..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Daily Word Puzzle Challenge Your Brain with Fun and Colorful Games.md +++ /dev/null @@ -1,123 +0,0 @@ -

          Download Daily Word Puzzle: A Fun and Brain-Boosting Activity


          Do you love playing with words and solving puzzles? If so, you might want to download a daily word puzzle and make it part of your routine. A daily word puzzle is a game that challenges you to find, connect, or guess words from a given set of letters, clues, or grids. You can play it online or offline, on your phone, tablet, or computer. There are many types of word puzzles, such as crossword, word search, anagram, word scramble, and more. Each one has its own rules and features, but they all have one thing in common: they are fun and brain-boosting!


          download daily word puzzle


          Download Zip ››››› https://ssurll.com/2uNQXT




          In this article, we will explore the benefits of playing word puzzles, how to download them, and some tips and tricks for playing them. By the end of this article, you will be ready to download a word puzzle and enjoy the benefits.


          Benefits of Playing Word Puzzles


          Playing word puzzles is not only entertaining but also beneficial for your brain and well-being. Here are some of the benefits of playing word puzzles:

          • Improve your vocabulary and spelling skills. Word puzzles expose you to new words and help you learn their meanings, spellings, and usage. You can also review and reinforce the words you already know. Playing word puzzles can help you expand your vocabulary and improve your spelling skills.
          • Enhance your cognitive functions and memory. Word puzzles stimulate your brain and make it work harder. They require you to use various cognitive skills, such as attention, reasoning, problem-solving, logic, and memory. By playing word puzzles regularly, you can keep your brain active and healthy. Research has shown that word puzzles can delay memory decline and prevent cognitive impairment in older adults.
          • Reduce stress and increase happiness. Word puzzles can help you relax and unwind after a long day. They can also provide you with a sense of achievement and satisfaction when you complete them. Playing word puzzles can reduce stress levels and increase happiness hormones in your body.

          How to Download Word Puzzles


          If you are interested in playing word puzzles, you might wonder how to download them. Here are some steps to follow:

          1. Choose from different types of word puzzles. There are many types of word puzzles available for download. Some of the most popular ones are:
             • Crossword: A grid of white and black squares where you have to fill in the words that match the clues given across or down.
             • Word search: A grid of letters where you have to find the hidden words that are listed below or on the side.
             • Anagram: A game where you have to rearrange the letters of a given word or phrase to form new words (a short code sketch illustrating this idea follows this list).
             • Word scramble: A game where you have to unscramble the letters of a given word or phrase to form the correct words.
          2. Find the best apps or websites for word puzzles. There are many apps or websites that offer word puzzles for download. Some of the best ones are:
             • [Wordle]: A fun and addictive word game where you have to create as many words as possible from five letters in 90 seconds. You can also compare your scores with other players and see how you rank.
             • [Words With Friends]: A popular social word game where you can play with your friends or random opponents online. You have to form words on a board using letter tiles and score points based on the word length and value.
             • [Daily Themed Crossword]: A crossword app that features a new puzzle every day with a different theme. You can choose from various categories, such as movies, sports, music, and more. You can also use hints or coins to reveal letters or words.
          3. Follow the instructions to download and install word puzzles. Depending on the app or website you choose, you might have to follow different steps. Generally, you will need to:
             • Go to the app store or website of your choice. For example, if you want to download Wordle, you can go to [wordle.net] on your browser.
             • Select the word puzzle you want to download. For example, if you want to download Wordle, you can click on the "Play" button on the homepage.
             • Follow the prompts to download and install the word puzzle. For example, if you want to download Wordle, you can click on the "Install" button that appears on your screen and wait for the installation to finish.
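
          As a side note on the anagram and word scramble types described above, here is a minimal Python sketch (not tied to any particular app) that checks whether two words are anagrams of each other and produces a simple scramble. It only assumes the standard library.

          ```python
          import random
          from collections import Counter

          def is_anagram(a: str, b: str) -> bool:
              """True if b uses exactly the same letters as a (ignoring case and spaces)."""
              count = lambda s: Counter(s.lower().replace(" ", ""))
              return count(a) == count(b)

          def scramble(word: str) -> str:
              """Return the letters of word in a random order, as a basic word-scramble clue."""
              letters = list(word)
              random.shuffle(letters)
              return "".join(letters)

          if __name__ == "__main__":
              print(is_anagram("listen", "silent"))  # True
              print(scramble("puzzle"))              # e.g. "zelzup"
          ```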

          Tips and Tricks for Playing Word Puzzles


          Now that you have downloaded a word puzzle, you might want to know some tips and tricks for playing it. Here are some suggestions:

          • Start with easy levels and work your way up. If you are new to word puzzles, you might want to start with easy levels and gradually increase the difficulty. This way, you can learn the rules and features of the game and build your confidence and skills.
          • Use hints or tools when you get stuck. Sometimes, you might encounter a word puzzle that is too hard or tricky for you. In that case, you can use hints or tools that are available in the game. For example, you can use a hint to reveal a letter or a word, or use a tool to shuffle the letters or remove the wrong ones. However, be careful not to overuse them, as they might cost you coins or points.
          • Challenge yourself with time limits or goals. If you want to make your word puzzle experience more exciting and rewarding, you can challenge yourself with time limits or goals. For example, you can try to solve a word puzzle within a certain time limit, or try to reach a certain score or number of words. This way, you can test your speed and accuracy and improve your performance.

          Conclusion


          Word puzzles are fun and brain-boosting activities that you can download and play anytime and anywhere. They can help you improve your vocabulary and spelling skills, enhance your cognitive functions and memory, and reduce stress and increase happiness. To download word puzzles, you can choose from different types of word puzzles, find the best apps or websites for word puzzles, and follow the instructions to download and install word puzzles. To play word puzzles, you can start with easy levels and work your way up, use hints or tools when you get stuck, and challenge yourself with time limits or goals. Download a word puzzle today and enjoy the benefits!


          Frequently Asked Questions

          • What are some of the best word puzzles for kids?

            Some of the best word puzzles for kids are:

            • [Word Cookies]: A game where kids have to find all the hidden words from a set of letters in a baking tray.
            • [Word Connect]: A game where kids have to swipe letters to form words and fill up the crossword blanks.
            • [WordBrain]: A game where kids have to find the hidden words in a grid of letters by sliding their finger over them.

          • How can I create my own word puzzles?

            You can create your own word puzzles by using online tools or software that allow you to customize your own crossword, word search, anagram, word scramble, and more. Some of the online tools or software are:

            • [Puzzle Maker]: A website where you can create and print your own crossword puzzles with your own words and clues.
            • [Word Search Maker]: A website where you can create and print your own word search puzzles with your own words and themes.
            • [Anagram Maker]: A website where you can create and print your own anagram puzzles with your own words and phrases.

          • What are some of the best word puzzle books?

            Some of the best word puzzle books are:

            • [The New York Times Ultimate Crossword Omnibus]: A collection of 1,001 crossword puzzles from The New York Times, ranging from easy to hard.
            • [The Everything Giant Book of Word Searches]: A collection of over 300 word search puzzles on various topics, such as animals, movies, music, and more.
            • [The Big Book of Anagrams & Word Scrambles]: A collection of over 600 anagram and word scramble puzzles on various themes, such as food, sports, celebrities, and more.

          • How can I improve my word puzzle skills?

            You can improve your word puzzle skills by:

            • Reading more books, magazines, newspapers, and online articles. This way, you can expose yourself to more words and learn their meanings, spellings, and usage.
            • Playing more word games, quizzes, and trivia. This way, you can test your knowledge and recall of words and learn new ones.
            • Practicing more word puzzles. This way, you can familiarize yourself with the rules and features of different types of word puzzles and improve your speed and accuracy.

          • Where can I find the answers to word puzzles?

            You can find the answers to word puzzles by:

            • Checking the app or website where you downloaded the word puzzle. Some apps or websites provide the answers or solutions to their word puzzles. For example, if you downloaded Wordle, you can click on the "Show Answers" button at the bottom of the screen to see the possible words for each puzzle.
            • Searching online for the answers or solutions to the word puzzle. Some websites provide the answers or solutions to popular word puzzles. For example, if you are looking for the answers to a crossword puzzle from The New York Times, you can go to [nytimescrosswordanswers.com] and find the answers by date or clue.
            • Using online tools or software that can help you solve the word puzzle. Some online tools or software can help you find or generate words from a given set of letters, clues, or grids. For example, if you are looking for a word that starts with "a" and ends with "e", you can use [wordhippo.com] and enter "a*e" in the search box to see a list of possible words (a short script for doing this kind of lookup locally follows this FAQ).
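
          For readers comfortable with a little scripting, the wildcard lookup described in the last answer can also be done locally. The sketch below is a hypothetical example: the word-list path is a placeholder assumption (any plain-text file with one word per line will do), and the wildcard syntax mirrors the "a*e" pattern mentioned above.

          ```python
          import re

          # Placeholder path: any plain-text word list, one word per line.
          WORDLIST_PATH = "words.txt"

          def find_words(pattern: str) -> list[str]:
              """Return words matching a simple wildcard pattern, e.g. 'a*e' = starts with 'a', ends with 'e'."""
              # Translate the 'a*e'-style wildcard into a regular expression.
              regex = re.compile("^" + re.escape(pattern).replace(r"\*", ".*") + "$", re.IGNORECASE)
              with open(WORDLIST_PATH, encoding="utf-8") as fh:
                  return [word.strip() for word in fh if regex.match(word.strip())]

          if __name__ == "__main__":
              print(find_words("a*e")[:10])  # first ten matches, e.g. "ace", "apple", "awake"
          ```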

            \ No newline at end of file diff --git a/spaces/siya02/Konakni-TTS/ttsv/src/glow_tts/models.py b/spaces/siya02/Konakni-TTS/ttsv/src/glow_tts/models.py deleted file mode 100644 index a77596153fa2e7e6fdd52ee0028a0c8ce02050b4..0000000000000000000000000000000000000000 --- a/spaces/siya02/Konakni-TTS/ttsv/src/glow_tts/models.py +++ /dev/null @@ -1,403 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import modules -import commons -import attentions -import monotonic_align - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d( - in_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.norm_1 = attentions.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d( - filter_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.norm_2 = attentions.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - def forward(self, x, x_mask): - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__( - self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - filter_channels_dp, - n_heads, - n_layers, - kernel_size, - p_dropout, - window_size=None, - block_length=None, - mean_only=False, - prenet=False, - gin_channels=0, - ): - - super().__init__() - - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.filter_channels_dp = filter_channels_dp - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - self.block_length = block_length - self.mean_only = mean_only - self.prenet = prenet - self.gin_channels = gin_channels - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - - if prenet: - self.pre = modules.ConvReluNorm( - hidden_channels, - hidden_channels, - hidden_channels, - kernel_size=5, - n_layers=3, - p_dropout=0.5, - ) - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - window_size=window_size, - block_length=block_length, - ) - - self.proj_m = nn.Conv1d(hidden_channels, out_channels, 1) - if not mean_only: - self.proj_s = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj_w = DurationPredictor( - hidden_channels + gin_channels, filter_channels_dp, kernel_size, p_dropout - ) - - def forward(self, x, x_lengths, g=None): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - - if self.prenet: - x = self.pre(x, x_mask) - x = self.encoder(x, x_mask) - - if g is not None: - g_exp = g.expand(-1, -1, x.size(-1)) - x_dp = torch.cat([torch.detach(x), g_exp], 1) - else: - x_dp = torch.detach(x) - - x_m = self.proj_m(x) * x_mask - if not self.mean_only: - x_logs = self.proj_s(x) * x_mask - else: - x_logs = 
torch.zeros_like(x_m) - - logw = self.proj_w(x_dp, x_mask) - return x_m, x_logs, logw, x_mask - - -class FlowSpecDecoder(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_blocks, - n_layers, - p_dropout=0.0, - n_split=4, - n_sqz=2, - sigmoid_scale=False, - gin_channels=0, - ): - super().__init__() - - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_blocks = n_blocks - self.n_layers = n_layers - self.p_dropout = p_dropout - self.n_split = n_split - self.n_sqz = n_sqz - self.sigmoid_scale = sigmoid_scale - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for b in range(n_blocks): - self.flows.append(modules.ActNorm(channels=in_channels * n_sqz)) - self.flows.append( - modules.InvConvNear(channels=in_channels * n_sqz, n_split=n_split) - ) - self.flows.append( - attentions.CouplingBlock( - in_channels * n_sqz, - hidden_channels, - kernel_size=kernel_size, - dilation_rate=dilation_rate, - n_layers=n_layers, - gin_channels=gin_channels, - p_dropout=p_dropout, - sigmoid_scale=sigmoid_scale, - ) - ) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - flows = self.flows - logdet_tot = 0 - else: - flows = reversed(self.flows) - logdet_tot = None - - if self.n_sqz > 1: - x, x_mask = commons.squeeze(x, x_mask, self.n_sqz) - for f in flows: - if not reverse: - x, logdet = f(x, x_mask, g=g, reverse=reverse) - logdet_tot += logdet - else: - x, logdet = f(x, x_mask, g=g, reverse=reverse) - if self.n_sqz > 1: - x, x_mask = commons.unsqueeze(x, x_mask, self.n_sqz) - return x, logdet_tot - - def store_inverse(self): - for f in self.flows: - f.store_inverse() - - -class FlowGenerator(nn.Module): - def __init__( - self, - n_vocab, - hidden_channels, - filter_channels, - filter_channels_dp, - out_channels, - kernel_size=3, - n_heads=2, - n_layers_enc=6, - p_dropout=0.0, - n_blocks_dec=12, - kernel_size_dec=5, - dilation_rate=5, - n_block_layers=4, - p_dropout_dec=0.0, - n_speakers=0, - gin_channels=0, - n_split=4, - n_sqz=1, - sigmoid_scale=False, - window_size=None, - block_length=None, - mean_only=False, - hidden_channels_enc=None, - hidden_channels_dec=None, - prenet=False, - **kwargs - ): - - super().__init__() - self.n_vocab = n_vocab - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.filter_channels_dp = filter_channels_dp - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_heads = n_heads - self.n_layers_enc = n_layers_enc - self.p_dropout = p_dropout - self.n_blocks_dec = n_blocks_dec - self.kernel_size_dec = kernel_size_dec - self.dilation_rate = dilation_rate - self.n_block_layers = n_block_layers - self.p_dropout_dec = p_dropout_dec - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.n_split = n_split - self.n_sqz = n_sqz - self.sigmoid_scale = sigmoid_scale - self.window_size = window_size - self.block_length = block_length - self.mean_only = mean_only - self.hidden_channels_enc = hidden_channels_enc - self.hidden_channels_dec = hidden_channels_dec - self.prenet = prenet - - self.encoder = TextEncoder( - n_vocab, - out_channels, - hidden_channels_enc or hidden_channels, - filter_channels, - filter_channels_dp, - n_heads, - n_layers_enc, - kernel_size, - p_dropout, - window_size=window_size, - block_length=block_length, - mean_only=mean_only, - prenet=prenet, - gin_channels=gin_channels, - ) - - self.decoder = 
FlowSpecDecoder( - out_channels, - hidden_channels_dec or hidden_channels, - kernel_size_dec, - dilation_rate, - n_blocks_dec, - n_block_layers, - p_dropout=p_dropout_dec, - n_split=n_split, - n_sqz=n_sqz, - sigmoid_scale=sigmoid_scale, - gin_channels=gin_channels, - ) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - nn.init.uniform_(self.emb_g.weight, -0.1, 0.1) - - def forward( - self, - x, - x_lengths, - y=None, - y_lengths=None, - g=None, - gen=False, - noise_scale=1.0, - length_scale=1.0, - ): - if g is not None: - g = F.normalize(self.emb_g(g)).unsqueeze(-1) # [b, h] - x_m, x_logs, logw, x_mask = self.encoder(x, x_lengths, g=g) - - if gen: - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_max_length = None - else: - y_max_length = y.size(2) - y, y_lengths, y_max_length = self.preprocess(y, y_lengths, y_max_length) - z_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, y_max_length), 1).to( - x_mask.dtype - ) - attn_mask = torch.unsqueeze(x_mask, -1) * torch.unsqueeze(z_mask, 2) - - if gen: - attn = commons.generate_path( - w_ceil.squeeze(1), attn_mask.squeeze(1) - ).unsqueeze(1) - z_m = torch.matmul( - attn.squeeze(1).transpose(1, 2), x_m.transpose(1, 2) - ).transpose( - 1, 2 - ) # [b, t', t], [b, t, d] -> [b, d, t'] - z_logs = torch.matmul( - attn.squeeze(1).transpose(1, 2), x_logs.transpose(1, 2) - ).transpose( - 1, 2 - ) # [b, t', t], [b, t, d] -> [b, d, t'] - logw_ = torch.log(1e-8 + torch.sum(attn, -1)) * x_mask - - z = (z_m + torch.exp(z_logs) * torch.randn_like(z_m) * noise_scale) * z_mask - y, logdet = self.decoder(z, z_mask, g=g, reverse=True) - return ( - (y, z_m, z_logs, logdet, z_mask), - (x_m, x_logs, x_mask), - (attn, logw, logw_), - ) - else: - z, logdet = self.decoder(y, z_mask, g=g, reverse=False) - with torch.no_grad(): - x_s_sq_r = torch.exp(-2 * x_logs) - logp1 = torch.sum(-0.5 * math.log(2 * math.pi) - x_logs, [1]).unsqueeze( - -1 - ) # [b, t, 1] - logp2 = torch.matmul( - x_s_sq_r.transpose(1, 2), -0.5 * (z ** 2) - ) # [b, t, d] x [b, d, t'] = [b, t, t'] - logp3 = torch.matmul( - (x_m * x_s_sq_r).transpose(1, 2), z - ) # [b, t, d] x [b, d, t'] = [b, t, t'] - logp4 = torch.sum(-0.5 * (x_m ** 2) * x_s_sq_r, [1]).unsqueeze( - -1 - ) # [b, t, 1] - logp = logp1 + logp2 + logp3 + logp4 # [b, t, t'] - - attn = ( - monotonic_align.maximum_path(logp, attn_mask.squeeze(1)) - .unsqueeze(1) - .detach() - ) - z_m = torch.matmul( - attn.squeeze(1).transpose(1, 2), x_m.transpose(1, 2) - ).transpose( - 1, 2 - ) # [b, t', t], [b, t, d] -> [b, d, t'] - z_logs = torch.matmul( - attn.squeeze(1).transpose(1, 2), x_logs.transpose(1, 2) - ).transpose( - 1, 2 - ) # [b, t', t], [b, t, d] -> [b, d, t'] - logw_ = torch.log(1e-8 + torch.sum(attn, -1)) * x_mask - return ( - (z, z_m, z_logs, logdet, z_mask), - (x_m, x_logs, x_mask), - (attn, logw, logw_), - ) - - def preprocess(self, y, y_lengths, y_max_length): - if y_max_length is not None: - y_max_length = (y_max_length // self.n_sqz) * self.n_sqz - y = y[:, :, :y_max_length] - y_lengths = (y_lengths // self.n_sqz) * self.n_sqz - return y, y_lengths, y_max_length - - def store_inverse(self): - self.decoder.store_inverse() diff --git a/spaces/sklearn-docs/Gaussian-Mixture-Model-Initialization-Methods/README.md b/spaces/sklearn-docs/Gaussian-Mixture-Model-Initialization-Methods/README.md deleted file mode 100644 index c81d6f72fe1c7e075873ab147788ff9e7931438f..0000000000000000000000000000000000000000 --- 
a/spaces/sklearn-docs/Gaussian-Mixture-Model-Initialization-Methods/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Gaussian Mixture Model Initialization Methods -emoji: 🐢 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sklearn-docs/voting-classifier-plots/app.py b/spaces/sklearn-docs/voting-classifier-plots/app.py deleted file mode 100644 index 7cfa55c859532639b913de88f0539cda3c8f1726..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/voting-classifier-plots/app.py +++ /dev/null @@ -1,201 +0,0 @@ -import gradio as gr -import matplotlib.pyplot as plt -import numpy as np -from sklearn.ensemble import RandomForestClassifier, VotingClassifier -from sklearn.linear_model import LogisticRegression -from sklearn.naive_bayes import GaussianNB - - -def choose_model(model): - if model == "Logistic Regression": - return LogisticRegression(max_iter=1000, random_state=123) - elif model == "Random Forest": - return RandomForestClassifier(n_estimators=100, random_state=123) - elif model == "Gaussian Naive Bayes": - return GaussianNB() - else: - raise ValueError("Model is not supported.") - - -def get_proba_plots( - model_1, model_2, model_3, model_1_weight, model_2_weight, model_3_weight -): - clf1 = choose_model(model_1) - clf2 = choose_model(model_2) - clf3 = choose_model(model_3) - X = np.array([[-1.0, -1.0], [-1.2, -1.4], [-3.4, -2.2], [1.1, 1.2]]) - y = np.array([1, 1, 2, 2]) - - eclf = VotingClassifier( - estimators=[("clf1", clf1), ("clf2", clf2), ("clf3", clf3)], - voting="soft", - weights=[model_1_weight, model_2_weight, model_3_weight], - ) - - # predict class probabilities for all classifiers - probas = [c.fit(X, y).predict_proba(X) for c in (clf1, clf2, clf3, eclf)] - - # get class probabilities for the first sample in the dataset - class1_1 = [pr[0, 0] for pr in probas] - class2_1 = [pr[0, 1] for pr in probas] - - # plotting - - N = 4 # number of groups - ind = np.arange(N) # group positions - width = 0.35 # bar width - - fig, ax = plt.subplots() - - # bars for classifier 1-3 - p1 = ax.bar( - ind, np.hstack(([class1_1[:-1], [0]])), width, color="green", edgecolor="k" - ) - p2 = ax.bar( - ind + width, - np.hstack(([class2_1[:-1], [0]])), - width, - color="lightgreen", - edgecolor="k", - ) - - # bars for VotingClassifier - ax.bar(ind, [0, 0, 0, class1_1[-1]], width, color="blue", edgecolor="k") - ax.bar( - ind + width, [0, 0, 0, class2_1[-1]], width, color="steelblue", edgecolor="k" - ) - - # plot annotations - plt.axvline(2.8, color="k", linestyle="dashed") - ax.set_xticks(ind + width) - ax.set_xticklabels( - [ - f"{model_1}\nweight {model_1_weight}", - f"{model_2}\nweight {model_2_weight}", - f"{model_3}\nweight {model_3_weight}", - "VotingClassifier\n(average probabilities)", - ], - rotation=40, - ha="right", - ) - plt.ylim([0, 1]) - plt.title("Class probabilities for sample 1 by different classifiers") - plt.legend([p1[0], p2[0]], ["class 1", "class 2"], loc="upper left") - plt.tight_layout() - plt.show() - return fig - - -with gr.Blocks() as demo: - gr.Markdown( - """ - # Class probabilities by the `VotingClassifier` - - This space shows the effect of the weight of different classifiers when using sklearn's [VotingClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingClassifier.html#sklearn.ensemble.VotingClassifier). 
- - For example, suppose you set the weights as in the table below, and the models have the following predicted probabilities: - - | | Weights | Predicted Probabilities | - |---------|:-------:|:----------------:| - | Model 1 | 1 | 0.5 | - | Model 2 | 2 | 0.8 | - | Model 3 | 5 | 0.9 | - - The predicted probability by the `VotingClassifier` will be $(1*0.5 + 2*0.8 + 5*0.9) / (1 + 2 + 5)$ - - You can experiment with different model types and weights and see their effect on the VotingClassifier's prediction. - - This space is based on [sklearn’s original demo](https://scikit-learn.org/stable/auto_examples/ensemble/plot_voting_probas.html#sphx-glr-auto-examples-ensemble-plot-voting-probas-py). - """ - ) - with gr.Row(): - with gr.Column(scale=3): - with gr.Row(): - model_1 = gr.Dropdown( - [ - "Logistic Regression", - "Random Forest", - "Gaussian Naive Bayes", - ], - label="Model 1", - value="Logistic Regression", - ) - model_1_weight = gr.Slider( - value=1, label="Model 1 Weight", minimum=0, maximum=10, step=1 - ) - with gr.Row(): - model_2 = gr.Dropdown( - [ - "Logistic Regression", - "Random Forest", - "Gaussian Naive Bayes", - ], - label="Model 2", - value="Random Forest", - ) - model_2_weight = gr.Slider( - value=1, label="Model 2 Weight", minimum=0, maximum=10, step=1 - ) - with gr.Row(): - model_3 = gr.Dropdown( - [ - "Logistic Regression", - "Random Forest", - "Gaussian Naive Bayes", - ], - label="Model 3", - value="Gaussian Naive Bayes", - ) - - model_3_weight = gr.Slider( - value=5, label="Model 3 Weight", minimum=0, maximum=10, step=1 - ) - with gr.Column(scale=4): - proba_plots = gr.Plot() - - model_1.change( - get_proba_plots, - [model_1, model_2, model_3, model_1_weight, model_2_weight, model_3_weight], - proba_plots, - queue=False, - ) - model_2.change( - get_proba_plots, - [model_1, model_2, model_3, model_1_weight, model_2_weight, model_3_weight], - proba_plots, - queue=False, - ) - model_3.change( - get_proba_plots, - [model_1, model_2, model_3, model_1_weight, model_2_weight, model_3_weight], - proba_plots, - queue=False, - ) - model_1_weight.change( - get_proba_plots, - [model_1, model_2, model_3, model_1_weight, model_2_weight, model_3_weight], - proba_plots, - queue=False, - ) - model_2_weight.change( - get_proba_plots, - [model_1, model_2, model_3, model_1_weight, model_2_weight, model_3_weight], - proba_plots, - queue=False, - ) - model_3_weight.change( - get_proba_plots, - [model_1, model_2, model_3, model_1_weight, model_2_weight, model_3_weight], - proba_plots, - queue=False, - ) - - demo.load( - get_proba_plots, - [model_1, model_2, model_3, model_1_weight, model_2_weight, model_3_weight], - proba_plots, - queue=False, - ) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/sstan_models/pix2pix_model.py b/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/sstan_models/pix2pix_model.py deleted file mode 100644 index 9fddc332222c0153d10097b8745632db617b99bc..0000000000000000000000000000000000000000 --- a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/sstan_models/pix2pix_model.py +++ /dev/null @@ -1,285 +0,0 @@ -""" -Copyright (C) 2019 NVIDIA Corporation. All rights reserved. -Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode). 
-""" - -import os -import torch -import sstan_models.networks as networks -import model_util as util - - -class Pix2PixModel(torch.nn.Module): - @staticmethod - def modify_commandline_options(parser, is_train): - networks.modify_commandline_options(parser, is_train) - return parser - - def __init__(self, opt): - super().__init__() - self.opt = opt - self.FloatTensor = torch.cuda.FloatTensor if self.use_gpu() else torch.FloatTensor - self.ByteTensor = torch.cuda.ByteTensor if self.use_gpu() else torch.ByteTensor - - self.netG, self.netD, self.netE = self.initialize_networks(opt) - - # set loss functions - if opt.isTrain: - self.criterionGAN = networks.GANLoss(opt.gan_mode, tensor=self.FloatTensor, opt=self.opt) - self.criterionFeat = torch.nn.L1Loss() - if not opt.no_vgg_loss: - self.criterionVGG = networks.VGGLoss(self.opt.gpu_ids) - if opt.use_vae: - self.KLDLoss = networks.KLDLoss() - - # Entry point for all calls involving forward pass - # of deep networks. We used this approach since DataParallel module - # can't parallelize custom functions, we branch to different - # routines based on |mode|. - def forward(self, data, mode, style_codes=None): - input_semantics, real_image = self.preprocess_input(data) - domain = None - - # print(torch.cuda.memory_cached(0)) - if mode == "generator": - g_loss, generated = self.compute_generator_loss(input_semantics, real_image, domain) - return g_loss, generated - elif mode == "discriminator": - d_loss = self.compute_discriminator_loss(input_semantics, real_image, domain) - return d_loss - elif mode == "encode_only": - _, mu, logvar = self.encode_z(real_image, domain) - return mu, logvar - elif mode == "inference": - with torch.no_grad(): - fake_image, _, _ = self.generate_fake(input_semantics, real_image, domain, style_codes=style_codes, compute_kld_loss=False) - return fake_image - elif mode == "generate_img_npy": - with torch.no_grad(): - fake_image, encoded_style_code = self.generate_img_npy(input_semantics, real_image, domain) - return fake_image, encoded_style_code - else: - raise ValueError("|mode| is invalid") - - def create_optimizers(self, opt): - G_params = list(self.netG.parameters()) - if opt.use_vae: - G_params += list(self.netE.parameters()) - if opt.isTrain: - D_params = list(self.netD.parameters()) - - beta1, beta2 = opt.beta1, opt.beta2 - if opt.no_TTUR: - G_lr, D_lr = opt.lr, opt.lr - else: - G_lr, D_lr = opt.lr / 2, opt.lr * 2 - - optimizer_G = torch.optim.Adam(G_params, lr=G_lr, betas=(beta1, beta2)) - optimizer_D = torch.optim.Adam(D_params, lr=D_lr, betas=(beta1, beta2)) - - return optimizer_G, optimizer_D - - def save(self, epoch): - util.save_network(self.netG, "G", epoch, self.opt) - util.save_network(self.netD, "D", epoch, self.opt) - if self.opt.use_vae: - util.save_network(self.netE, "E", epoch, self.opt) - - ############################################################################ - # Private helper methods - ############################################################################ - - def initialize_networks(self, opt): - netG = networks.define_G(opt) - netD = networks.define_D(opt) if opt.isTrain else None - netE = networks.define_E(opt) if opt.use_vae else None - - if not opt.isTrain or opt.continue_train: - # netG = util.load_network(netG, 'G', opt.which_epoch, opt) - checkpoint_path = os.path.join(os.path.dirname(__file__), "..", opt.checkpoint_path) - device = "cuda" if torch.cuda.is_available() else "cpu" - if device == "cuda": - checkpoint = torch.load(checkpoint_path) - else: - checkpoint = 
torch.load(checkpoint_path, map_location=lambda storage, loc: storage) - s = checkpoint - netG.load_state_dict(s) - - if opt.isTrain: - netD = util.load_network(netD, "D", opt.which_epoch, opt) - if opt.use_vae: - netE = util.load_network(netE, "E", opt.which_epoch, opt) - - return netG, netD, netE - - # preprocess the input, such as moving the tensors to GPUs and - # transforming the label map to one-hot encoding - # |data|: dictionary of the input data - - def preprocess_input(self, data): - """ - # move to GPU and change data types - data['label'] = data['label'].long() - if self.use_gpu(): - data['label'] = data['label'].cuda(non_blocking=True) - data['instance'] = data['instance'].cuda(non_blocking=True) - data['image'] = data['image'].cuda(non_blocking=True) - data['domain'] = data['domain'].cuda(non_blocking=True) - - # create one-hot label map - label_map = data['label'] - bs, _, h, w = label_map.size() - nc = self.opt.label_nc + 1 if self.opt.contain_dontcare_label \ - else self.opt.label_nc - input_label = self.FloatTensor(bs, nc, h, w).zero_() - input_semantics = input_label.scatter_(1, label_map, 1.0) - - # concatenate instance map if it exists - if not self.opt.no_instance: - inst_map = data['instance'] - instance_edge_map = self.get_edges(inst_map) - input_semantics = torch.cat((input_semantics, instance_edge_map), dim=1) - - return input_semantics, data['image'], data['domain'] - """ - - data = data.long() - image = (data - 128).float() / 128.0 - if self.use_gpu(): - data = data.cuda() - image = image.float().cuda() - label_map = data - bs, _, h, w = label_map.size() - nc = self.opt.label_nc + 1 if self.opt.contain_dontcare_label else self.opt.label_nc - input_label = self.FloatTensor(bs, nc, h, w).zero_() - input_semantics = input_label.scatter_(1, label_map, 1.0) - - return input_semantics, image # data['image'], - - def compute_generator_loss(self, input_semantics, real_image, domain): - G_losses = {} - - fake_image, KLD_loss, _ = self.generate_fake(input_semantics, real_image, domain, compute_kld_loss=self.opt.use_vae) - - if self.opt.use_vae: - if KLD_loss.data.item() > 2.5: - print("ng") - print(KLD_loss.data.item()) - KLD_loss.data = torch.Tensor([min(999.9999, KLD_loss.data.item())]).cuda() - G_losses["KLD"] = KLD_loss - - pred_fake, pred_real = self.discriminate(input_semantics, fake_image, real_image, domain) - - G_losses["GAN"] = self.criterionGAN(pred_fake, True, for_discriminator=False) - - if not self.opt.no_ganFeat_loss: - num_D = len(pred_fake) - GAN_Feat_loss = self.FloatTensor(1).fill_(0) - for i in range(num_D): # for each discriminator - # last output is the final prediction, so we exclude it - num_intermediate_outputs = len(pred_fake[i]) - 1 - for j in range(num_intermediate_outputs): # for each layer output - unweighted_loss = self.criterionFeat(pred_fake[i][j], pred_real[i][j].detach()) - GAN_Feat_loss += unweighted_loss * self.opt.lambda_feat / num_D - G_losses["GAN_Feat"] = GAN_Feat_loss - - if not self.opt.no_vgg_loss: - G_losses["VGG"] = self.criterionVGG(fake_image, real_image) * self.opt.lambda_vgg - - return G_losses, fake_image - - def compute_discriminator_loss(self, input_semantics, real_image, domain): - D_losses = {} - with torch.no_grad(): - fake_image, _, _ = self.generate_fake(input_semantics, real_image, domain) - fake_image = fake_image.detach() - fake_image.requires_grad_() - - pred_fake, pred_real = self.discriminate(input_semantics, fake_image, real_image, domain) - - D_losses["D_Fake"] = self.criterionGAN(pred_fake, False, 
for_discriminator=True) - D_losses["D_real"] = self.criterionGAN(pred_real, True, for_discriminator=True) - - return D_losses - - def encode_z(self, real_image, domain): - mu, logvar = self.netE(real_image, domain) - z = self.reparameterize(mu, logvar) - return z, mu, logvar - - def generate_fake(self, input_semantics, real_image, domain, style_codes=None, compute_kld_loss=True): - KLD_loss = None - if self.opt.use_vae and style_codes is None: - # print('yes') - style_codes, mu, logvar = self.encode_z(real_image, domain) - if compute_kld_loss: - KLD_loss = self.KLDLoss(mu, logvar) * self.opt.lambda_kld - - fake_image = self.netG(input_semantics, real_image, style_codes=style_codes) - - assert (not compute_kld_loss) or self.opt.use_vae, "You cannot compute KLD loss if opt.use_vae == False" - - return fake_image, KLD_loss, style_codes - - def generate_img_npy(self, input_semantics, real_image, domain, compute_kld_loss=False): - KLD_loss = None - style_codes, mu, logvar = self.encode_z(real_image, domain) - if compute_kld_loss: - KLD_loss = self.KLDLoss(mu, logvar) * self.opt.lambda_kld - - fake_image = self.netG(input_semantics, real_image, style_codes=style_codes) - # print(real_image, fake_image.shape) - assert (not compute_kld_loss) or self.opt.use_vae, "You cannot compute KLD loss if opt.use_vae == False" - - return fake_image, style_codes - - # Given fake and real image, return the prediction of discriminator - # for each fake and real image. - - def discriminate(self, input_semantics, fake_image, real_image, domain): - fake_concat = torch.cat([input_semantics, fake_image], dim=1) - real_concat = torch.cat([input_semantics, real_image], dim=1) - - # In Batch Normalization, the fake and real images are - # recommended to be in the same batch to avoid disparate - # statistics in fake and real images. - # So both fake and real images are fed to D all at once. 
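    -        # For illustration (batch of N samples assumed): fake_concat and real_concat
    -        # each hold N samples, the concatenation below holds 2N along dim 0, and
    -        # divide_pred() later splits the discriminator output back into the first
    -        # half (fake) and the second half (real).
    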
- fake_and_real = torch.cat([fake_concat, real_concat], dim=0) - - discriminator_out = self.netD(fake_and_real, domain) - - pred_fake, pred_real = self.divide_pred(discriminator_out) - - return pred_fake, pred_real - - # Take the prediction of fake and real images from the combined batch - def divide_pred(self, pred): - # the prediction contains the intermediate outputs of multiscale GAN, - # so it's usually a list - if type(pred) == list: - fake = [] - real = [] - for p in pred: - fake.append([tensor[: tensor.size(0) // 2] for tensor in p]) - real.append([tensor[tensor.size(0) // 2 :] for tensor in p]) - else: - fake = pred[: pred.size(0) // 2] - real = pred[pred.size(0) // 2 :] - - return fake, real - - def get_edges(self, t): - edge = self.ByteTensor(t.size()).zero_() - edge[:, :, :, 1:] = edge[:, :, :, 1:] | (t[:, :, :, 1:] != t[:, :, :, :-1]) - edge[:, :, :, :-1] = edge[:, :, :, :-1] | (t[:, :, :, 1:] != t[:, :, :, :-1]) - edge[:, :, 1:, :] = edge[:, :, 1:, :] | (t[:, :, 1:, :] != t[:, :, :-1, :]) - edge[:, :, :-1, :] = edge[:, :, :-1, :] | (t[:, :, 1:, :] != t[:, :, :-1, :]) - return edge.float() - - def reparameterize(self, mu, logvar): - std = torch.exp(0.5 * logvar) - eps = torch.randn_like(std) - return eps.mul(std) + mu - - def use_gpu(self): - return len(self.opt.gpu_ids) > 0 diff --git a/spaces/sofmi/MegaDetector_DLClive/README.md b/spaces/sofmi/MegaDetector_DLClive/README.md deleted file mode 100644 index 5cb008d6f1259b0e047a6902c91ff61a145d9980..0000000000000000000000000000000000000000 --- a/spaces/sofmi/MegaDetector_DLClive/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MegaDetector DLClive -emoji: 👀 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false ---- - -Combining MegaDetector implementation from https://huggingface.co/spaces/hlydecker/MegaDetector_v5 with DLClive -# Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sparanoid/milky-green-svc/slicer.py b/spaces/sparanoid/milky-green-svc/slicer.py deleted file mode 100644 index 52300c30712388aa6360d111f67c67bf0296a509..0000000000000000000000000000000000000000 --- a/spaces/sparanoid/milky-green-svc/slicer.py +++ /dev/null @@ -1,163 +0,0 @@ -import os.path -import time -from argparse import ArgumentParser - -import librosa -import numpy as np -import soundfile -from scipy.ndimage import maximum_filter1d, uniform_filter1d - - -def timeit(func): - def run(*args, **kwargs): - t = time.time() - res = func(*args, **kwargs) - print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t)) - return res - - return run - - -# @timeit -def _window_maximum(arr, win_sz): - return maximum_filter1d(arr, size=win_sz)[win_sz // 2: win_sz // 2 + arr.shape[0] - win_sz + 1] - - -# @timeit -def _window_rms(arr, win_sz): - filtered = np.sqrt(uniform_filter1d(np.power(arr, 2), win_sz) - np.power(uniform_filter1d(arr, win_sz), 2)) - return filtered[win_sz // 2: win_sz // 2 + arr.shape[0] - win_sz + 1] - - -def level2db(levels, eps=1e-12): - return 20 * np.log10(np.clip(levels, a_min=eps, a_max=1)) - - -def _apply_slice(audio, begin, end): - if len(audio.shape) > 1: - return audio[:, begin: end] - else: - return audio[begin: end] - - -class Slicer: - def __init__(self, - sr: int, - db_threshold: float = -40, - min_length: int = 5000, - win_l: int = 300, - win_s: int = 20, - max_silence_kept: int = 500): - self.db_threshold = db_threshold - self.min_samples = round(sr * min_length / 1000) - self.win_ln = round(sr * 
win_l / 1000) - self.win_sn = round(sr * win_s / 1000) - self.max_silence = round(sr * max_silence_kept / 1000) - if not self.min_samples >= self.win_ln >= self.win_sn: - raise ValueError('The following condition must be satisfied: min_length >= win_l >= win_s') - if not self.max_silence >= self.win_sn: - raise ValueError('The following condition must be satisfied: max_silence_kept >= win_s') - - @timeit - def slice(self, audio): - if len(audio.shape) > 1: - samples = librosa.to_mono(audio) - else: - samples = audio - if samples.shape[0] <= self.min_samples: - return [audio] - # get absolute amplitudes - abs_amp = np.abs(samples - np.mean(samples)) - # calculate local maximum with large window - win_max_db = level2db(_window_maximum(abs_amp, win_sz=self.win_ln)) - sil_tags = [] - left = right = 0 - while right < win_max_db.shape[0]: - if win_max_db[right] < self.db_threshold: - right += 1 - elif left == right: - left += 1 - right += 1 - else: - if left == 0: - split_loc_l = left - else: - sil_left_n = min(self.max_silence, (right + self.win_ln - left) // 2) - rms_db_left = level2db(_window_rms(samples[left: left + sil_left_n], win_sz=self.win_sn)) - split_win_l = left + np.argmin(rms_db_left) - split_loc_l = split_win_l + np.argmin(abs_amp[split_win_l: split_win_l + self.win_sn]) - if len(sil_tags) != 0 and split_loc_l - sil_tags[-1][1] < self.min_samples and right < win_max_db.shape[ - 0] - 1: - right += 1 - left = right - continue - if right == win_max_db.shape[0] - 1: - split_loc_r = right + self.win_ln - else: - sil_right_n = min(self.max_silence, (right + self.win_ln - left) // 2) - rms_db_right = level2db(_window_rms(samples[right + self.win_ln - sil_right_n: right + self.win_ln], - win_sz=self.win_sn)) - split_win_r = right + self.win_ln - sil_right_n + np.argmin(rms_db_right) - split_loc_r = split_win_r + np.argmin(abs_amp[split_win_r: split_win_r + self.win_sn]) - sil_tags.append((split_loc_l, split_loc_r)) - right += 1 - left = right - if left != right: - sil_left_n = min(self.max_silence, (right + self.win_ln - left) // 2) - rms_db_left = level2db(_window_rms(samples[left: left + sil_left_n], win_sz=self.win_sn)) - split_win_l = left + np.argmin(rms_db_left) - split_loc_l = split_win_l + np.argmin(abs_amp[split_win_l: split_win_l + self.win_sn]) - sil_tags.append((split_loc_l, samples.shape[0])) - if len(sil_tags) == 0: - return [audio] - else: - chunks = [] - for i in range(0, len(sil_tags)): - chunks.append(int((sil_tags[i][0] + sil_tags[i][1]) / 2)) - return chunks - - -def main(): - parser = ArgumentParser() - parser.add_argument('audio', type=str, help='The audio to be sliced') - parser.add_argument('--out_name', type=str, help='Output directory of the sliced audio clips') - parser.add_argument('--out', type=str, help='Output directory of the sliced audio clips') - parser.add_argument('--db_thresh', type=float, required=False, default=-40, - help='The dB threshold for silence detection') - parser.add_argument('--min_len', type=int, required=False, default=5000, - help='The minimum milliseconds required for each sliced audio clip') - parser.add_argument('--win_l', type=int, required=False, default=300, - help='Size of the large sliding window, presented in milliseconds') - parser.add_argument('--win_s', type=int, required=False, default=20, - help='Size of the small sliding window, presented in milliseconds') - parser.add_argument('--max_sil_kept', type=int, required=False, default=500, - help='The maximum silence length kept around the sliced audio, presented in 
milliseconds') - args = parser.parse_args() - out = args.out - if out is None: - out = os.path.dirname(os.path.abspath(args.audio)) - audio, sr = librosa.load(args.audio, sr=None) - slicer = Slicer( - sr=sr, - db_threshold=args.db_thresh, - min_length=args.min_len, - win_l=args.win_l, - win_s=args.win_s, - max_silence_kept=args.max_sil_kept - ) - chunks = slicer.slice(audio) - if not os.path.exists(args.out): - os.makedirs(args.out) - start = 0 - end_id = 0 - for i, chunk in enumerate(chunks): - end = chunk - soundfile.write(os.path.join(out, f'%s-%s.wav' % (args.out_name, str(i).zfill(2))), audio[start:end], sr) - start = end - end_id = i + 1 - soundfile.write(os.path.join(out, f'%s-%s.wav' % (args.out_name, str(end_id).zfill(2))), audio[start:len(audio)], - sr) - - -if __name__ == '__main__': - main() diff --git a/spaces/sriramelango/Social_Classification_Public/models/search.py b/spaces/sriramelango/Social_Classification_Public/models/search.py deleted file mode 100644 index d5ea68b4ce04409c504c1d22098b7968a9ce596a..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/models/search.py +++ /dev/null @@ -1,814 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import List, Optional - -import torch -import torch.nn as nn -from fairseq.token_generation_constraints import ( - ConstraintState, - OrderedConstraintState, - UnorderedConstraintState, -) -from torch import Tensor - - -class Search(nn.Module): - def __init__(self, tgt_dict): - super().__init__() - self.pad = tgt_dict.pad() - self.unk = tgt_dict.unk() - self.eos = tgt_dict.eos() - self.vocab_size = len(tgt_dict) - self.src_lengths = torch.tensor(-1) - self.supports_constraints = False - self.stop_on_max_len = False - - def step( - self, step, lprobs, scores, prev_output_tokens=None, original_batch_idxs=None - ): - """Take a single search step. - - Args: - step: the current search step, starting at 0 - lprobs: (bsz x input_beam_size x vocab_size) - the model's log-probabilities over the vocabulary at the current step - scores: (bsz x input_beam_size x step) - the historical model scores of each hypothesis up to this point - prev_output_tokens: (bsz x step) - the previously generated oputput tokens - original_batch_idxs: (bsz) - the tensor with the batch indices, in the range [0, bsz) - this is useful in case there has been applied a re-ordering - and we need to know the orignal indices - - Return: A tuple of (scores, indices, beams) where: - scores: (bsz x output_beam_size) - the scores of the chosen elements; output_beam_size can be - larger than input_beam_size, e.g., we may return - 2*input_beam_size to account for EOS - indices: (bsz x output_beam_size) - the indices of the chosen elements - beams: (bsz x output_beam_size) - the hypothesis ids of the chosen elements, in the range [0, input_beam_size) - """ - raise NotImplementedError - - @torch.jit.export - def set_src_lengths(self, src_lengths): - self.src_lengths = src_lengths - - @torch.jit.export - def init_constraints(self, batch_constraints: Optional[Tensor], beam_size: int): - """Initialize constraint states for constrained decoding (if supported). 
- - Args: - batch_constraints: (torch.Tensor, optional) - the list of constraints, in packed form - beam_size: (int) - the beam size - Returns: - *encoder_out* rearranged according to *new_order* - """ - pass - - def prune_sentences(self, batch_idxs: Tensor): - """ - Removes constraint states for completed sentences (if supported). - This is called from sequence_generator._generate() when sentences are - deleted from the batch. - - Args: - batch_idxs: Indices of *sentences* whose constraint state should be *kept*. - """ - pass - - def update_constraints(self, active_hypos: Tensor): - """ - Updates the constraint states by selecting the beam items that are retained. - This is called at each time step of sequence_generator._generate() when - the set of 2 * {beam_size} candidate hypotheses are reduced to the beam size. - - Args: - active_hypos: (batch size, beam size) - list of integers denoting, for each sentence, which beam candidate items - should be kept. - """ - pass - - -class BeamSearch(Search): - def __init__(self, tgt_dict): - super().__init__(tgt_dict) - self.constraint_states = None - - @torch.jit.export - def step( - self, - step: int, - lprobs, - scores: Optional[Tensor], - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam - lprobs = lprobs[:, ::beam_size, :].contiguous() - else: - # make probs contain cumulative scores for each hypothesis - assert scores is not None - lprobs = lprobs + scores[:, :, step - 1].unsqueeze(-1) - - top_prediction = torch.topk( - lprobs.view(bsz, -1), - k=min( - # Take the best 2 x beam_size predictions. We'll choose the first - # beam_size of these which don't predict eos to continue with. 
- beam_size * 2, - lprobs.view(bsz, -1).size(1) - 1, # -1 so we never select pad - ), - ) - scores_buf = top_prediction[0] - indices_buf = top_prediction[1] - # Project back into relative indices and beams - beams_buf = indices_buf // vocab_size - indices_buf = indices_buf.fmod(vocab_size) - - # At this point, beams_buf and indices_buf are single-dim and contain relative indices - return scores_buf, indices_buf, beams_buf - - -class PrefixConstrainedBeamSearch(Search): - def __init__(self, tgt_dict, prefix_allowed_tokens_fn): - super().__init__(tgt_dict) - self.prefix_allowed_tokens_fn = prefix_allowed_tokens_fn - self.stop_on_max_len = True - - @torch.jit.export - def apply_mask(self, x, prev_output_tokens, original_batch_idxs): - beam_size = x.shape[0] // original_batch_idxs.shape[0] - original_batch_idxs = ( - original_batch_idxs.unsqueeze(-1).repeat((1, beam_size)).flatten().tolist() - ) - - mask = torch.full_like(x, -math.inf) - for sent_i, (sent, batch_i) in enumerate( - zip(prev_output_tokens, original_batch_idxs) - ): - mask[sent_i, :, self.prefix_allowed_tokens_fn(batch_i, sent)] = 0 - - return mask - - @torch.jit.export - def step( - self, - step: int, - lprobs: Tensor, - scores: Tensor, - prev_output_tokens: Tensor, - original_batch_idxs: Tensor, - ): - bsz, beam_size, vocab_size = lprobs.size() - - lprobs += self.apply_mask( - lprobs.view(bsz * beam_size, 1, vocab_size), - prev_output_tokens, - original_batch_idxs, - ).view(bsz, beam_size, vocab_size) - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam - lprobs = lprobs[:, ::beam_size, :].contiguous() - else: - # make probs contain cumulative scores for each hypothesis - assert scores is not None - lprobs = lprobs + scores[:, :, step - 1].unsqueeze(-1) - - top_prediction = torch.topk( - lprobs.view(bsz, -1), - k=min( - # Take the best beam_size predictions. We'll choose the first - # beam_size of these which don't predict eos to continue with. - beam_size, - lprobs.view(bsz, -1).size(1) - 1, # -1 so we never select pad - ), - ) - scores_buf = top_prediction[0] - indices_buf = top_prediction[1] - beams_buf = indices_buf // vocab_size - indices_buf = indices_buf.fmod(vocab_size) - return scores_buf, indices_buf, beams_buf - - -class LexicallyConstrainedBeamSearch(Search): - """Implements lexically constrained beam search as described in - - Fast Lexically Constrained Decoding with Dynamic Beam - Allocation for Neural Machine Translation. Post & Vilar, - NAACL 2018. https://www.aclweb.org/anthology/N18-1119/ - - and - - Improved Lexically Constrained Decoding for Translation and - Monolingual Rewriting. Hu et al, NAACL - 2019. https://www.aclweb.org/anthology/N19-1090/ - - This is accomplished by maintaining, for each beam hypothesis, a - ConstraintState object (see constraints.py) that tracks which - constraints have been generated and using this information to - shape the beam for each input sentence. 
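    -
    -    Concretely, candidate hypotheses are grouped into "banks" by the number of
    -    constraint tokens they have already generated, and the beam is filled
    -    round-robin across banks (see step() and step_sentence()), so hypotheses
    -    that make progress on the constraints are always retained.
    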
- """ - - def __init__(self, tgt_dict, representation): - super().__init__(tgt_dict) - self.representation = representation - self.vocab_size = len(tgt_dict) - self.num_cands = 0 - self.supports_constraints = True - - @torch.jit.export - def init_constraints(self, batch_constraints: Optional[Tensor], beam_size: int): - self.constraint_states = [] - for constraint_tensor in batch_constraints: - if self.representation == "ordered": - constraint_state = OrderedConstraintState.create(constraint_tensor) - elif self.representation == "unordered": - constraint_state = UnorderedConstraintState.create(constraint_tensor) - - self.constraint_states.append([constraint_state for i in range(beam_size)]) - - @torch.jit.export - def prune_sentences(self, batch_idxs: Tensor): - self.constraint_states = [ - self.constraint_states[i] for i in batch_idxs.tolist() - ] - - @torch.jit.export - def update_constraints(self, active_hypos: Tensor): - if self.constraint_states: - batch_size = active_hypos.size(0) - for sentid in range(batch_size): - self.constraint_states[sentid] = [ - self.constraint_states[sentid][i] for i in active_hypos[sentid] - ] - - @torch.jit.export - def step( - self, - step: int, - lprobs: Tensor, - scores: Optional[Tensor], - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - """ - A constrained step builds a large candidates list from the following: - - the top 2 * {beam_size} items over the whole beam - - for each item in the beam - - the top {each_k} (default 1) - - all next constraints - We then compute the constrained state of each beam item, and assign - stripe codes: 0 to the best in each bank, 1 to the 2nd-best, and so - on. We then sort by (stripe, score), and truncate the list at - 2 * beam size. - - Args: - step: the decoder step - lprobs: (batch size, beam size, target vocab) - the target-vocab distributions for each item in the beam. - Retrun: A tuple of (scores, indices, beams, constraints) where: - scores: (batch, output beam size) - the scores of the chosen elements - indices: (batch, output beam size) - the target vocab indices of the chosen elements - beams: (batch, output beam size) - the 0-indexed hypothesis ids of the chosen elements - constraints: (batch, output beam size) - the new constraint states - """ - each_k = 1 - device = lprobs.device - - batch_size, beam_size, vocab_size = lprobs.size() - - self.num_cands = min( - # Just take the k-best. We'll get another k from the 1-best from each - # row, plus more from the constraints - beam_size * 2, - lprobs.view(batch_size, -1).size(1) - 1, # -1 so we never select pad - ) - - # STEP 0: Preliminary. 
Prevent EOS for unfinished hyps across all batch items - constraint_states = self.constraint_states - if constraint_states and step > 0: - not_finished_indices = [] - for sentno, sent_constraints in enumerate(constraint_states): - for beamno, state in enumerate(sent_constraints): - index = sentno * beam_size + beamno - if not state.finished: - not_finished_indices.append(index) - not_finished_indices = torch.tensor(not_finished_indices) - if not_finished_indices.numel() > 0: - lprobs.view(batch_size * beam_size, -1)[ - not_finished_indices, self.eos - ] = -math.inf - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam entry for each batch item - lprobs = lprobs[:, ::beam_size, :].contiguous() - else: - # make probs contain cumulative scores for each hypothesis - assert scores is not None - lprobs = lprobs + scores[:, :, step - 1].unsqueeze(-1) - - top_prediction = torch.topk( - lprobs.view(batch_size, -1), - self.num_cands, - ) - scores_buf, indices_buf = top_prediction - # Project back into relative indices and beams - beams_buf = indices_buf // vocab_size - indices_buf = indices_buf.fmod(vocab_size) - - # Short circuit if there are no constraints in this batch - if not constraint_states: - return scores_buf, indices_buf, beams_buf - - # STEP 1: get top-1 from each hypothesis across all sentences in the batch - if step > 0: - top_scores, top_indices = torch.topk( - lprobs.view(batch_size * beam_size, -1), - k=each_k, - dim=1, - ) - top_scores = top_scores.view(batch_size, -1) - top_indices = top_indices.view(batch_size, -1) - scores_buf = torch.cat((scores_buf, top_scores), dim=1) - indices_buf = torch.cat((indices_buf, top_indices), dim=1) - new_beams = torch.arange(0, beam_size, device=device).repeat(batch_size, 1) - beams_buf = torch.cat((beams_buf, new_beams), dim=1) - - # Now, process sentences in the batch one by one. - new_scores_buf = torch.zeros((batch_size, 2 * beam_size), device=device) - new_indices_buf = torch.zeros((batch_size, 2 * beam_size), device=device).long() - new_beams_buf = torch.zeros((batch_size, 2 * beam_size), device=device).long() - for sentno, states in enumerate(constraint_states): - scores, indices, beams, new_states = self.step_sentence( - step, - sentno, - lprobs[sentno], - constraint_states[sentno], - beams_buf[sentno].clone(), - indices_buf[sentno].clone(), - scores_buf[sentno].clone(), - ) - new_scores_buf[sentno] = scores - new_indices_buf[sentno] = indices - new_beams_buf[sentno] = beams - self.constraint_states[sentno] = new_states - - return new_scores_buf, new_indices_buf, new_beams_buf - - @torch.jit.export - def step_sentence( - self, - step: int, - sentno: int, - lprobs: Tensor, - constraint_states: List[List[ConstraintState]], - beams_buf: Tensor, - indices_buf: Tensor, - scores_buf: Tensor, - ): - """Does per-sentence processing. Adds all constraints for each - hypothesis to the list of candidates; then removes duplicates, - sorts, and dynamically stripes across the banks. All tensor inputs - are collapsed to those pertaining to a single input sentence. 
- """ - device = lprobs.device - - # STEP 2: Add all constraints for each beam item - for beamno, state in enumerate(constraint_states): - next_tokens = torch.tensor(list(state.next_tokens()), device=device).long() - if next_tokens.numel() != 0: - indices_buf = torch.cat((indices_buf, next_tokens)) - next_beams = ( - torch.tensor(beamno, device=device) - .repeat(next_tokens.size(0)) - .long() - ) - beams_buf = torch.cat((beams_buf, next_beams)) - next_values = lprobs[beamno].take(next_tokens.view(-1)) - scores_buf = torch.cat((scores_buf, next_values)) - - # At the 0th time step, there is just one beam item - if step == 0: - break - - # STEP 3: Compute the "bank" for each candidate. This is the - # number of constraints it's generated. We need this so that - # we can do round-robin allocation of the beam across these - # banks. If C is the number of constraints, we select the best - # item in bank C, then the best in bank C-1, etc, followed by - # the 2nd-best in bank C, the 2nd-best in bank C-1, etc, and so - # on, until the maximum beam size. We accomplish this by - # creating a sort key and striping across the banks. - - # Compute the new states for all candidates - cands_size = indices_buf.size(0) - constraint_states = [ - constraint_states[beams_buf[i]].advance(indices_buf[i]) - for i in range(cands_size) - ] - - banks = torch.tensor([state.bank for state in constraint_states], device=device) - - # STEP 4: Sort - num_constraint_tokens = len(state.tokens) - - # Sort by keys (bank, score) (i.e., sort banks together, and scores - # within banks). AFAIK pytorch doesn't support either stable sort or - # multi-key sorting, so we have to hack this. - MAX_SCORE = -100 - sort_key = (num_constraint_tokens - banks) * MAX_SCORE + scores_buf - sort_values, sort_indices = sort_key.sort(dim=0, descending=True) - scores_buf = scores_buf[sort_indices] - indices_buf = indices_buf[sort_indices] - beams_buf = beams_buf[sort_indices] - banks = banks[sort_indices] - - # Sort the constraints to follow suit - constraint_states = [constraint_states[i] for i in sort_indices] - - # STEP 5: Remove duplicates. The topk calls (overall and - # per-row) plus the per-row generation of constraints will - # produce duplicates. Here we remove them. - - def roll(t): - """Rolls a 1d tensor left by 1. - - [0, 1, 2, 3, 4] becomes [4, 0, 1, 2, 3] - """ - return torch.cat((t[-1].unsqueeze(0), t[0:-1]), dim=0) - - # We map candidates (beam, token_id) to a single dimension. - # This is then shifted by 1. We can then easily identify - # duplicates and create a mask that identifies unique - # extensions. - uniques_mask = beams_buf * (self.vocab_size + 1) + indices_buf - uniques_mask = roll(uniques_mask) != uniques_mask - - # Use the mask to pare down the data structures - scores_buf = torch.masked_select(scores_buf, uniques_mask) - indices_buf = torch.masked_select(indices_buf, uniques_mask) - beams_buf = torch.masked_select(beams_buf, uniques_mask) - banks = torch.masked_select(banks, uniques_mask) - i = 1 - for mask in uniques_mask[1:]: - if not mask: - constraint_states.pop(i) - i += mask - - # STEP 6: Assign IDs round-robin across banks, sort, and - # truncate. Now that the candidates are sorted by (bank, - # score) and uniqed, we dynamically allocate the {beam_size} - # beam by striping across the candidates. These stripes will - # be used as sort keys to do round-robin selection. This is - # accomplished in a single pass with offsets. 
Sorting by - # highest-banks (furthest-along hypotheses) first ensures - # progress through the constraints. - # - # e.g., BANKS: 3 3 3 2 2 2 2 1 1 1 0 0 - # OLD STRIPES: 0 1 2 0 1 2 3 0 1 2 0 1 - # NEW STRIPES: 0 1+4 2+8 0+1 1+5 2+9 3+11 0+2 1+6 2+10 0+3 1+7 - # = 0 5 10 1 6 11 13 2 7 12 3 8 - # - # Sorting by this then gives the following banks: - # - # 3 2 1 0 3 2 1 0 3 2 1 2 - # - # We'll take the top {beam_size} of these. - stripe_offsets = [offset * (len(banks) + 1) for offset in range(len(banks) + 1)] - stripes = torch.zeros_like(banks) - cur_bank_count = -1 - cur_bank = banks[0] - for i, bank in enumerate(banks): - if bank != cur_bank: - cur_bank_count = 0 - cur_bank = bank - else: - cur_bank_count += 1 - stripes[i] = num_constraint_tokens - bank + stripe_offsets[cur_bank_count] - - # STEP 7: Sort by the stripes values - sort_values, sort_indices = stripes.sort(dim=0) - scores_buf = scores_buf[sort_indices] - indices_buf = indices_buf[sort_indices] - beams_buf = beams_buf[sort_indices] - constraint_states = [constraint_states[i] for i in sort_indices] - - # STEP 8: Truncate to the candidates size! - scores_buf = scores_buf[: self.num_cands] - indices_buf = indices_buf[: self.num_cands] - beams_buf = beams_buf[: self.num_cands] - - return scores_buf, indices_buf, beams_buf, constraint_states - - -class LengthConstrainedBeamSearch(Search): - def __init__(self, tgt_dict, min_len_a, min_len_b, max_len_a, max_len_b): - super().__init__(tgt_dict) - self.min_len_a = min_len_a - self.min_len_b = min_len_b - self.max_len_a = max_len_a - self.max_len_b = max_len_b - self.beam = BeamSearch(tgt_dict) - self.needs_src_lengths = True - - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - min_lens = self.min_len_a * self.src_lengths + self.min_len_b - max_lens = self.max_len_a * self.src_lengths + self.max_len_b - lprobs[step < min_lens, :, self.eos] = -math.inf - lprobs[step >= max_lens, :, self.eos] = 0 - return self.beam.step(step, lprobs, scores) - - -class DiverseBeamSearch(Search): - """Diverse Beam Search. - - See "Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence - Models" for details. - - We only implement the Hamming Diversity penalty here, which performed best - in the original paper. 
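    -
    -    At each step, every group after the first subtracts a penalty proportional
    -    to how many times each token has already been selected by the preceding
    -    groups, weighted by diversity_strength.
    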
- """ - - def __init__(self, tgt_dict, num_groups, diversity_strength): - super().__init__(tgt_dict) - self.num_groups = num_groups - self.diversity_strength = -diversity_strength - self.beam = BeamSearch(tgt_dict) - - @torch.jit.export - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - if beam_size % self.num_groups != 0: - raise ValueError( - "DiverseBeamSearch requires --beam to be divisible by the number of groups" - ) - - # initialize diversity penalty - diversity_buf = torch.zeros(lprobs[:, 0, :].size()).to(lprobs) - - scores_G, indices_G, beams_G = [], [], [] - for g in range(self.num_groups): - lprobs_g = lprobs[:, g :: self.num_groups, :] - scores_g = scores[:, g :: self.num_groups, :] if step > 0 else None - - # apply diversity penalty - if g > 0: - lprobs_g = torch.add( - lprobs_g, - other=diversity_buf.unsqueeze(1), - alpha=self.diversity_strength, - ) - else: - lprobs_g = lprobs_g.contiguous() - - scores_buf, indices_buf, beams_buf = self.beam.step( - step, lprobs_g, scores_g - ) - beams_buf.mul_(self.num_groups).add_(g) - - scores_G.append(scores_buf.clone()) - indices_G.append(indices_buf.clone()) - beams_G.append(beams_buf.clone()) - - # update diversity penalty - diversity_buf.scatter_add_( - 1, indices_buf, torch.ones(indices_buf.size()).to(diversity_buf) - ) - - # interleave results from different groups - scores_buf = torch.stack(scores_G, dim=2).view(bsz, -1) - indices_buf = torch.stack(indices_G, dim=2).view(bsz, -1) - beams_buf = torch.stack(beams_G, dim=2).view(bsz, -1) - return scores_buf, indices_buf, beams_buf - - -class Sampling(Search): - sampling_topk: int - sampling_topp: float - - def __init__(self, tgt_dict, sampling_topk=-1, sampling_topp=-1.0): - super().__init__(tgt_dict) - self.sampling_topk = sampling_topk - self.sampling_topp = sampling_topp - - def _sample_topp(self, lprobs): - """Sample among the smallest set of elements whose cumulative probability mass exceeds p. - - See `"The Curious Case of Neural Text Degeneration" - (Holtzman et al., 2019) `_. - - Args: - lprobs: (bsz x input_beam_size x vocab_size) - the model's log-probabilities over the vocabulary at the current step - - Return: A tuple of (trimed_probs, truncated_indices) where: - trimed_probs: (bsz x input_beam_size x ?) - the model's probabilities over the elements selected to sample from. The - width of the third dimension is determined by top-P. - truncated_indices: (bsz x input_beam_size x ?) - the indices of the chosen elements. - """ - probs = lprobs.exp_() - - # sort the last dimension (vocab dimension) in descending order - sorted_probs, sorted_indices = probs.sort(descending=True) - - # compute a mask to indicate the words to be included in the top-P set. - cumsum_probs = sorted_probs.cumsum(dim=2) - mask = cumsum_probs.lt(self.sampling_topp) - - # note that mask was computed by 'lt'. One more word needs to be included - # so that the cumulative probability mass can exceed p. - cumsum_mask = mask.cumsum(dim=2) - last_included = cumsum_mask[:, :, -1:] - last_included.clamp_(0, mask.size()[2] - 1) - mask = mask.scatter_(2, last_included, 1) - - # truncate unnecessary dims. 
- max_dim = last_included.max() - truncated_mask = mask[:, :, : max_dim + 1] - truncated_probs = sorted_probs[:, :, : max_dim + 1] - truncated_indices = sorted_indices[:, :, : max_dim + 1] - - # trim the words that are not in top-P by setting their probabilities - # to 0, so that they would not be sampled later. - trim_mask = ~truncated_mask - trimed_probs = truncated_probs.masked_fill_(trim_mask, 0) - return trimed_probs, truncated_indices - - @torch.jit.export - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam - lprobs = lprobs[:, ::beam_size, :].contiguous() - - if self.sampling_topp > 0: - # only sample from the smallest set of words whose cumulative probability mass exceeds p - probs, top_indices = self._sample_topp(lprobs) - elif self.sampling_topk > 0: - # only sample from top-k candidates - lprobs, top_indices = lprobs.topk(self.sampling_topk) - probs = lprobs.exp_() - else: - probs = lprobs.exp_() - - # dummy data to be consistent with true branch for type check - top_indices = torch.empty(0).to(probs) - # sample - if step == 0: - indices_buf = torch.multinomial( - probs.view(bsz, -1), - beam_size, - replacement=True, - ).view(bsz, beam_size) - else: - indices_buf = torch.multinomial( - probs.view(bsz * beam_size, -1), - 1, - replacement=True, - ).view(bsz, beam_size) - - if step == 0: - # expand to beam size - probs = probs.expand(bsz, beam_size, -1) - - # gather scores - scores_buf = torch.gather(probs, dim=2, index=indices_buf.unsqueeze(-1)) - scores_buf = scores_buf.log_().view(bsz, -1) - - # remap indices if using top-k or top-P sampling - if self.sampling_topk > 0 or self.sampling_topp > 0: - indices_buf = torch.gather( - top_indices.expand(bsz, beam_size, -1), - dim=2, - index=indices_buf.unsqueeze(-1), - ).squeeze(2) - - if step == 0: - beams_buf = indices_buf.new_zeros(bsz, beam_size) - else: - beams_buf = torch.arange(0, beam_size).to(indices_buf).repeat(bsz, 1) - # make scores cumulative - scores_buf.add_( - torch.gather(scores[:, :, step - 1], dim=1, index=beams_buf) - ) - - return scores_buf, indices_buf, beams_buf - - -class DiverseSiblingsSearch(Search): - """ - Beam search with diverse siblings. - - See "A Simple, Fast Diverse Decoding Algorithm for Neural Generation" for details. - https://arxiv.org/abs/1611.08562 - - 1/ Calculate hypotheses for each beam - 2/ Intra-sibling ordering - 3/ Rewrite scores - 4/ Choose top K hypotheses - - if diversity_rate == 0 is equivalent to BeamSearch - """ - - def __init__(self, tgt_dict, diversity_rate): - super().__init__(tgt_dict) - self.diversity_rate = diversity_rate - self.beam = BeamSearch(tgt_dict) - - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - k = min( - # Take the best 2 x beam_size predictions. We'll choose the first - # beam_size of these which don't predict eos to continue with. 
- beam_size * 2, - lprobs.view(bsz, -1).size(1) - 1, # -1 so we never select pad - ) - s_list: List[Tensor] - i_list: List[Tensor] - s_list = [torch.empty(0).to(lprobs) for i in range(beam_size)] - i_list = [torch.LongTensor().to(device=lprobs.device) for i in range(beam_size)] - sibling_score = torch.arange(1, k + 1).to(lprobs) * self.diversity_rate - - if step == 0: - return self.beam.step(step, lprobs, scores) - lprobs.add_(scores[:, :, step - 1].unsqueeze(-1)) - - # 1/ Calculate hypotheses for each beam - for i in range(beam_size): - torch.topk(lprobs[:, i, :].view(bsz, -1), k, out=(s_list[i], i_list[i])) - i_list[i].fmod_(vocab_size) - - # 2/ Intra-sibling ordering by default from topk + 3/ Rewrite scores - s_list[i].sub_(sibling_score) - - # 4/ Choose top K hypotheses - indices = torch.stack(i_list, dim=1).view(bsz, -1) - - final_scores = torch.empty(0).to(lprobs) - final_indices = torch.LongTensor().to(device=lprobs.device) - final_beams = torch.LongTensor().to(device=lprobs.device) - (final_scores, final_indices) = torch.topk( - torch.stack(s_list, dim=1).view(bsz, -1), - k, - ) - - final_beams = final_indices // k - - for i in range(bsz): - final_indices[i] = indices[i][final_indices[i]] - - return final_scores, final_indices, final_beams diff --git a/spaces/srisakthi2821/UcenAiBot/app.py b/spaces/srisakthi2821/UcenAiBot/app.py deleted file mode 100644 index 7bf6de60dfc3c5f8cb1d23c096d6e174c45e3701..0000000000000000000000000000000000000000 --- a/spaces/srisakthi2821/UcenAiBot/app.py +++ /dev/null @@ -1,250 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template ="""Your Name is INFO-Ai.You must tell your name and explains yourself like what you have able to do instantly informations.You are the university college of engineering college's chatAi.You can clear the doubts the college information. -Type of college:Constituent College of Anna University, Chennai. - -College information : - UCE Nagercoil is situated in the heart of Nagercoil city, located in the southern-most tip of Tamil Nadu and India. It's a constituent college of Anna University, Chennai and funded by Tamil Nadu State Government. The institution was established in the year 2009 with a goal of catering to the needs for deserving engineering students by providing quality technical education. - The institution offers under graduate engineering programmes in Civil, CSE, ECE, EEE, IT and Mechanical besides MBA through distance education mode. Research programmes leading to doctoral degree are also offered in the above mentioned departments and also in Chemistry, Physics, Mathematics and English.. - GOALS AND OBJECTIVE:The Goal of Training & Placement cell is to provide Employment Opportunities and training to UCEN students to achieve 100% placement for students through dedication, attitude and complete involvement is our mission. Placement Cell arranges and coordinates various programmes that aim at moulding the students so as to meet the industry expectations in career building and in turn bring laurels to the parent institution. Professional Trainers are arranged for Personality Development, Career Development Training like Aptitude Training, Motivation Training and Group Discussions to enable the students to face Technical and HR round in campus interviews. 
- VISION: To be an excellent institution in the state and country imparting technical education, providing need based, value based and career based programes and producing self-reliant, self-sufficient technocrafts, capable of meeting new challenges and technology development..for more about about this college in this website https://ucen.ac.in/ ,and answer those questions by refering that website. - Counselling code: 4023 - -College fees structure: - for under-graduate students: - In general RS.28,810-RS.30,000. - In SC/ST RS.22,810-RS.24,000. - for post graduate students: - for general RS.31,460-RS.44,375. - In SC/ST RS.22,990-RS.28,375. - -Dean: - Dr.V.A.NAGARAJAN , Ph.D,[2] - -Former Deans - Dr. T.V.S.Pillai, Ph.D., is the Dean i/c. - -Library: - UCEN's library has a collection of handpicked necessary books and magazines. - The library has computers connected to the internet with sufficient bandwidth for the utilization of staff and students. - -Ranking: - UCEN is ranked #7 among 497 Non-Autonomous Engineering institutes ( Not Architectural Colleges) affiliated to Anna University. The ranking is based on pass percentage of students during April–May 2013 University exams.[3] - contact information: - deanucen@gmail.com - Call:04652-260511 - Our Location:Dean, University College of Engineering Nagercoil,Konam, Nagercoil, Kanya Kumari District,TamilNadu- 629 004 - - -The Placement information: - DR. S.VICTOR - TRAINING & PLACEMENT OFFICER - Assistant Professor (Sr.Gr) / MBA - DR.S. JEBA ANANDH - ASSISTANT TRAINING & PLACEMENT OFFICER - Teaching Fellow / CSE - DR.M.VENKATESAN - LIAISON MEMBER/MECH - MRS.M.SUBHA - LIAISON MEMBER/ECE - MRS.T. VIVEKA - LIAISON MEMBER/CSE - MS.I.STEPHIE RACHEL - LIAISON MEMBER/IT - DR. C. MYTHILI LIAISON MEMBER/EEE -Last year Students placed informations(2022-2023): - Above 100 students were placed from ucen in top companies like(TCS,TATA,Accenture...etc) - -Hostel: - ESTABLISHMENT OF UCEN HOSTELS - University College of Engineering Nagercoil (UCEN) Hostels was established to render outstanding services for the welfare of students. The Hostel not only believes in transparent administration but also in establishing sound systems and procedures and implementation of the same to achieve the goal. Over the period of time, the UCEN Hostels has established such systems, procedures and rules for an effective administration. The UCEN Hostels is established for the welfare of the students and is under the direct control of the Institution. UCEN Hostels comprises of 4 blocks (located at UCEN Campus), out of which boys are accommodated in 2 Blocks and Girls in 2 Blocks. All the blocks are named as the Tamil geographical thinais, Girls Hostel Block name: Kurinji and Mullai; Boys Hostel Block name: Marutham and Neithal. - DR. V. A. NAGARAJAN, M.E., PH.D - DEAN/WARDEN - University College of Engineering, Nagercoil - DR. M. EDWIN, M.E., PH.D - EXECUTIVE WARDEN - University College of Engineering, Nagercoil - DR. C. JUSTIN DHANARAJ, M.SC., PH.D - DEPUTY WARDEN (BOYS HOSTEL) - University College of Engineering, Nagercoil - DR. R. BHARATHI, M.E., PH.D - DEPUTY WARDEN (GIRLS HOSTEL) - University College of Engineering, Nagercoil - -MESS TIMINGS: - Breakfast - 7.30 AM-8.45 AM - Lunch - 12.30 PM-1.45 PM - Dinner - 7.15 PM-8.15 PM - -DISCIPLINE in hostel: - Hostel residents shall not issue orders to hostel employees or interfere in their work. Misconduct of hostel employees shall be reported to the Executive Warden with full particulars. 
- Hostel residents are requested not to tip any employees of the hostel. - Hostel residents are not allowed to put up notices or convene meeting or take out procession of any sort within the hostel area. - Hostel residents are instructed to maintain silence and not to create any sort of disturbance such as playing music, creating noise etc. between 9.00 p.m. to 6.00 a.m. - Consumption and / or possession of toxic drinks or drugs within the hostel and Institute campus are strictly prohibited. Any Resident entering the hostel after consuming of toxic drinks or drugs outside the campus is also prohibited. Any resident violating this rule will be expelled from the hostel. -Study Hours in Hostel(must be followed everyday): - Night 8.00PM to 10.00PM - - -Department-Information(under-graduate) - -Information_technology(IT): - The information technology department can have professional teaching staffs and more enjoyment with studies in convinent ways in that dept.Mostly in this dept moreover 60%-70% of students were placed in every final year year. - Information technology faculties: - Head of the department: DR.J.Banumathi. - Asst prof:Dr.A.Radhakrishnan - Asst professor:Dr.J.vijila. - Teaching Fellow: Ms.I.Stephie Rachel,Ms.S.Brintha Asha,Ms.Sonia Robet,Ms.Jasmine Shiney. - -Computer science and engineering(cse): - The cs department has a aknowledgement ways of teaching staffs.The department has well qualified faculty members with an intake of 120 students and it offers subjects relevant to current industrial needs. - The students are taken to industrial visits and they undergo in-plant training in reputed organizations. - Duration - 4 Years(Regular)/ 3 Years(Lateral Entry) - No Of Semesters - 8 Semeters(Regular)/ 6 Semeters(Lateral Entry) - Eligibility - 10 +2 System Of Education. Must Have Secured A Pass In Physics, Chemistry And Mathematics In The Qualifying Examination. - Scope For Higher Studies - M.E./M.Tech./M.B.A./M.S. - [Faculties for cse]: - Dr.M.Muthuselvi Assistant Professor & H.O.D - Dr. K.L.Neela Assistant Professor - Dr.K.Ramesh Asst.Prof (Sr.Gr) - Ms.T.viveka Teaching fellow - Dr.S.JEBA ANANDH Teaching fellow - Mr.M.Ramadass Teaching fellow - Mr.R.Narendran Teaching fellow - -Civil Engineering: - The Department of Civil Engineering was established in the year 2009 and it is a part of the institute since its inception. It offers undergraduate courses. The Department runs successfully with highly qualified, knowledgeable and energetic faculties. - Duration: - 4 Years(Regular)/ 3 Years(Lateral Entry) - No Of Semesters - 8 Semeters(Regular)/ 6 Semeters(Lateral Entry) - Eligibility - 10 +2 System Of Education. Must Have Secured A Pass In Physics, Chemistry And Mathematics In The Qualifying Examination. - Scope For Higher Studies - M.E./M.Tech./M.B.A./M.S. - [Faculties for civil]: - Dr.R.Ninija Merina Assistant Professor & H.O.D - Asst Proffesors:Dr.K.Dhanalakshmi,Dr.S.Judes Sujatha,Dr.S.Sahaya Vasanthi. - Teaching Fellows:Mr.N.Subash,Mr.A.Krishna Prakash,Mr.A.Pon Yesu Raja,Ms.S.Gogila Devi,Mr.S.Jhon Basil. - -Mechanical Department: - Mechanical Engineering is one of the major activities in the engineering profession and its principles are involved in the design, study, development and construction of nearly all of the physical devices and systems. 
Continued research and development have led to better machines and processes helping the mankind - Duration - 4 Years(Regular)/ 3 Years(Lateral Entry) - No Of Semesters - 8 Semeters(Regular)/ 6 Semeters(Lateral Entry) - Eligibility - 10 +2 System Of Education. Must Have Secured A Pass In Physics, Chemistry And Mathematics In The Qualifying Examination. - Scope For Higher Studies - M.E./M.Tech./M.B.A./M.S. - faculties: - Hod:Dr.N.Saravanan Asst prof &Hod - Dr.V.A.Nagarajan,Ph.D.Asst.Prof.&Dean i/c - Asst Prof:Dr.U.Arunachalam,Dr.M.Edwin,Dr.P.Arul Franco,Dr.M.S.Starvin,Dr.G.R.jinu,Dr.S.Suresh,Dr.M.Venkatesan,Dr.G.Arun Vijay,Dr.S.Sheeju Selva Roji,Mr.T.R.Kannan,Mr.S.Muthukumar,Mr.E.RajaSherin. - -Electronics and communication engineering(ece): - The Department of Electronics and Communication Engineering was established in the year 2009. The dedicated team of staffs of this department involve in the creation of future Electronics and Communication Engineers with innovative ideas and creativity. - Duration - 4 Years(Regular)/ 3 Years(Lateral Entry) - No Of Semesters - 8 Semeters(Regular)/ 6 Semeters(Lateral Entry) - Eligibility - 10 +2 System Of Education. Must Have Secured A Pass In Physics, Chemistry And Mathematics In The Qualifying Examination. - Scope For Higher Studies - M.E./M.Tech./M.B.A./M.S. - Faculties: - asst prof:Mr.S.Shahul Hameed Shabeer,Dr.R.Bharathi,Mrs.M.subha. - teaching fellow:Mr.J.Arun Prem Shanth,Ms.I.Jayagayathri,Mr.N.Bathlin Nelmin,Mrs.P.Arul Sindhia,Mrs.C.Ajitha. - -Electrical and electronics engineering: - The Department of Electrical and Electronics Engineering was established in the year 2015. The Department is vibrant in teaching and research of design, development and operation of electrical systems.Technical areas within the EEE discipline include Electromagnetic, Electronics, Power Systems, Control Systems, Instrumentation Systems, Digital Systems, Power Electronics, Signal Processing and Communications - Duration - 4 Years(Regular)/ 3 Years(Lateral Entry) - No Of Semesters - 8 Semeters(Regular)/ 6 Semeters(Lateral Entry) - Eligibility - 10 +2 System Of Education. Must Have Secured A Pass In Physics, Chemistry And Mathematics In The Qualifying Examination. - Scope For Higher Studies - M.E./M.Tech./M.B.A./M.S. - Department faculties: - hod:Dr.S.Sahaya Elsi. - Asst Professors:Dr.T.Sree Renga Raja,Dr.C.Mythili. - -Science and humanites: - Science and Humanities department is combination of Maths, Physics, Chemistry and English. Department of Mathematics provide strong Mathematical background to Engineering Graduates to cope up with the needs of emerging technology at National and International levels and undertake significant research projects individually and in collaboration with other Departments / Institutions / Industries. - Dr.T.V.Sivasubramonia pillai - Dr.P.Titus,Asst professor &hod/Maths - Dr.s.Petchimuthu asst prof/maths - Dr.K.Selva Kumar asst prof/maths - Dr.J.Vernold Vivin asst prof/maths - Dr.R.Surendran asst prof/Physics - Mr.C.Vincent Jerin asst prof/Physics - Dr.S.Athimoolam asst prof/Physics - Dr.K.p.Vinod Kumar asst prof/Chemistry - Dr.C.Mahendran asst prof/Chemistry - Dr.R.Latha Devi asst prof/English - teaching fellow/maths:Dr.S.Padma Vijaya,Mrs.P.Kamalamma - teaching fellow/english:Dr.B.C.Anish Krishnan Nayar,Mr.A.Kingsley jesu abel. 
- -Department information(Post-graduate): - The Post Graduate Course Master of Business Administration was established in the year 2022 in UCEN.The MBA course focuses on management education more than just business management in response to the rapidly changing economic environment and the globalized business, the department has made sustained efforts to impart quality education to meet the needs of corporate techno world. - Duration: - 2 Years (4 Semesters) - Admission Procedure - Prospective candidates go through the eligibility based admission process. Admission to this institute is based on the performance of the candidate in the TENCET (Tamil Nadu Common Entrance Test) & Tamilnadu MBA Counseling conducted by DOTE, Tamilnadu at GCT, Coimbatore. - Eligibility - A Pass in a recognized Bachelor's Degree of minimum 3 years duration and obtained at least 50% (45% in case of candidates belonging to reserved category – BC / BCM / MBC & DNC / SC / SCA / ST) in the qualifying Degree examination. - In case of B.E./ B.Tech. / Diploma courses, in addition to regular mode of study, Lateral Entry and Part time modes are also considered to be eligible. - Faculty members: - Dr.S.Victor Asst.Prof(Sr.Gr) - Dr.R.Banumathi Asst.Prof(Sr.Gr) - Dr.G.Linta Shalini visitingFaculty - Dr.J.Sheela Samuel Visiting faculty. - -The physical educational coordinator:MR.D.SaravanaMoorthy. - For ground: - The ground size is 800m in the ucen college and has multiple sports events in every year of competition in the college. - The sports are:kabaddi,cricket,football,volleyball,shuttle cork,hand ball,kho-kho. - In atheletics:running,jumping and throwing events. - Indoor games:carrom,chess,table tennis,etc... -Info AI developed by universtiy college of engineering nagercoil student. - - -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response, examples=["what about INFO-AI ?","What about the college ?","What about the placement information?","College's contact information","Where is this college is situated ?"]) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. 
diff --git a/spaces/stamps-labs/stamp2vec/pipelines/feature_extraction/vits8.py b/spaces/stamps-labs/stamp2vec/pipelines/feature_extraction/vits8.py deleted file mode 100644 index c20d9b1e52319901547063abb8d181461feabb20..0000000000000000000000000000000000000000 --- a/spaces/stamps-labs/stamp2vec/pipelines/feature_extraction/vits8.py +++ /dev/null @@ -1,27 +0,0 @@ -import torch -from torchvision import transforms -from huggingface_hub import hf_hub_download - -class Vits8Pipeline: - def __init__(self): - self.device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.model = None # Initialized upon loading torchscript - self.transform = transforms.ToTensor() - - @classmethod - def from_pretrained(cls, model_path_hf: str = None, filename_hf: str = "weights.pt", local_model_path: str = None): - vit = cls() - if model_path_hf is not None and filename_hf is not None: - vit.model = torch.jit.load(hf_hub_download(model_path_hf, filename=filename_hf), map_location='cpu') - vit.model.to(vit.device) - vit.model.eval() - elif local_model_path is not None: - vit.model = torch.jit.load(local_model_path, map_location='cpu') - vit.model.to(vit.device) - vit.model.eval() - return vit - - def __call__(self, image) -> torch.Tensor: - image = image.convert("RGB") - img_tensor = self.transform(image).to(self.device).unsqueeze(0) - return self.model(img_tensor)[0].detach().cpu() \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Alastair Reynolds Century Rain Epub WORK.md b/spaces/stomexserde/gpt4-ui/Examples/Alastair Reynolds Century Rain Epub WORK.md deleted file mode 100644 index c937e4507084b6cb2b1c664ca4f38a63fdb6cdca..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Alastair Reynolds Century Rain Epub WORK.md +++ /dev/null @@ -1,20 +0,0 @@ -
            -

            How to Download Alastair Reynolds Century Rain Epub for Free

            -

            If you are a fan of science fiction, space opera, mystery and time travel, you might be interested in reading Century Rain, a novel by Alastair Reynolds. This book is set in a future where Earth has been devastated by a nanotechnology disaster, and a group of archaeologists discover a wormhole that leads to an alternate version of Earth in the mid-twentieth century. There, they must find and stop a device that threatens to destroy both worlds.

            -

            Alastair Reynolds Century Rain Epub


            Download File ———>>> https://urlgoal.com/2uI6xg



            -

            But how can you get your hands on this amazing book without paying anything? Well, there are some ways to download Alastair Reynolds Century Rain Epub for free from the internet. Here are some of them:

            -
              -
            • Use a search engine of shadow libraries. These are websites that host millions of books, papers, comics and magazines that you can download for free. Some examples are Z-Library, Library Genesis and Sci-Hub. You can use Anna's Archive (https://annas-archive.org) to search across these libraries and find the Epub file of Century Rain. Just type the title and author in the search box and click on the download options.
            • -
            • Use an IPFS gateway. IPFS stands for InterPlanetary File System, and it is a peer-to-peer network that allows you to access files that are stored by other users around the world. You can use an IPFS gateway to download Century Rain Epub from the IPFS network. You can find the link to the file on Anna's Archive, or use this hash: ed46e5129cea0d146b5c36caf8682040. Then, paste it on an IPFS gateway website, such as ipfs.io or cloudflare-ipfs.com, and click on the download button.
            • -
            • Use a Tor browser. Tor is a software that allows you to browse the internet anonymously and access websites that are blocked or censored by your government or ISP. You can use a Tor browser to access Z-Library on Tor, which is a hidden service that hosts Century Rain Epub and many other books. You can download the Tor browser from https://www.torproject.org/, and then use this link to access Z-Library on Tor: http://zlibraryexau2g3p.onion/book/ed46e5129cea0d146b5c36caf8682040.
            • -
            -

These are some of the ways to download Alastair Reynolds Century Rain Epub for free. However, please be aware that these methods may not be legal or safe in your country or region. Always be careful when downloading files from the internet, and make sure you have up-to-date antivirus software and a firewall. Also, if you enjoy reading Century Rain, please consider supporting the author by buying his other books or leaving a review on Goodreads or other platforms.

            - -

            What are the Reviews of Alastair Reynolds Century Rain Epub?

            -

            Now that you know how to download Alastair Reynolds Century Rain Epub for free, you might be wondering what other readers think of this book. Is it worth your time and attention? Well, according to Goodreads, Century Rain has an average rating of 3.95 out of 5 stars, based on 10,108 ratings and 524 reviews. That's pretty impressive for a science fiction novel that combines so many different genres and elements.

            -

            Most reviewers praise the book for its original and complex plot, its rich and detailed world-building, its engaging and realistic characters, its fast-paced and thrilling action scenes, its clever and surprising twists, and its satisfying and emotional ending. They also appreciate the author's skillful writing style, his scientific background and knowledge, his homage to classic noir fiction and jazz music, and his exploration of themes such as identity, history, morality, technology, and destiny.

            -

            -

            Some reviewers criticize the book for its slow and confusing start, its excessive length and descriptions, its occasional info-dumps and techno-babble, its implausible coincidences and contrivances, its underdeveloped romance and villains, and its abrupt and ambiguous epilogue. They also find some parts of the book boring, predictable, or irrelevant to the main story.

            -

            Overall, Century Rain is a highly recommended book for fans of science fiction and mystery who are looking for a unique and immersive reading experience. It is a standalone novel that does not require any prior knowledge of the author's other works. However, if you enjoy Century Rain, you might want to check out Alastair Reynolds' other books, such as Revelation Space, House of Suns, Pushing Ice, or The Prefect.

            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Alberto Gomes Moedas Portuguesas Pdf Download LINK.md b/spaces/stomexserde/gpt4-ui/Examples/Alberto Gomes Moedas Portuguesas Pdf Download LINK.md deleted file mode 100644 index 1922c33cf08927529231451fafde36bf8b54f1b5..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Alberto Gomes Moedas Portuguesas Pdf Download LINK.md +++ /dev/null @@ -1,29 +0,0 @@ - -

            Alberto Gomes Moedas Portuguesas PDF Download: A Comprehensive Guide to Portuguese Coins

            -

            If you are interested in learning more about the history and value of Portuguese coins, you may want to download the PDF version of Alberto Gomes' book Moedas Portuguesas. This book is considered one of the most authoritative and comprehensive works on the subject, covering more than 1000 years of coinage from the Visigoths to the Euro.

            -Cover of Moedas Portuguesas by Alberto Gomes -

            Who is Alberto Gomes?

            -

            Alberto Gomes is a Portuguese numismatist, historian, and author who has dedicated his life to the study of Portuguese coins. He was born in 1937 in Porto and graduated in economics from the University of Porto. He started collecting coins at a young age and soon became fascinated by their historical and artistic significance. He has published several books and articles on Portuguese numismatics, as well as catalogues and guides for collectors and dealers. He is also the founder and president of the Portuguese Numismatic Society.

            -

            alberto gomes moedas portuguesas pdf download


            Download Zip ••• https://urlgoal.com/2uIaIw



            -

            What is Moedas Portuguesas?

            -

            Moedas Portuguesas is a book that Alberto Gomes first published in 1987 and has updated several times since then. It is a comprehensive catalogue of all the coins minted in Portugal or by Portuguese authorities abroad from the 8th century to the present day. It includes detailed descriptions, illustrations, and prices of each coin, as well as historical and technical information. It also covers the coins of the former Portuguese colonies, such as Brazil, Angola, Mozambique, India, Macau, Timor, etc.

            -

            Why download the PDF version?

            -

            The PDF version of Moedas Portuguesas is a convenient and affordable way to access this valuable resource. You can download it instantly from our website and read it on your computer, tablet, or smartphone. You can also print it out or save it on a USB drive for offline use. The PDF version has the same content as the printed version, but with some advantages:

            -
              -
            • It is cheaper than buying the hardcover book.
            • -
            • It is easier to search and navigate through the pages.
            • -
            • It is updated regularly with new discoveries and market trends.
            • -
            • It has interactive features such as hyperlinks, bookmarks, and zooming.
            • -
            -

            How to download it?

            -

            To download the PDF version of Moedas Portuguesas, you just need to follow these simple steps:

            -
              -
            1. Click on the link below to go to our secure payment page.
            2. -
            3. Choose your preferred payment method (credit card, PayPal, etc.) and complete the transaction.
            4. -
            5. You will receive an email with a download link and a password to access the PDF file.
            6. -
            7. Click on the link and enter the password to download the file to your device.
            8. -
            9. Enjoy reading Moedas Portuguesas!
            10. -
            -

            Download Moedas Portuguesas PDF Now!

            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Edius Failed To Initialize Skin !!TOP!!.md b/spaces/stomexserde/gpt4-ui/Examples/Edius Failed To Initialize Skin !!TOP!!.md deleted file mode 100644 index 7efee60b53224865a3eb70c5319f5db886c09c07..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Edius Failed To Initialize Skin !!TOP!!.md +++ /dev/null @@ -1,203 +0,0 @@ -
            -

            How to Fix Edius Error: Failed to Initialize Skin

            -

            If you are a video editor who uses Edius, you may have encountered an error message that says Failed to initialize skin when you try to launch the software. This error can prevent you from accessing and editing your projects, and it can be very frustrating. Fortunately, there are some possible solutions that you can try to fix this error and get back to your work. In this article, we will explain what Edius is, why it may show this error, and how to troubleshoot and solve it.

            -

            Edius Failed To Initialize Skin


            Download ⇒⇒⇒ https://urlgoal.com/2uI8nw



            -

            What is Edius and why it may show this error

            -

Edius is a popular video editing program

            -

            Edius is a non-linear editor (NLE) that works with most modern video formats. The software is capable of 3D editing, multicam editing, HDR editing, and more. It is used by professionals and enthusiasts alike for various purposes, such as filmmaking, video journalism, YouTube production, broadcasting, and education. Edius is known for its stability, performance, and creative tools.

            -

            The error may occur due to corrupted or missing files, incompatible settings, or malware infection

            -

The Failed to initialize skin error may occur for several reasons, such as:

            -
              -
            • Corrupted or missing files in the Edius installation folder or the Windows system folder. This can happen due to improper installation, uninstallation, or update of Edius or other programs.
            • -
            • Incompatible settings in the Edius preferences or the Windows registry. This can happen due to changes in the screen resolution, display scaling, color scheme, or other system settings.
            • -
            • Malware infection in the Edius executable file or the Windows system files. This can happen due to downloading or opening malicious files from untrusted sources.
            • -
            -

            These reasons can cause Edius to fail to load the skin files that are responsible for the appearance and functionality of the user interface. As a result, the error message will pop up and prevent you from using the software.

            -

            How to troubleshoot and solve the error

            -

            Depending on the cause of the error, there are different methods that you can try to fix it. Here are some of the most common and effective ones:

            -

            Method 1: Reinstall Edius

            -

            One of the simplest ways to fix the error is to reinstall Edius. This will replace any corrupted or missing files in the installation folder and restore the default settings. To reinstall Edius, follow these steps:

            -
              -
            1. Close Edius if it is running.
            2. -
            3. Go to Control Panel > Programs > Programs and Features.
            4. -
            5. Select Edius from the list of installed programs and click Uninstall.
            6. -
            7. Follow the on-screen instructions to complete the uninstallation process.
            8. -
            9. Restart your computer.
            10. -
            11. Download the latest version of Edius from its official website or insert the installation disc if you have one.
            12. -
            13. Run the setup file and follow the on-screen instructions to complete the installation process.
            14. -
            15. Launch Edius and check if the error is gone.
            16. -
            -

            Method 2: Run Edius as administrator

            -

            Sometimes, the error may occur due to insufficient permissions for Edius to access or modify certain files or settings. To solve this problem, you can try running Edius as administrator. This will grant Edius full access rights to the system resources and avoid any permission issues. To run Edius as administrator, follow these steps:

            -

            -
              -
            1. Right-click on the Edius shortcut icon on your desktop or in the Start menu.
            2. -
            3. Select Properties from the context menu.
            4. -
            5. Go to the Compatibility tab.
            6. -
            7. Check the box next to Run this program as an administrator.
            8. -
            9. Click Apply and then OK.
            10. -
            11. Double-click on the Edius shortcut icon to launch the software and check if the error is gone.
            12. -
            -

            Method 3: Update your graphics driver

            -

            The error may also occur due to an outdated or incompatible graphics driver. The graphics driver is a software component that enables your computer to communicate with your graphics card and display the video output. If the driver is not up to date or compatible with your system or Edius, it may cause conflicts or errors. To fix this problem, you can try updating your graphics driver to the latest version. To update your graphics driver, follow these steps:

            -
              -
            1. Press Windows + R keys on your keyboard to open the Run dialog box.
            2. -
            3. Type devmgmt.msc and click OK. This will open the Device Manager window.
            4. -
            5. Expand the Display adapters category and right-click on your graphics card name.
            6. -
            7. Select Update driver from the context menu.
            8. -
            9. Select Search automatically for updated driver software.
            10. -
            11. Wait for Windows to search for and install the latest driver for your graphics card.
            12. -
            13. Restart your computer and launch Edius to check if the error is gone.
            14. -
            -

            Method 4: Scan your system for malware

            -

The error may also occur due to a malware infection in your system. Malware is malicious software that can harm your computer or steal your data. Some malware can infect or modify the Edius executable file or the Windows system files, causing errors or crashes. To fix this problem, you can try scanning your system for malware and removing any threats. To scan your system for malware, follow these steps:

            -
              -
            1. Download and install a reputable antivirus or anti-malware software, such as Malwarebytes, Norton, or Bitdefender.
            2. -
            3. Run the software and perform a full scan of your system.
            4. -
            5. Wait for the scan to complete and review the results.
            6. -
            7. Delete or quarantine any detected malware threats.
            8. -
            9. Restart your computer and launch Edius to check if the error is gone.
            10. -
            -

            Method 5: Contact Edius support

            -

            If none of the above methods work, you may need to contact Edius support for further assistance. Edius support can help you diagnose and resolve the error, or provide you with alternative solutions. To contact Edius support, follow these steps:

            -
              -
            1. Go to the Edius official website and click on Contact Us.
            2. -
            3. Select your region and country from the drop-down menus.
            4. -
            5. Select Tech Support from the list of options.
            6. -
            7. Fill out the form with your name, email address, product name, serial number, operating system, and a brief description of your problem.
            8. -
            9. Click Submit.
            10. -
            11. Wait for a response from Edius support via email or phone.
            12. -
            -

            Conclusion

            -

            The Failed to initialize skin error is a common issue that many Edius users face. It can prevent you from using the software and editing your videos. However, there are some possible solutions that you can try to fix this error and get back to your work. In this article, we have explained what Edius is, why it may show this error, and how to troubleshoot and solve it. We hope that this article has helped you resolve the error and enjoy using Edius.

            -

            Frequently Asked Questions (FAQs)

            -

            Q1: What are the system requirements for Edius?

            -

            Q2: How can I update Edius to the latest version?

            -

            Q3: How can I backup and restore my Edius projects?

            -

            Q4: How can I customize the Edius skin and interface?

            -

            Q5: How can I get more help and resources for Edius?

            -

            A1: What are the system requirements for Edius?

            -

            The minimum system requirements for Edius are as follows:

            - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Component | Requirement
Operating system | Windows 10 64-bit (version 1809 or later)
CPU | Any Intel Core 2 or Core iX CPU; any Intel or AMD CPU with SSSE3
Memory | 8 GB RAM minimum (16 GB or more recommended)
Hard disk space | 6 GB for installation; 10 GB or more for working space
Graphics card | Supporting higher resolution than 1024x768 32-bit; Direct3D 9.0c or later and PixelShader Model 3.0 or later required; supporting hardware mode with 256 MB of graphics memory or more recommended; NVIDIA GeForce GTX/RTX series recommended for HDR projects and 8K projects
Sound card | A sound card with WDM driver support is required
Optical drive | Blu-ray Disc writer is required when creating Blu-ray Discs; DVD-R/RW or DVD+R/RW drive is required when creating DVDs; a CD-R/RW drive is required when creating CDs
Internet connection | An Internet connection is required for software license activation and validation, software update, and user registration.
            -

            The recommended system requirements for Edius are as follows:

            - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

            A2: How can I update Edius to the latest version?

            -

            To update Edius to the latest version, you can use the Edius Update Manager. This is a tool that automatically checks for and installs the latest updates for Edius. To use the Edius Update Manager, follow these steps:

            -
              -
            1. Launch Edius and go to Help > Check for Updates....
            2. -
            3. The Edius Update Manager will open and scan for available updates.
            4. -
            5. If there are any updates, select them and click Download and Install.
            6. -
            7. Wait for the updates to download and install.
            8. -
            9. Restart Edius and enjoy the new features and improvements.
            10. -
            -

            A3: How can I backup and restore my Edius projects?

            -

            To backup and restore your Edius projects, you can use the Edius Project Backup Tool. This is a tool that allows you to save and load your project files, settings, media files, and other data in a single archive file. To use the Edius Project Backup Tool, follow these steps:

            -
              -
            1. To backup your project, go to File > Backup Project....
            2. -
            3. Select a destination folder and a file name for the backup file.
            4. -
            5. Select the items that you want to include in the backup file, such as project file, settings, media files, etc.
            6. -
            7. Click Backup.
            8. -
            9. Wait for the backup process to complete.
            10. -
            11. To restore your project, go to File > Restore Project....
            12. -
            13. Select the backup file that you want to restore from.
            14. -
            15. Select a destination folder for the restored project.
            16. -
            17. Select the items that you want to restore from the backup file, such as project file, settings, media files, etc.
            18. -
            19. Click Restore.
            20. -
            21. Wait for the restore process to complete.
            22. -
            23. Open the restored project in Edius and continue your work.
            24. -
            -

            A4: How can I customize the Edius skin and interface?

            -

            To customize the Edius skin and interface, you can use the Edius Layouter. This is a tool that allows you to change the appearance and layout of the Edius user interface. To use the Edius Layouter, follow these steps:

            -
              -
            1. Go to View > Layouter....
            2. -
            3. The Edius Layouter will open and show you the current layout of the user interface.
            4. -
            5. You can drag and drop the elements of the user interface, such as windows, panels, buttons, menus, etc., to change their position, size, and visibility.
            6. -
            7. You can also right-click on any element and select from various options, such as docking, floating, hiding, locking, etc.
            8. -
            9. You can also change the color scheme of the user interface by clicking on the Color button at the bottom of the Edius Layouter window.
            10. -
            11. You can save your customized layout by clicking on the Save button at the bottom of the Edius Layouter window.
            12. -
            13. You can load your saved layout by clicking on the Load button at the bottom of the Edius Layouter window.
            14. -
            15. You can reset your layout to the default by clicking on the Reset button at the bottom of the Edius Layouter window.
            16. -
            17. Click OK to apply your changes and close the Edius Layouter window.
            18. -
            -

            A5: How can I get more help and resources for Edius?

            -

            If you need more help and resources for Edius, you can visit the following websites:

            -
              -
            • The Edius official website has a lot of information and features, such as product overview, specifications, downloads, tutorials, news, events, etc.
            • -
            • The Edius support website has a lot of resources and services, such as manuals, FAQs, updates, patches, drivers, plugins, contact information, etc.
            • -
            • The Edius forum is a place where you can interact with other Edius users and experts, ask questions, share tips, showcase your work, etc.
            • -
            • The Edius YouTube channel has a lot of videos that demonstrate and explain various aspects and functions of Edius.
            • -

            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Harold Rosenberg The Tradition Of The New Pdf Files.md b/spaces/stomexserde/gpt4-ui/Examples/Harold Rosenberg The Tradition Of The New Pdf Files.md deleted file mode 100644 index 6de6655fe1941be48594e45d5f62dfc8e4c08600..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Harold Rosenberg The Tradition Of The New Pdf Files.md +++ /dev/null @@ -1,30 +0,0 @@ -
            -

            Harold Rosenberg: The Tradition of the New and the American Action Painters

            - -

            Harold Rosenberg (1906-1978) was one of the most influential American art critics of the twentieth century. He is best known for coining the term "action painting" to describe the work of abstract expressionists such as Jackson Pollock, Willem de Kooning, and Franz Kline. In his book The Tradition of the New (1959), Rosenberg collected his essays on art, poetry, culture, and politics, offering a unique perspective on the development of modernism in America and beyond.

            - -

            In this article, we will explore some of the main themes and arguments of Rosenberg's book, focusing on his seminal essay "The American Action Painters". We will also examine how Rosenberg's ideas influenced and challenged the artists and critics of his time, as well as how they remain relevant today.

            -

            Harold Rosenberg The Tradition Of The New Pdf Files


Download ⇒ https://urlgoal.com/2uIc4l



            - -

            The Tradition of the New

            - -

            Rosenberg's book The Tradition of the New is divided into four parts: "American Painting Today", "The Profession of Poetry", "War of Phantoms", and "The Herd of Independent Minds". In each part, Rosenberg analyzes various aspects of cultural and artistic production in the twentieth century, from the rise of abstract expressionism to the role of poetry in society, from the impact of World War II to the crisis of individuality in mass culture.

            - -

            Rosenberg's main thesis is that modern art is defined by its novelty, not by its formal qualities or its historical context. He argues that modern artists are constantly seeking new ways of expression, new modes of action, new forms of communication. He writes: "If the sole thing of significance for modern art is the novelty of the work, and it is defined not by an analysis but by its social power and didactic value, then the avant-garde artist will exist in a milieu completely indifferent to the content of his work."[^2^]

            -

            - -

            Rosenberg also criticizes the prevailing theories and methods of art criticism, especially those of Clement Greenberg, his rival and colleague at Art News. Rosenberg accuses Greenberg of reducing art to a set of formal criteria and historical determinants, ignoring the creative process and the human dimension of art. He writes: "The critic who starts out to study a painting by measuring it against a preconceived 'idea' will end up by interpreting all paintings as illustrations to that idea."[^1^]

            - -

            Rosenberg proposes a different approach to art criticism, one that is based on the experience and interpretation of the individual viewer. He writes: "The critic who would understand a work must place himself at that point in history where he can experience it as something new."[^1^]

            - -

            The American Action Painters

            - -

            The most famous and influential essay in Rosenberg's book is "The American Action Painters", first published in Art News in 1952. In this essay, Rosenberg introduces the term "action painting" to describe the work of a group of American abstract expressionists who emerged after World War II. He writes: "At a certain moment the canvas began to appear to one American painter after another as an arena in which to act - rather than as a space in which to reproduce, re-design, analyze or 'express' an object, actual or imagined. What was to go on the canvas was not a picture but an event."[^3^]

            - -

            Rosenberg argues that action painting is not a style or a movement, but a mode of expression that reflects the existential condition of modern man. He writes: "The gesture on the canvas was a gesture of liberation from Value - political, aesthetic, moral."[^3^]

            - -

            Rosenberg also emphasizes the importance of improvisation and spontaneity in action painting. He writes: "The act-painting is of the same metaphysical substance as the artist's existence. The new painting has broken down every distinction between art and life."[^3^]

            - -

            Rosenberg's essay had a profound impact on both artists and critics. Some artists embraced his concept of action painting as a validation of their work and their vision. Others rejected it as a misrepresentation or a limitation of their art. Some critics praised Rosenberg's essay as a brilliant insight into the nature and meaning of modern art. Others criticized it as

            -
            -
            \ No newline at end of file diff --git a/spaces/sub314xxl/MetaGPT/metagpt/prompts/generate_skill.md b/spaces/sub314xxl/MetaGPT/metagpt/prompts/generate_skill.md deleted file mode 100644 index fd950c1439866572b5b65b68399f8e06bde18783..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/prompts/generate_skill.md +++ /dev/null @@ -1,76 +0,0 @@ -你是一个富有帮助的助理,可以帮助撰写、抽象、注释、摘要Python代码 - -1. 不要提到类/函数名 -2. 不要提到除了系统库与公共库以外的类/函数 -3. 试着将类/函数总结为不超过6句话 -4. 你的回答应该是一行文本 - -举例,如果上下文是: - -```python -from typing import Optional -from abc import ABC -from metagpt.llm import LLM # 大语言模型,类似GPT - -class Action(ABC): - def __init__(self, name='', context=None, llm: LLM = LLM()): - self.name = name - self.llm = llm - self.context = context - self.prefix = "" - self.desc = "" - - def set_prefix(self, prefix): - """设置前缀以供后续使用""" - self.prefix = prefix - - async def _aask(self, prompt: str, system_msgs: Optional[list[str]] = None): - """加上默认的prefix来使用prompt""" - if not system_msgs: - system_msgs = [] - system_msgs.append(self.prefix) - return await self.llm.aask(prompt, system_msgs) - - async def run(self, *args, **kwargs): - """运行动作""" - raise NotImplementedError("The run method should be implemented in a subclass.") - -PROMPT_TEMPLATE = """ -# 需求 -{requirements} - -# PRD -根据需求创建一个产品需求文档(PRD),填补以下空缺 - -产品/功能介绍: - -目标: - -用户和使用场景: - -需求: - -约束与限制: - -性能指标: - -""" - - -class WritePRD(Action): - def __init__(self, name="", context=None, llm=None): - super().__init__(name, context, llm) - - async def run(self, requirements, *args, **kwargs): - prompt = PROMPT_TEMPLATE.format(requirements=requirements) - prd = await self._aask(prompt) - return prd -``` - - -主类/函数是 `WritePRD`。 - -那么你应该写: - -这个类用来根据输入需求生成PRD。首先注意到有一个提示词模板,其中有产品、功能、目标、用户和使用场景、需求、约束与限制、性能指标,这个模板会以输入需求填充,然后调用接口询问大语言模型,让大语言模型返回具体的PRD。 - diff --git a/spaces/sudo-ai/zero123plus-demo-space/app.py b/spaces/sudo-ai/zero123plus-demo-space/app.py deleted file mode 100644 index b79f87c937d6582f7cf7bdda37240ec088e5929f..0000000000000000000000000000000000000000 --- a/spaces/sudo-ai/zero123plus-demo-space/app.py +++ /dev/null @@ -1,270 +0,0 @@ -import os -import sys -import numpy -import torch -import rembg -import threading -import urllib.request -from PIL import Image -import streamlit as st -import huggingface_hub - - -img_example_counter = 0 -iret_base = 'resources/examples' -iret = [ - dict(rimageinput=os.path.join(iret_base, x), dispi=os.path.join(iret_base, x)) - for x in sorted(os.listdir(iret_base)) -] - - -class SAMAPI: - predictor = None - - @staticmethod - @st.cache_resource - def get_instance(sam_checkpoint=None): - if SAMAPI.predictor is None: - if sam_checkpoint is None: - sam_checkpoint = "tmp/sam_vit_h_4b8939.pth" - if not os.path.exists(sam_checkpoint): - os.makedirs('tmp', exist_ok=True) - urllib.request.urlretrieve( - "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth", - sam_checkpoint - ) - device = "cuda:0" if torch.cuda.is_available() else "cpu" - model_type = "default" - - from segment_anything import sam_model_registry, SamPredictor - - sam = sam_model_registry[model_type](checkpoint=sam_checkpoint) - sam.to(device=device) - - predictor = SamPredictor(sam) - SAMAPI.predictor = predictor - return SAMAPI.predictor - - @staticmethod - def segment_api(rgb, mask=None, bbox=None, sam_checkpoint=None): - """ - - Parameters - ---------- - rgb : np.ndarray h,w,3 uint8 - mask: np.ndarray h,w bool - - Returns - ------- - - """ - np = numpy - predictor = SAMAPI.get_instance(sam_checkpoint) - 
predictor.set_image(rgb) - if mask is None and bbox is None: - box_input = None - else: - # mask to bbox - if bbox is None: - y1, y2, x1, x2 = np.nonzero(mask)[0].min(), np.nonzero(mask)[0].max(), np.nonzero(mask)[1].min(), \ - np.nonzero(mask)[1].max() - else: - x1, y1, x2, y2 = bbox - box_input = np.array([[x1, y1, x2, y2]]) - masks, scores, logits = predictor.predict( - box=box_input, - multimask_output=True, - return_logits=False, - ) - mask = masks[-1] - return mask - - -def image_examples(samples, ncols, return_key=None, example_text="Examples"): - global img_example_counter - trigger = False - with st.expander(example_text, True): - for i in range(len(samples) // ncols): - cols = st.columns(ncols) - for j in range(ncols): - idx = i * ncols + j - if idx >= len(samples): - continue - entry = samples[idx] - with cols[j]: - st.image(entry['dispi']) - img_example_counter += 1 - with st.columns(5)[2]: - this_trigger = st.button('\+', key='imgexuse%d' % img_example_counter) - trigger = trigger or this_trigger - if this_trigger: - trigger = entry[return_key] - return trigger - - -def segment_img(img: Image): - output = rembg.remove(img) - mask = numpy.array(output)[:, :, 3] > 0 - sam_mask = SAMAPI.segment_api(numpy.array(img)[:, :, :3], mask) - segmented_img = Image.new("RGBA", img.size, (0, 0, 0, 0)) - segmented_img.paste(img, mask=Image.fromarray(sam_mask)) - return segmented_img - - -def segment_6imgs(zero123pp_imgs): - imgs = [zero123pp_imgs.crop([0, 0, 320, 320]), - zero123pp_imgs.crop([320, 0, 640, 320]), - zero123pp_imgs.crop([0, 320, 320, 640]), - zero123pp_imgs.crop([320, 320, 640, 640]), - zero123pp_imgs.crop([0, 640, 320, 960]), - zero123pp_imgs.crop([320, 640, 640, 960])] - segmented_imgs = [] - for i, img in enumerate(imgs): - output = rembg.remove(img) - mask = numpy.array(output)[:, :, 3] - mask = SAMAPI.segment_api(numpy.array(img)[:, :, :3], mask) - data = numpy.array(img)[:,:,:3] - data[mask == 0] = [255, 255, 255] - segmented_imgs.append(data) - result = numpy.concatenate([ - numpy.concatenate([segmented_imgs[0], segmented_imgs[1]], axis=1), - numpy.concatenate([segmented_imgs[2], segmented_imgs[3]], axis=1), - numpy.concatenate([segmented_imgs[4], segmented_imgs[5]], axis=1) - ]) - return Image.fromarray(result) - - -def expand2square(pil_img, background_color): - width, height = pil_img.size - if width == height: - return pil_img - elif width > height: - result = Image.new(pil_img.mode, (width, width), background_color) - result.paste(pil_img, (0, (width - height) // 2)) - return result - else: - result = Image.new(pil_img.mode, (height, height), background_color) - result.paste(pil_img, ((height - width) // 2, 0)) - return result - - -@st.cache_data -def check_dependencies(): - reqs = [] - try: - import diffusers - except ImportError: - import traceback - traceback.print_exc() - print("Error: `diffusers` not found.", file=sys.stderr) - reqs.append("diffusers==0.20.2") - else: - if not diffusers.__version__.startswith("0.20"): - print( - f"Warning: You are using an unsupported version of diffusers ({diffusers.__version__}), which may lead to performance issues.", - file=sys.stderr - ) - print("Recommended version is `diffusers==0.20.2`.", file=sys.stderr) - try: - import transformers - except ImportError: - import traceback - traceback.print_exc() - print("Error: `transformers` not found.", file=sys.stderr) - reqs.append("transformers==4.29.2") - if torch.__version__ < '2.0': - try: - import xformers - except ImportError: - print("Warning: You are using PyTorch 1.x 
without a working `xformers` installation.", file=sys.stderr) - print("You may see a significant memory overhead when running the model.", file=sys.stderr) - if len(reqs): - print(f"Info: Fix all dependency errors with `pip install {' '.join(reqs)}`.") - - -@st.cache_resource -def load_zero123plus_pipeline(): - if 'HF_TOKEN' in os.environ: - huggingface_hub.login(os.environ['HF_TOKEN']) - from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler - pipeline = DiffusionPipeline.from_pretrained( - "sudo-ai/zero123plus-v1.1", custom_pipeline="sudo-ai/zero123plus-pipeline", - torch_dtype=torch.float16 - ) - # Feel free to tune the scheduler - pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config( - pipeline.scheduler.config, timestep_spacing='trailing' - ) - if torch.cuda.is_available(): - pipeline.to('cuda:0') - sys.main_lock = threading.Lock() - return pipeline - - -check_dependencies() -pipeline = load_zero123plus_pipeline() -SAMAPI.get_instance() -torch.set_grad_enabled(False) - -st.title("Zero123++ Demo") -# st.caption("For faster inference without waiting in queue, you may clone the space and run it yourself.") -prog = st.progress(0.0, "Idle") -pic = st.file_uploader("Upload an Image", key='imageinput', type=['png', 'jpg', 'webp']) -left, right = st.columns(2) -with left: - rem_input_bg = st.checkbox("Remove Input Background") -with right: - rem_output_bg = st.checkbox("Remove Output Background") -num_inference_steps = st.slider("Number of Inference Steps", 15, 100, 75) -st.caption("Diffusion Steps. For general real or synthetic objects, around 28 is enough. For objects with delicate details such as faces (either realistic or illustration), you may need 75 or more steps.") -cfg_scale = st.slider("Classifier Free Guidance Scale", 1.0, 10.0, 4.0) -seed = st.text_input("Seed", "42") -submit = False -if st.button("Submit"): - submit = True -results_container = st.container() -sample_got = image_examples(iret, 4, 'rimageinput') -if sample_got: - pic = sample_got -with results_container: - if sample_got or submit: - prog.progress(0.03, "Waiting in Queue...") - with sys.main_lock: - seed = int(seed) - torch.manual_seed(seed) - img = Image.open(pic) - if max(img.size) > 1280: - w, h = img.size - w = round(1280 / max(img.size) * w) - h = round(1280 / max(img.size) * h) - img = img.resize((w, h)) - left, right = st.columns(2) - with left: - st.image(img) - st.caption("Input Image") - prog.progress(0.1, "Preparing Inputs") - if rem_input_bg: - with right: - img = segment_img(img) - st.image(img) - st.caption("Input (Background Removed)") - img = expand2square(img, (127, 127, 127, 0)) - pipeline.set_progress_bar_config(disable=True) - result = pipeline( - img, - num_inference_steps=num_inference_steps, - guidance_scale=cfg_scale, - generator=torch.Generator(pipeline.device).manual_seed(seed), - callback=lambda i, t, latents: prog.progress(0.1 + 0.8 * i / num_inference_steps, "Diffusion Step %d" % i) - ).images[0] - prog.progress(0.9, "Post Processing") - left, right = st.columns(2) - with left: - st.image(result) - st.caption("Result") - if rem_output_bg: - result = segment_6imgs(result) - with right: - st.image(result) - st.caption("Result (Background Removed)") - prog.progress(1.0, "Idle") diff --git a/spaces/sukiru/rvc-Blue-archives/lib/infer_pack/attentions.py b/spaces/sukiru/rvc-Blue-archives/lib/infer_pack/attentions.py deleted file mode 100644 index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000 --- 
a/spaces/sukiru/rvc-Blue-archives/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from lib.infer_pack import commons -from lib.infer_pack import modules -from lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, 
x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." 
- block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/supertori/files/stable-diffusion-webui/extensions/openpose-editor/scripts/openpose/body.py b/spaces/supertori/files/stable-diffusion-webui/extensions/openpose-editor/scripts/openpose/body.py deleted file mode 100644 index 47d379d9c8872f01c176a6622fd4f434d00b5d55..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/extensions/openpose-editor/scripts/openpose/body.py +++ /dev/null @@ -1,222 +0,0 @@ -# This code from https://github.com/lllyasviel/ControlNet - -import cv2 -import numpy as np -import math -import time -from scipy.ndimage.filters import gaussian_filter -import matplotlib.pyplot as plt -import matplotlib -import torch -from torchvision import transforms -from modules import extensions - -from . 
import util -from .model import bodypose_model - -class Body(object): - def __init__(self, model_path): - self.model = bodypose_model() - if torch.cuda.is_available(): - self.model = self.model.cuda() - model_dict = util.transfer(self.model, torch.load(model_path)) - self.model.load_state_dict(model_dict) - self.model.eval() - - def __call__(self, oriImg): - # scale_search = [0.5, 1.0, 1.5, 2.0] - scale_search = [0.5] - boxsize = 368 - stride = 8 - padValue = 128 - thre1 = 0.1 - thre2 = 0.05 - multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search] - heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 19)) - paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38)) - - for m in range(len(multiplier)): - scale = multiplier[m] - imageToTest = cv2.resize(oriImg, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC) - imageToTest_padded, pad = util.padRightDownCorner(imageToTest, stride, padValue) - im = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5 - im = np.ascontiguousarray(im) - - data = torch.from_numpy(im).float() - if torch.cuda.is_available(): - data = data.cuda() - # data = data.permute([2, 0, 1]).unsqueeze(0).float() - with torch.no_grad(): - Mconv7_stage6_L1, Mconv7_stage6_L2 = self.model(data) - Mconv7_stage6_L1 = Mconv7_stage6_L1.cpu().numpy() - Mconv7_stage6_L2 = Mconv7_stage6_L2.cpu().numpy() - - # extract outputs, resize, and remove padding - # heatmap = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[1]].data), (1, 2, 0)) # output 1 is heatmaps - heatmap = np.transpose(np.squeeze(Mconv7_stage6_L2), (1, 2, 0)) # output 1 is heatmaps - heatmap = cv2.resize(heatmap, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC) - heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :] - heatmap = cv2.resize(heatmap, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC) - - # paf = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[0]].data), (1, 2, 0)) # output 0 is PAFs - paf = np.transpose(np.squeeze(Mconv7_stage6_L1), (1, 2, 0)) # output 0 is PAFs - paf = cv2.resize(paf, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC) - paf = paf[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :] - paf = cv2.resize(paf, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC) - - heatmap_avg += heatmap_avg + heatmap / len(multiplier) - paf_avg += + paf / len(multiplier) - - all_peaks = [] - peak_counter = 0 - - for part in range(18): - map_ori = heatmap_avg[:, :, part] - one_heatmap = gaussian_filter(map_ori, sigma=3) - - map_left = np.zeros(one_heatmap.shape) - map_left[1:, :] = one_heatmap[:-1, :] - map_right = np.zeros(one_heatmap.shape) - map_right[:-1, :] = one_heatmap[1:, :] - map_up = np.zeros(one_heatmap.shape) - map_up[:, 1:] = one_heatmap[:, :-1] - map_down = np.zeros(one_heatmap.shape) - map_down[:, :-1] = one_heatmap[:, 1:] - - peaks_binary = np.logical_and.reduce( - (one_heatmap >= map_left, one_heatmap >= map_right, one_heatmap >= map_up, one_heatmap >= map_down, one_heatmap > thre1)) - peaks = list(zip(np.nonzero(peaks_binary)[1], np.nonzero(peaks_binary)[0])) # note reverse - peaks_with_score = [x + (map_ori[x[1], x[0]],) for x in peaks] - peak_id = range(peak_counter, peak_counter + len(peaks)) - peaks_with_score_and_id = [peaks_with_score[i] + (peak_id[i],) for i in range(len(peak_id))] - - all_peaks.append(peaks_with_score_and_id) - peak_counter += len(peaks) - - # find connection in the 
specified sequence, center 29 is in the position 15 - limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \ - [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \ - [1, 16], [16, 18], [3, 17], [6, 18]] - # the middle joints heatmap correpondence - mapIdx = [[31, 32], [39, 40], [33, 34], [35, 36], [41, 42], [43, 44], [19, 20], [21, 22], \ - [23, 24], [25, 26], [27, 28], [29, 30], [47, 48], [49, 50], [53, 54], [51, 52], \ - [55, 56], [37, 38], [45, 46]] - - connection_all = [] - special_k = [] - mid_num = 10 - - for k in range(len(mapIdx)): - score_mid = paf_avg[:, :, [x - 19 for x in mapIdx[k]]] - candA = all_peaks[limbSeq[k][0] - 1] - candB = all_peaks[limbSeq[k][1] - 1] - nA = len(candA) - nB = len(candB) - indexA, indexB = limbSeq[k] - if (nA != 0 and nB != 0): - connection_candidate = [] - for i in range(nA): - for j in range(nB): - vec = np.subtract(candB[j][:2], candA[i][:2]) - norm = math.sqrt(vec[0] * vec[0] + vec[1] * vec[1]) - norm = max(0.001, norm) - vec = np.divide(vec, norm) - - startend = list(zip(np.linspace(candA[i][0], candB[j][0], num=mid_num), \ - np.linspace(candA[i][1], candB[j][1], num=mid_num))) - - vec_x = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 0] \ - for I in range(len(startend))]) - vec_y = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 1] \ - for I in range(len(startend))]) - - score_midpts = np.multiply(vec_x, vec[0]) + np.multiply(vec_y, vec[1]) - score_with_dist_prior = sum(score_midpts) / len(score_midpts) + min( - 0.5 * oriImg.shape[0] / norm - 1, 0) - criterion1 = len(np.nonzero(score_midpts > thre2)[0]) > 0.8 * len(score_midpts) - criterion2 = score_with_dist_prior > 0 - if criterion1 and criterion2: - connection_candidate.append( - [i, j, score_with_dist_prior, score_with_dist_prior + candA[i][2] + candB[j][2]]) - - connection_candidate = sorted(connection_candidate, key=lambda x: x[2], reverse=True) - connection = np.zeros((0, 5)) - for c in range(len(connection_candidate)): - i, j, s = connection_candidate[c][0:3] - if (i not in connection[:, 3] and j not in connection[:, 4]): - connection = np.vstack([connection, [candA[i][3], candB[j][3], s, i, j]]) - if (len(connection) >= min(nA, nB)): - break - - connection_all.append(connection) - else: - special_k.append(k) - connection_all.append([]) - - # last number in each row is the total parts number of that person - # the second last number in each row is the score of the overall configuration - subset = -1 * np.ones((0, 20)) - candidate = np.array([item for sublist in all_peaks for item in sublist]) - - for k in range(len(mapIdx)): - if k not in special_k: - partAs = connection_all[k][:, 0] - partBs = connection_all[k][:, 1] - indexA, indexB = np.array(limbSeq[k]) - 1 - - for i in range(len(connection_all[k])): # = 1:size(temp,1) - found = 0 - subset_idx = [-1, -1] - for j in range(len(subset)): # 1:size(subset,1): - if subset[j][indexA] == partAs[i] or subset[j][indexB] == partBs[i]: - subset_idx[found] = j - found += 1 - - if found == 1: - j = subset_idx[0] - if subset[j][indexB] != partBs[i]: - subset[j][indexB] = partBs[i] - subset[j][-1] += 1 - subset[j][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2] - elif found == 2: # if found 2 and disjoint, merge them - j1, j2 = subset_idx - membership = ((subset[j1] >= 0).astype(int) + (subset[j2] >= 0).astype(int))[:-2] - if len(np.nonzero(membership == 2)[0]) == 0: # merge - subset[j1][:-2] += (subset[j2][:-2] + 1) - subset[j1][-2:] 
+= subset[j2][-2:] - subset[j1][-2] += connection_all[k][i][2] - subset = np.delete(subset, j2, 0) - else: # as like found == 1 - subset[j1][indexB] = partBs[i] - subset[j1][-1] += 1 - subset[j1][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2] - - # if find no partA in the subset, create a new subset - elif not found and k < 17: - row = -1 * np.ones(20) - row[indexA] = partAs[i] - row[indexB] = partBs[i] - row[-1] = 2 - row[-2] = sum(candidate[connection_all[k][i, :2].astype(int), 2]) + connection_all[k][i][2] - subset = np.vstack([subset, row]) - # delete some rows of subset which has few parts occur - deleteIdx = [] - for i in range(len(subset)): - if subset[i][-1] < 4 or subset[i][-2] / subset[i][-1] < 0.4: - deleteIdx.append(i) - subset = np.delete(subset, deleteIdx, axis=0) - - # subset: n*20 array, 0-17 is the index in candidate, 18 is the total score, 19 is the total parts - # candidate: x, y, score, id - return candidate, subset - -if __name__ == "__main__": - modeldir = os.path.join(extensions.extensions_dir, "sd-webui-controlnet", "annotator", "openpose") - body_estimation = Body(os.path.join(modeldir, "body_pose_model.pth")) - - test_image = '../images/ski.jpg' - oriImg = cv2.imread(test_image) # B,G,R order - candidate, subset = body_estimation(oriImg) - canvas = util.draw_bodypose(oriImg, candidate, subset) - plt.imshow(canvas[:, :, [2, 1, 0]]) - plt.show() diff --git a/spaces/supertori/files/stable-diffusion-webui/javascript/contextMenus.js b/spaces/supertori/files/stable-diffusion-webui/javascript/contextMenus.js deleted file mode 100644 index 163743c9debc160d46bd3a5a43dec6879e86a43e..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/javascript/contextMenus.js +++ /dev/null @@ -1,177 +0,0 @@ - -contextMenuInit = function(){ - let eventListenerApplied=false; - let menuSpecs = new Map(); - - const uid = function(){ - return Date.now().toString(36) + Math.random().toString(36).substr(2); - } - - function showContextMenu(event,element,menuEntries){ - let posx = event.clientX + document.body.scrollLeft + document.documentElement.scrollLeft; - let posy = event.clientY + document.body.scrollTop + document.documentElement.scrollTop; - - let oldMenu = gradioApp().querySelector('#context-menu') - if(oldMenu){ - oldMenu.remove() - } - - let tabButton = uiCurrentTab - let baseStyle = window.getComputedStyle(tabButton) - - const contextMenu = document.createElement('nav') - contextMenu.id = "context-menu" - contextMenu.style.background = baseStyle.background - contextMenu.style.color = baseStyle.color - contextMenu.style.fontFamily = baseStyle.fontFamily - contextMenu.style.top = posy+'px' - contextMenu.style.left = posx+'px' - - - - const contextMenuList = document.createElement('ul') - contextMenuList.className = 'context-menu-items'; - contextMenu.append(contextMenuList); - - menuEntries.forEach(function(entry){ - let contextMenuEntry = document.createElement('a') - contextMenuEntry.innerHTML = entry['name'] - contextMenuEntry.addEventListener("click", function(e) { - entry['func'](); - }) - contextMenuList.append(contextMenuEntry); - - }) - - gradioApp().getRootNode().appendChild(contextMenu) - - let menuWidth = contextMenu.offsetWidth + 4; - let menuHeight = contextMenu.offsetHeight + 4; - - let windowWidth = window.innerWidth; - let windowHeight = window.innerHeight; - - if ( (windowWidth - posx) < menuWidth ) { - contextMenu.style.left = windowWidth - menuWidth + "px"; - } - - if ( (windowHeight - posy) < menuHeight ) 
{ - contextMenu.style.top = windowHeight - menuHeight + "px"; - } - - } - - function appendContextMenuOption(targetElementSelector,entryName,entryFunction){ - - currentItems = menuSpecs.get(targetElementSelector) - - if(!currentItems){ - currentItems = [] - menuSpecs.set(targetElementSelector,currentItems); - } - let newItem = {'id':targetElementSelector+'_'+uid(), - 'name':entryName, - 'func':entryFunction, - 'isNew':true} - - currentItems.push(newItem) - return newItem['id'] - } - - function removeContextMenuOption(uid){ - menuSpecs.forEach(function(v,k) { - let index = -1 - v.forEach(function(e,ei){if(e['id']==uid){index=ei}}) - if(index>=0){ - v.splice(index, 1); - } - }) - } - - function addContextMenuEventListener(){ - if(eventListenerApplied){ - return; - } - gradioApp().addEventListener("click", function(e) { - let source = e.composedPath()[0] - if(source.id && source.id.indexOf('check_progress')>-1){ - return - } - - let oldMenu = gradioApp().querySelector('#context-menu') - if(oldMenu){ - oldMenu.remove() - } - }); - gradioApp().addEventListener("contextmenu", function(e) { - let oldMenu = gradioApp().querySelector('#context-menu') - if(oldMenu){ - oldMenu.remove() - } - menuSpecs.forEach(function(v,k) { - if(e.composedPath()[0].matches(k)){ - showContextMenu(e,e.composedPath()[0],v) - e.preventDefault() - return - } - }) - }); - eventListenerApplied=true - - } - - return [appendContextMenuOption, removeContextMenuOption, addContextMenuEventListener] -} - -initResponse = contextMenuInit(); -appendContextMenuOption = initResponse[0]; -removeContextMenuOption = initResponse[1]; -addContextMenuEventListener = initResponse[2]; - -(function(){ - //Start example Context Menu Items - let generateOnRepeat = function(genbuttonid,interruptbuttonid){ - let genbutton = gradioApp().querySelector(genbuttonid); - let interruptbutton = gradioApp().querySelector(interruptbuttonid); - if(!interruptbutton.offsetParent){ - genbutton.click(); - } - clearInterval(window.generateOnRepeatInterval) - window.generateOnRepeatInterval = setInterval(function(){ - if(!interruptbutton.offsetParent){ - genbutton.click(); - } - }, - 500) - } - - appendContextMenuOption('#txt2img_generate','Generate forever',function(){ - generateOnRepeat('#txt2img_generate','#txt2img_interrupt'); - }) - appendContextMenuOption('#img2img_generate','Generate forever',function(){ - generateOnRepeat('#img2img_generate','#img2img_interrupt'); - }) - - let cancelGenerateForever = function(){ - clearInterval(window.generateOnRepeatInterval) - } - - appendContextMenuOption('#txt2img_interrupt','Cancel generate forever',cancelGenerateForever) - appendContextMenuOption('#txt2img_generate', 'Cancel generate forever',cancelGenerateForever) - appendContextMenuOption('#img2img_interrupt','Cancel generate forever',cancelGenerateForever) - appendContextMenuOption('#img2img_generate', 'Cancel generate forever',cancelGenerateForever) - - appendContextMenuOption('#roll','Roll three', - function(){ - let rollbutton = get_uiCurrentTabContent().querySelector('#roll'); - setTimeout(function(){rollbutton.click()},100) - setTimeout(function(){rollbutton.click()},200) - setTimeout(function(){rollbutton.click()},300) - } - ) -})(); -//End example Context Menu Items - -onUiUpdate(function(){ - addContextMenuEventListener() -}); diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Harold Pinter The Homecoming Full Text Pdf !NEW!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Harold Pinter The Homecoming Full 
Text Pdf !NEW!.md deleted file mode 100644 index 748fd94403e11f79616f244483d89e98098c8021..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Harold Pinter The Homecoming Full Text Pdf !NEW!.md +++ /dev/null @@ -1,151 +0,0 @@ -
            -

            Harold pinter the homecoming full text pdf

            - -

            If you are a fan of Harold Pinter, one of the most influential and acclaimed playwrights of the 20th century, you might be interested in reading his masterpiece, The Homecoming, in its full text pdf format. The Homecoming is a dark and disturbing play that explores the themes of power, sexuality, identity and family dynamics. It is widely regarded as one of Pinter's best and most controversial works.

            - -

            What is The Homecoming about?

            - -

            The Homecoming is a play that revolves around the events that take place when Teddy, a professor of philosophy who lives in America, returns to his childhood home in London with his wife Ruth, whom he has never introduced to his family. He finds his father Max, a retired butcher, his uncle Sam, a chauffeur, and his two brothers Lenny, a pimp, and Joey, a boxer, living in a rundown house full of tension and hostility. The arrival of Teddy and Ruth triggers a series of conflicts and confrontations that reveal the hidden secrets and desires of each character. The play ends with a shocking twist that challenges the conventional notions of morality and family.

            -

            Harold pinter the homecoming full text pdf


Download File: https://cinurl.com/2uEX4D



            - -

            How to get Harold pinter the homecoming full text pdf?

            - -

            There are several ways to get Harold pinter the homecoming full text pdf online. One way is to visit Archive.org, a website that provides free access to millions of books, movies, music and other digital content. You can find the full text of The Homecoming by Harold Pinter on this link: https://archive.org/stream/lp_the-homecoming_harold-pinter/lp_the-homecoming_harold-pinter_djvu.txt. You can read it online or download it as a pdf file.
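If you prefer to script the download instead of saving the page by hand, a minimal Python sketch is shown below. It is only an illustration: it assumes the `requests` library is installed and that the Archive.org link above is still live, and the output filename is an arbitrary example. Archive.org may also serve the text wrapped in its online viewer page, in which case the site's download options give you the raw file.

```python
import requests

# Archive.org link quoted above; treat it as an example that may change.
TEXT_URL = (
    "https://archive.org/stream/lp_the-homecoming_harold-pinter/"
    "lp_the-homecoming_harold-pinter_djvu.txt"
)

response = requests.get(TEXT_URL, timeout=30)
response.raise_for_status()  # fail loudly if the page is unavailable

# Save whatever the link returns to a local file for offline reading.
with open("the_homecoming.txt", "w", encoding="utf-8") as handle:
    handle.write(response.text)

print(f"Saved {len(response.text):,} characters to the_homecoming.txt")
```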

            - -

            Another way is to visit Documents and E-books, a website that allows you to download and view various documents and e-books for free. You can find Harold Pinter - The Homecoming (play).pdf on this link: https://idoc.pub/documents/harold-pinter-the-homecoming-playpdf-34wmq03r7wl7. You can download it as a pdf file or view it as an image.

            - -

            A third way is to visit Internet Archive Books, a website that offers free access to scanned books from libraries and publishers around the world. You can find The homecoming by Harold Pinter on this link: https://archive.org/details/homecoming00pint_0. You can read it online or download it as a pdf file.

            - -

            Why should you read Harold pinter the homecoming full text pdf?

            - -

            There are many reasons why you should read Harold pinter the homecoming full text pdf. Some of them are:

            - -
              -
            • You will enjoy reading one of the most brilliant and provocative plays ever written by one of the greatest playwrights of all time.
            • -
            • You will experience the power and beauty of Pinter's language, which combines realism and absurdism, comedy and tragedy, dialogue and silence.
            • -
            • You will explore the complex and fascinating characters that Pinter creates, who are both realistic and symbolic, familiar and mysterious.
            • -
            • You will discover the themes and messages that Pinter conveys through his play, such as the nature of human relationships, the role of gender and sexuality, the meaning of identity and belonging.
            • -
            • You will challenge your own assumptions and perspectives on morality and family values.
            • -
            - -

            Conclusion

            - -

            Harold pinter the homecoming full text pdf is a great way to read one of the most important and influential plays of modern drama. The Homecoming by Harold Pinter is a masterpiece that will captivate you with its plot, characters, language and themes. You can get Harold pinter the homecoming full text pdf online from various websites for free. If you are looking for a stimulating and rewarding reading experience, you should definitely read Harold pinter the homecoming full text pdf.

            -

            How to read Harold pinter the homecoming full text pdf?

            - -

            Reading Harold pinter the homecoming full text pdf is not a difficult task, but it requires some attention and concentration. The play is written in a minimalist and ambiguous style, with sparse dialogue and frequent pauses and silences. The characters often speak in a cryptic and indirect way, leaving much to the interpretation of the reader or the audience. The play also has a nonlinear and circular structure, with flashbacks and repetitions that create a sense of confusion and uncertainty.

            -

            - -

            To read Harold pinter the homecoming full text pdf effectively, you should pay attention to the following aspects:

            - -
              -
            • The context and background of the play, such as the time period, the setting, the social and political situation, and the biography of the author.
            • -
            • The plot and structure of the play, such as the sequence of events, the turning points, the climax and the resolution.
            • -
            • The characters and their relationships, such as their names, their roles, their motivations, their conflicts and their interactions.
            • -
            • The themes and messages of the play, such as the meaning of home, family, power, sexuality, identity and violence.
            • -
            • The language and style of the play, such as the use of dialogue, pauses, silences, symbols, metaphors and irony.
            • -
            • The tone and mood of the play, such as the atmosphere, the emotions, the humor and the tension.
            • -
            - -

            What are some of the challenges and rewards of reading Harold pinter the homecoming full text pdf?

            - -

            Reading Harold pinter the homecoming full text pdf can be both challenging and rewarding for different reasons. Some of the challenges are:

            - -
              -
            • The play can be confusing and frustrating for some readers who expect a clear and straightforward story with a logical and satisfying ending.
            • -
            • The play can be disturbing and shocking for some readers who are uncomfortable with the graphic and violent scenes and language that depict sexual abuse, incest and betrayal.
            • -
            • The play can be difficult and demanding for some readers who have to deal with multiple layers of meaning and interpretation that require close reading and analysis.
            • -
            - -

            Some of the rewards are:

            - -
              -
• The play can be fascinating and intriguing for some readers who enjoy exploring the hidden depths and complexities of human nature and behavior.
• -
• The play can be stimulating and challenging for some readers who appreciate the artistic and intellectual quality and originality of Pinter's work.
• -
• The play can be enlightening and inspiring for some readers who discover new perspectives and insights on themselves and others through Pinter's vision.
            • -
            - -


            -

            How to appreciate Harold pinter the homecoming full text pdf?

            - -

            Appreciating Harold pinter the homecoming full text pdf is not a simple task, but it can be a rewarding one. The play is not meant to be taken literally or at face value, but rather to be interpreted and explored in different ways. The play invites the reader or the audience to participate in the creation of meaning and to question their own assumptions and values.

            - -

            To appreciate Harold pinter the homecoming full text pdf effectively, you should consider the following aspects:

            - -
              -
            • The context and influence of the play, such as the historical and cultural background, the literary and theatrical traditions, and the personal and professional experiences of the author.
            • -
            • The reception and impact of the play, such as the critical and popular response, the awards and recognition, and the adaptations and productions.
            • -
            • The analysis and interpretation of the play, such as the different perspectives and approaches, the themes and symbols, and the subtext and implications.
            • -
            • The evaluation and appreciation of the play, such as the aesthetic and artistic value, the social and political relevance, and the personal and emotional resonance.
            • -
            - -

            What are some of the resources and references for reading Harold pinter the homecoming full text pdf?

            - -

            There are many resources and references for reading Harold pinter the homecoming full text pdf online. Some of them are:

            - -
              -
            • The official website of Harold Pinter, which provides information about his life, works, awards, archives and foundation. You can visit it at http://www.haroldpinter.org/home/index.shtml.
            • -
            • The Harold Pinter Society, which is an organization dedicated to promoting the study and appreciation of Pinter's works. You can visit it at http://www.haroldpintersociety.org/.
            • -
            • The Harold Pinter Review, which is a peer-reviewed journal that publishes articles, reviews, interviews and other materials related to Pinter's works. You can visit it at https://www.psupress.org/journals/jnls_hpr.html.
            • -
            • The Harold Pinter Archive Blog, which is a blog that showcases some of the items from Pinter's personal archive at the British Library. You can visit it at https://blogs.bl.uk/english-and-drama/harold-pinter-archive-blog/.
            • -
            • The Harold Pinter Theatre, which is a theatre in London that was renamed after Pinter in 2011. You can visit it at https://www.atgtickets.com/venues/harold-pinter-theatre/.
            • -
            - -

            Conclusion

            - -

Harold pinter the homecoming full text pdf is a great way to read one of the most important and influential plays of modern drama. The Homecoming by Harold Pinter is a masterpiece that will captivate you with its plot, characters, language and themes. You can get Harold pinter the homecoming full text pdf online from various websites for free. If you are looking for a stimulating and rewarding reading experience, you should definitely read Harold pinter the homecoming full text pdf.

            -


            3cee63e6c2
            -
            -
            \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Syncro TruEmu Team H2O - Cubase Dongle FIX Full UPD Version.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Syncro TruEmu Team H2O - Cubase Dongle FIX Full UPD Version.md deleted file mode 100644 index f8190a751a1f77294f2d71b0accf02b82e22ab55..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Syncro TruEmu Team H2O - Cubase Dongle FIX Full UPD Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Syncro TruEmu Team H2O - cubase dongle FIX full version


            DOWNLOAD »»» https://cinurl.com/2uEYc6



            -
            -Nuendo 4 Full Version Download ... It relates to the H2o Driver which works as a dongle for Cubase or Nuendo or a number of. ... If your laptop has started to show the side effects of a malfunctioning driver (Syncro TruEmu Team H2O is ... this), it's essential to take quick measures to fix the specific situation. 4d29de3e1b
            -
            -
            -

            diff --git a/spaces/szukevin/VISOR-GPT/train/finetune/run_text2text.py b/spaces/szukevin/VISOR-GPT/train/finetune/run_text2text.py deleted file mode 100644 index f18ad28aa892f3fc5ba59af9990b066165d1728e..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/train/finetune/run_text2text.py +++ /dev/null @@ -1,314 +0,0 @@ -""" -This script provides an example to wrap TencentPretrain for text-to-text fine-tuning. -""" -import sys -import os -import random -import argparse -import torch - -tencentpretrain_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..")) -sys.path.append(tencentpretrain_dir) - -from tencentpretrain.model_saver import save_model -from tencentpretrain.decoders import * -from tencentpretrain.targets import * -from finetune.run_classifier import * - - -class Text2text(torch.nn.Module): - def __init__(self, args): - super(Text2text, self).__init__() - self.embedding = Embedding(args) - for embedding_name in args.embedding: - tmp_emb = str2embedding[embedding_name](args, len(args.tokenizer.vocab)) - self.embedding.update(tmp_emb, embedding_name) - self.encoder = str2encoder[args.encoder](args) - self.tgt_embedding = Embedding(args) - for embedding_name in args.tgt_embedding: - tmp_emb = str2embedding[embedding_name](args, len(args.tokenizer.vocab)) - self.tgt_embedding.update(tmp_emb, embedding_name) - self.decoder = str2decoder[args.decoder](args) - self.target = Target() - self.target.update(LmTarget(args, len(args.tokenizer.vocab)), "lm") - if args.tie_weights: - self.target.lm.output_layer.weight = self.embedding.word.embedding.weight - if args.share_embedding: - self.tgt_embedding.word.embedding.weight = self.embedding.word.embedding.weight - - def encode(self, src, seg): - emb = self.embedding(src, seg) - memory_bank = self.encoder(emb, seg) - return memory_bank - - def decode(self, src, memory_bank, tgt, tgt_seg): - tgt_in, tgt_out, _ = tgt - decoder_emb = self.tgt_embedding(tgt_in, tgt_seg) - hidden = self.decoder(memory_bank, decoder_emb, (src,)) - output = self.target.lm.output_layer(hidden) - return output - - def forward(self, src, tgt, seg, tgt_seg, memory_bank=None, only_use_encoder=False): - if only_use_encoder: - return self.encode(src, seg) - if memory_bank is not None: - return self.decode(src, memory_bank, tgt, tgt_seg) - tgt_in, tgt_out, _ = tgt - memory_bank = self.encode(src, seg) - if tgt_out is None: - output = self.decode(src, memory_bank, tgt) - return None, output - else: - decoder_emb = self.tgt_embedding(tgt_in, tgt_seg) - hidden = self.decoder(memory_bank, decoder_emb, (seg,)) - loss = self.target(hidden, tgt_out, None)[0] - return loss, None - - -def read_dataset(args, path): - dataset, columns = [], {} - with open(path, mode="r", encoding="utf-8") as f: - for line_id, line in enumerate(f): - if line_id == 0: - for i, column_name in enumerate(line.rstrip("\r\n").split("\t")): - columns[column_name] = i - continue - line = line.rstrip("\r\n").split("\t") - - if "text_b" in columns: - text = line[columns["text_a"]] + SEP_TOKEN + line[columns["text_b"]] - label = line[columns["label"]] - else: - text, label = line[columns["text_a"]], line[columns["label"]] - - src = args.tokenizer.convert_tokens_to_ids([CLS_TOKEN] + args.tokenizer.tokenize(text) + [SEP_TOKEN]) - tgt_in = args.tokenizer.convert_tokens_to_ids([CLS_TOKEN] + args.tokenizer.tokenize(label) + [SEP_TOKEN]) - PAD_ID = args.tokenizer.convert_tokens_to_ids([PAD_TOKEN])[0] - seg = [1] * len(src) - tgt_seg = [1] * len(tgt_in) - - if len(src) > 
args.seq_length: - src = src[: args.seq_length] - seg = seg[: args.seq_length] - if len(tgt_in) > args.tgt_seq_length: - tgt_in = tgt_in[: args.tgt_seq_length] - tgt_seg = tgt_seg[: args.tgt_seq_length] - tgt_out = tgt_in[1:] + [PAD_ID] - - while len(src) < args.seq_length: - src.append(PAD_ID) - seg.append(0) - while len(tgt_in) < args.tgt_seq_length: - tgt_in.append(PAD_ID) - tgt_out.append(PAD_ID) - tgt_seg.append(PAD_ID) - - dataset.append((src, tgt_in, tgt_out, seg, tgt_seg)) - - return dataset - - -def batch_loader(batch_size, src, tgt_in, tgt_out, seg, tgt_seg): - instances_num = src.size()[0] - for i in range(instances_num // batch_size): - src_batch = src[i * batch_size : (i + 1) * batch_size, :] - tgt_in_batch = tgt_in[i * batch_size : (i + 1) * batch_size, :] - tgt_out_batch = tgt_out[i * batch_size : (i + 1) * batch_size, :] - seg_batch = seg[i * batch_size : (i + 1) * batch_size, :] - tgt_seg_batch = tgt_seg[i * batch_size : (i + 1) * batch_size, :] - yield src_batch, tgt_in_batch, tgt_out_batch, seg_batch, tgt_seg_batch - - if instances_num > instances_num // batch_size * batch_size: - src_batch = src[instances_num // batch_size * batch_size :, :] - tgt_in_batch = tgt_in[instances_num // batch_size * batch_size :, :] - tgt_out_batch = tgt_out[instances_num // batch_size * batch_size :, :] - seg_batch = seg[instances_num // batch_size * batch_size :, :] - tgt_seg_batch = tgt_seg[instances_num // batch_size * batch_size :, :] - yield src_batch, tgt_in_batch, tgt_out_batch, seg_batch, tgt_seg_batch - - -def train_model(args, model, optimizer, scheduler, src_batch, tgt_in_batch, tgt_out_batch, seg_batch, tgt_seg_batch): - model.zero_grad() - - src_batch = src_batch.to(args.device) - tgt_in_batch = tgt_in_batch.to(args.device) - tgt_out_batch = tgt_out_batch.to(args.device) - seg_batch = seg_batch.to(args.device) - tgt_seg_batch = tgt_seg_batch.to(args.device) - - loss, _ = model(src_batch, (tgt_in_batch, tgt_out_batch, src_batch), seg_batch, tgt_seg_batch) - - if torch.cuda.device_count() > 1: - loss = torch.mean(loss) - - if args.fp16: - with args.amp.scale_loss(loss, optimizer) as scaled_loss: - scaled_loss.backward() - else: - loss.backward() - - optimizer.step() - scheduler.step() - - return loss - - -def evaluate(args, dataset): - - src = torch.LongTensor([example[0] for example in dataset]) - tgt_in = torch.LongTensor([example[1] for example in dataset]) - tgt_out = torch.LongTensor([example[2] for example in dataset]) - seg = torch.LongTensor([example[3] for example in dataset]) - tgt_seg = torch.LongTensor([example[4] for example in dataset]) - - generated_sentences = [] - args.model.eval() - - for i, (src_batch, tgt_in_batch, tgt_out_batch, seg_batch, tgt_seg_batch) in enumerate(batch_loader(args.batch_size, src, tgt_in, tgt_out, seg, tgt_seg)): - - src_batch = src_batch.to(args.device) - tgt_in_batch = torch.zeros(tgt_in_batch.size()[0], 1, dtype=torch.long, device=args.device) - tgt_seg_batch = torch.ones(tgt_in_batch.size()[0], 1, dtype=torch.long, device=args.device) - for j in range(tgt_in_batch.size()[0]): - tgt_in_batch[j][-1] = args.tokenizer.vocab.get(CLS_TOKEN) - - seg_batch = seg_batch.to(args.device) - - with torch.no_grad(): - memory_bank = args.model(src_batch, None, seg_batch, tgt_seg_batch, only_use_encoder=True) - - for _ in range(args.tgt_seq_length): - tgt_out_batch = tgt_in_batch - with torch.no_grad(): - outputs = args.model(src_batch, (tgt_in_batch, tgt_out_batch, src_batch), None, tgt_seg_batch, memory_bank=memory_bank) - - next_token_logits = 
outputs[:, -1] - next_tokens = torch.argmax(next_token_logits, dim=1).unsqueeze(1) - tgt_in_batch = torch.cat([tgt_in_batch, next_tokens], dim=1) - tgt_seg_batch = torch.ones(tgt_in_batch.size()[0], tgt_in_batch.size()[1], dtype=torch.long, device=args.device) - for j in range(len(outputs)): - sentence = " ".join([args.tokenizer.inv_vocab[token_id.item()] for token_id in tgt_in_batch[j][1:]]) - generated_sentences.append(sentence) - - labels = {} - labels_num = 0 - for example in dataset: - label = "".join([args.tokenizer.inv_vocab[token_id] for token_id in example[2][:-2]]).split(SEP_TOKEN)[0] - if not labels.get(label, None): - labels[label] = labels_num - labels_num += 1 - confusion_matrix = torch.zeros(labels_num, labels_num, dtype=torch.long) - correct = 0 - - for i, example in enumerate(dataset): - tgt = example[2] - tgt_token = " ".join([args.tokenizer.inv_vocab[token_id] for token_id in tgt[:-2]]) - generated_sentences[i] = generated_sentences[i].split(SEP_TOKEN)[0] - - pred = "".join(generated_sentences[i].split(" ")) - gold = "".join(tgt_token.split(SEP_TOKEN)[0].split(" ")) - - if pred in labels.keys(): - confusion_matrix[labels[pred], labels[gold]] += 1 - - if pred == gold: - correct += 1 - - args.logger.info("Acc. (Correct/Total): {:.4f} ({}/{}) ".format(correct / len(dataset), correct, len(dataset))) - return correct / len(dataset) - - -def main(): - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - - finetune_opts(parser) - - tokenizer_opts(parser) - - parser.add_argument("--tgt_seq_length", type=int, default=32, - help="Output sequence length.") - - args = parser.parse_args() - - # Load the hyperparameters from the config file. - args = load_hyperparam(args) - - set_seed(args.seed) - - # Build tokenizer. - args.tokenizer = str2tokenizer[args.tokenizer](args) - - # Build classification model. - model = Text2text(args) - - # Load or initialize parameters. - load_or_initialize_parameters(args, model) - - # Get logger. - args.logger = init_logger(args) - - args.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - model = model.to(args.device) - - # Training phase. - trainset = read_dataset(args, args.train_path) - instances_num = len(trainset) - batch_size = args.batch_size - - args.train_steps = int(instances_num * args.epochs_num / batch_size) + 1 - - args.logger.info("Batch size: {}".format(batch_size)) - args.logger.info("The number of training instances: {}".format(instances_num)) - - optimizer, scheduler = build_optimizer(args, model) - - if args.fp16: - try: - from apex import amp - except ImportError: - raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.") - model, optimizer = amp.initialize(model, optimizer, opt_level=args.fp16_opt_level) - args.amp = amp - - if torch.cuda.device_count() > 1: - args.logger.info("{} GPUs are available. 
Let's use them.".format(torch.cuda.device_count())) - model = torch.nn.DataParallel(model) - args.model = model - - total_loss, result, best_result = 0.0, 0.0, 0.0 - - args.logger.info("Start training.") - - for epoch in range(1, args.epochs_num + 1): - random.shuffle(trainset) - src = torch.LongTensor([example[0] for example in trainset]) - tgt_in = torch.LongTensor([example[1] for example in trainset]) - tgt_out = torch.LongTensor([example[2] for example in trainset]) - seg = torch.LongTensor([example[3] for example in trainset]) - tgt_seg = torch.LongTensor([example[4] for example in trainset]) - - model.train() - for i, (src_batch, tgt_in_batch, tgt_out_batch, seg_batch, tgt_seg_batch) in enumerate(batch_loader(batch_size, src, tgt_in, tgt_out, seg, tgt_seg)): - loss = train_model(args, model, optimizer, scheduler, src_batch, tgt_in_batch, tgt_out_batch, seg_batch, tgt_seg_batch) - total_loss += loss.item() - if (i + 1) % args.report_steps == 0: - args.logger.info("Epoch id: {}, Training steps: {}, Avg loss: {:.3f}".format(epoch, i + 1, total_loss / args.report_steps)) - total_loss = 0.0 - - result = evaluate(args, read_dataset(args, args.dev_path)) - if result > best_result: - best_result = result - save_model(model, args.output_model_path) - - # Evaluation phase. - if args.test_path is not None: - args.logger.info("Test set evaluation.") - if torch.cuda.device_count() > 1: - args.model.module.load_state_dict(torch.load(args.output_model_path)) - else: - args.model.load_state_dict(torch.load(args.output_model_path)) - evaluate(args, read_dataset(args, args.test_path)) - - -if __name__ == "__main__": - main() diff --git a/spaces/team-ai-law-assistant/CUAD/README.md b/spaces/team-ai-law-assistant/CUAD/README.md deleted file mode 100644 index 27569f15576796eb5ebb91dc627730b7fdff646e..0000000000000000000000000000000000000000 --- a/spaces/team-ai-law-assistant/CUAD/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: CUAD Base -emoji: 🌖 -colorFrom: green -colorTo: green -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/tejatrivikram/MyGenAIAvatar/README.md b/spaces/tejatrivikram/MyGenAIAvatar/README.md deleted file mode 100644 index 2f644e6e515f82407aed84a17cda7f853d238019..0000000000000000000000000000000000000000 --- a/spaces/tejatrivikram/MyGenAIAvatar/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MyGenAIAvatar -emoji: 🌖 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/temion/KoGPT_API/app.py b/spaces/temion/KoGPT_API/app.py deleted file mode 100644 index 3059540b4bd4cef3eda46f455f850696db1a6c71..0000000000000000000000000000000000000000 --- a/spaces/temion/KoGPT_API/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import requests -import json -import gradio as gr -import os - -REST_API_KEY = os.environ.get("REST_API_KEY") - -def kogpt_api(prompt, max_tokens = 32, temperature = 1.0, top_p = 1.0, n = 1): - r = requests.post( - url='https://api.kakaobrain.com/v1/inference/kogpt/generation', - headers = { - 'Authorization': 'KakaoAK ' + REST_API_KEY, - 'Content-Type': 'application/json' - }, - json = { - 'prompt': prompt, - 'max_tokens': max_tokens, - 'temperature': temperature, - 'top_p': top_p, - 'n': n - } - ) - - response = json.loads(r.content) - return response - -def greet(prompt, max_tokens, temperature, top_p, n): - response = kogpt_api( - prompt = prompt, - max_tokens = int(max_tokens), - temperature = float(temperature), - top_p = float(top_p), - n = int(n) - ) - return response['generations'][0]['text'] - -iface = gr.Interface( - fn=greet, - inputs=[ - gr.Textbox(lines=2, placeholder="Input Prompt Here...", label="textbox"), - gr.Number(value=32), - gr.Slider(0.1, 1.0, step=0.1, value=0.5), - gr.Slider(0.1, 1.0, step=0.1, value=0.5), - gr.Slider(1, 16, step=0.1, value=1)], - outputs=["text"] - ) - -iface.launch() \ No newline at end of file diff --git a/spaces/templates/fastapi-uvicorn/modules/inference.py b/spaces/templates/fastapi-uvicorn/modules/inference.py deleted file mode 100644 index fbf5cce09c4dd0844bb300e7afb161a15f7b0149..0000000000000000000000000000000000000000 --- a/spaces/templates/fastapi-uvicorn/modules/inference.py +++ /dev/null @@ -1,11 +0,0 @@ -from transformers import T5Tokenizer, T5ForConditionalGeneration - -tokenizer = T5Tokenizer.from_pretrained("t5-small") -model = T5ForConditionalGeneration.from_pretrained("t5-small") - - -def infer_t5(input): - input_ids = tokenizer(input, return_tensors="pt").input_ids - outputs = model.generate(input_ids) - - return tokenizer.decode(outputs[0], skip_special_tokens=True) diff --git a/spaces/terfces0erbo/CollegeProjectV2/Celemony Melodyne 2.1.0.45 STANDALONE VST.VST3 X86 X64 REPACK - Utorrent.md b/spaces/terfces0erbo/CollegeProjectV2/Celemony Melodyne 2.1.0.45 STANDALONE VST.VST3 X86 X64 REPACK - Utorrent.md deleted file mode 100644 index 2963cbea4a64489e05a847c20834308d8e249704..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Celemony Melodyne 2.1.0.45 STANDALONE VST.VST3 X86 X64 REPACK - Utorrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Celemony Melodyne 2.1.0.45 STANDALONE VST.VST3 x86 x64 REPACK - utorrent


            Download Zip –––––>>> https://bytlly.com/2uGlYS



            - -Celemony Melodyne 2.1.0.45 STANDALONE VST.VST3. X86 X64 REPACK - Utorrent -- http://bltlly.com/127dvj ef38ba1d05 .... Celemony – Melodyne Studio 4 ... 4d29de3e1b
            -
            -
            -

            diff --git a/spaces/terfces0erbo/CollegeProjectV2/Kamasutra.3D.mp4.md b/spaces/terfces0erbo/CollegeProjectV2/Kamasutra.3D.mp4.md deleted file mode 100644 index 57970b61ef491a343bc513dc745ec8facff2fc19..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Kamasutra.3D.mp4.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Kamasutra.3D.mp4


            Download https://bytlly.com/2uGiCG



            -
            - 4d29de3e1b
            -
            -
            -

            diff --git a/spaces/thak123/Whisper-Konkani/app.py b/spaces/thak123/Whisper-Konkani/app.py deleted file mode 100644 index fc3d438ad7aa32a8b66c623a66d9727578f74856..0000000000000000000000000000000000000000 --- a/spaces/thak123/Whisper-Konkani/app.py +++ /dev/null @@ -1,36 +0,0 @@ -from transformers import WhisperTokenizer -import os -tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small") #, language="marathi", task="transcribe" - -from transformers import pipeline -import gradio as gr -import torch - -pipe = pipeline(model="thak123/gom-stt-v3", #"thak123/whisper-small-LDC-V1", #"thak123/whisper-small-gom", - task="automatic-speech-recognition", tokenizer= tokenizer) # change to "your-username/the-name-you-picked" - -# pipe.model.config.forced_decoder_ids = ( -# pipe.tokenizer.get_decoder_prompt_ids( -# language="marathi", task="transcribe" -# ) -# ) - -def transcribe(audio): - text = pipe(audio)["text"] - return text - -iface = gr.Interface( - fn=transcribe, - inputs=gr.Audio(source="microphone", type="filepath"), - outputs="text", - examples=[ - [os.path.join(os.path.dirname("."),"audio/chalyaami.mp3")], - [os.path.join(os.path.dirname("."),"audio/ekdonteen.flac")], - [os.path.join(os.path.dirname("."),"audio/heyatachadjaale.mp3")], - ], - title="Whisper Konkani", - description="Realtime demo for Konkani speech recognition using a fine-tuned Whisper small model.", -) - - -iface.launch() \ No newline at end of file diff --git a/spaces/thejagstudio/picxai/templates/generate.html b/spaces/thejagstudio/picxai/templates/generate.html deleted file mode 100644 index e700269feeac4c6c78563e63a2c6af71f58ac3ba..0000000000000000000000000000000000000000 --- a/spaces/thejagstudio/picxai/templates/generate.html +++ /dev/null @@ -1,349 +0,0 @@ - - - - - - - PicxAI - - - - - - - - - - - - - - - - - - -
            - -
            -
            -
            - -
            -
            - - - - -
            -
            -
            - -
            -
            - - - - - - - - - -
            -
            -
            - -
            -
            - - - -
            -
            -
            - -
            -
            - - - -
            -
            -
            -
            -
            -
            -
            -
            -
            -
            -
            -
            -

            Describe your image

            -

            Making variations ⤵

            -
            -
            - -
            -
            -

            Negative prompt

            - -
            -
            - -
            -
            - -
            -
            -
            -
            -
            - -

            Dimensions

            -
            - - - - -
            - - - - -
            640 × 640
            - - - - -
            -
            -
            - - -
            -
            -
            -
            -
            -
            -
            -
            -
            -
            -
            - -
            - - - \ No newline at end of file diff --git a/spaces/thenethi1603/mygenAIChatbot/README.md b/spaces/thenethi1603/mygenAIChatbot/README.md deleted file mode 100644 index 611b02fdc11734c4ff98473045b0e17de4273944..0000000000000000000000000000000000000000 --- a/spaces/thenethi1603/mygenAIChatbot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MygenAIChatbot -emoji: 📉 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Angeldust Download For Pc [License] Choose Your Character and Enter PvP Battles.md b/spaces/tialenAdioni/chat-gpt-api/logs/Angeldust Download For Pc [License] Choose Your Character and Enter PvP Battles.md deleted file mode 100644 index ce16ab1858ef0ac75495cb07888e98ba94d8edf7..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Angeldust Download For Pc [License] Choose Your Character and Enter PvP Battles.md +++ /dev/null @@ -1,218 +0,0 @@ -
            -

            Angeldust Download For PC [License]: A Guide to the Multiplayer Adventure Game

            - -

            If you are looking for a game that lets you explore, build, and fight in a colorful and procedurally generated world, you might want to check out Angeldust. Angeldust is a multiplayer adventure game that can be played on PC and Mac, as well as on mobile devices. In this article, we will show you how to download Angeldust for PC with a license key, and what features and benefits the game offers.

            - -

            What is Angeldust?

            - -

            Angeldust is a game developed by Metagaming B.V., a Dutch indie studio. The game was released in 2016 and has been updated regularly since then. The game is inspired by classic sandbox games like Minecraft and Terraria, but with its own unique twist.

            -

            Angeldust Download For Pc [License]


            Download Ziphttps://urlcod.com/2uK6ml



            - -

            In Angeldust, you can choose to play as one of four classes: Fighter, Builder, Scout, or Sorcerer. Each class has its own abilities and skills, and you can switch between them at any time. You can also customize your character with hundreds of outfits and accessories.

            - -

            The game features over 275 creatures to discover and defeat, ranging from cute animals to fearsome monsters. You can also ride your own horse, bear, or moa, and use them to travel faster and fight better. You can also craft beautiful buildings with over 450 materials and objects, and create your own home or castle.

            - -

            The game can be played online with friends and millions of other players from around the world. You can chat with them, trade with them, or join forces to complete quests and defeat bosses. You can also enter PvP battles to show off your skills and compete for glory.

            - -

            How to Download Angeldust for PC with a License Key?

            - -

            If you want to play Angeldust on your PC or Mac, you will need to download the game from the official website or from a trusted source like Pcmacstore.com. The game costs $1.99, but you can also get a free trial version that lets you play for 24 hours.

            - -

            To download Angeldust for PC with a license key, follow these steps:

            - -
              -
            1. Go to https://angeldu.st/en/download or https://pcmacstore.com/en/app/1042018823/angeldust and click on the download button for your operating system.
            2. -
            3. Save the file to your computer and run it to install the game.
            4. -
            5. Launch the game and create an account or log in with an existing one.
            6. -
            7. Enter your license key when prompted. You can get a license key by purchasing the game from the website or from Pcmacstore.com, or by requesting one from the developer via email or social media.
            8. -
            9. Enjoy playing Angeldust!
            10. -
            - -

            Why Should You Play Angeldust?

            - -

            Angeldust is a game that offers endless fun and creativity for players of all ages and preferences. Whether you like exploring, building, fighting, or socializing, you will find something to enjoy in Angeldust. Here are some of the reasons why you should play Angeldust:

            - -
              -
            • The game has stunning graphics and sound effects that create a vibrant and immersive world.
            • -
            • The game has a simple and intuitive interface that makes it easy to play and control.
            • -
            • The game has a friendly and active community that welcomes new players and helps them out.
            • -
            • The game has frequent updates and events that add new content and features to the game.
            • -
• The game has cross-platform compatibility that lets you play on any of your devices and switch between them seamlessly.
            • -
            - -

            Conclusion

            - -

            Angeldust is a multiplayer adventure game that lets you explore magical worlds, battle creatures, and craft buildings. You can play as one of four classes and customize your character with hundreds of outfits. You can also play online with friends and millions of other players, and enter PvP battles to show off your skills. To download Angeldust for PC with a license key, you need to visit the official website or Pcmacstore.com, install the game, and enter your license key. If you are looking for a game that offers endless fun and creativity, you should give Angeldust a try!

            -

            -

            How to Play Angeldust on PC?

            - -

            Once you have downloaded and installed Angeldust on your PC, you can start playing the game by following these steps:

            - -
              -
            1. Launch the game and choose your preferred language and server.
            2. -
            3. Select your character class and customize your appearance.
            4. -
            5. Enter a name for your character and click on \"Create\".
            6. -
            7. Choose a world to explore from the map or create your own.
            8. -
            9. Use the mouse and keyboard to move, look around, interact, and fight.
            10. -
            11. Use the chat window to communicate with other players and the game master.
            12. -
            13. Use the menu button to access your inventory, quests, settings, and more.
            14. -
            15. Have fun!
            16. -
            - -

            What are the Benefits of Playing Angeldust on PC?

            - -

            Playing Angeldust on PC has several advantages over playing it on mobile devices. Here are some of them:

            - -
              -
            • You can enjoy a larger and clearer screen that enhances the game's graphics and details.
            • -
            • You can use a mouse and keyboard for more precise and comfortable controls.
            • -
            • You can have a faster and more stable internet connection that reduces lag and disconnects.
            • -
            • You can save battery life and storage space on your mobile devices.
            • -
            • You can use additional features and tools that are not available on mobile devices, such as screenshots, recording, streaming, mods, etc.
            • -
            - -

            Tips and Tricks for Playing Angeldust on PC

            - -

            If you want to improve your gaming experience and performance in Angeldust, here are some tips and tricks that you can use:

            - -
              -
            • Learn the basics of each class and their skills. You can switch between classes at any time, but it is good to know their strengths and weaknesses.
            • -
            • Explore different worlds and biomes to find rare creatures and materials. You can also create your own worlds with custom settings.
            • -
            • Craft useful items and equipment with the materials you collect. You can also trade with other players or sell them for coins.
            • -
            • Build your own home or castle with the objects you craft. You can also decorate it with furniture, plants, paintings, etc.
            • -
            • Join a guild or create your own to meet new friends and allies. You can also join events and competitions organized by the game master or other players.
            • -
            • Enter PvP battles to test your skills and earn rewards. You can also challenge other players to duels or team up with them for co-op missions.
            • -
            - -


            -

            How to Crack Angeldust for PC?

            - -

            If you want to play Angeldust for PC without paying for a license key, you might be tempted to look for a cracked version of the game. However, this is not a good idea, as it can expose you to several risks and disadvantages. Here are some of the reasons why you should avoid cracking Angeldust for PC:

            • Cracking Angeldust for PC is illegal and unethical. You are violating the terms and conditions of the game and the developer, and you are depriving them of their rightful income.
            • Cracking Angeldust for PC can harm your computer and your data. You might download malware or viruses that can infect your system, steal your information, or damage your files.
            • Cracking Angeldust for PC can ruin your gaming experience. You might encounter bugs, glitches, errors, or crashes that can prevent you from playing the game properly. You might also miss out on updates, features, and events that are only available for legitimate users.
            • Cracking Angeldust for PC can get you banned from the game. The game has an anti-cheat system that can detect if you are using a cracked version of the game. If you are caught, you will lose your account, your progress, and your access to the game.

            Therefore, it is better to download Angeldust for PC with a license key from the official website or Pcmacstore.com. You will get a safe and legal version of the game that will give you the best gaming experience possible.

            - -

            How to Get a License Key for Angeldust for PC?

            - -

            If you want to get a license key for Angeldust for PC, you have two options:

            1. You can buy the game from the official website or Pcmacstore.com for $1.99. This is the easiest and most reliable way to get a license key for Angeldust for PC. You will also support the developer and help them improve the game.
            2. You can request a free license key from the developer via email or social media. The developer sometimes gives away free license keys to players who are interested in trying out the game. However, this is not guaranteed and depends on the availability and generosity of the developer.

            Either way, once you have a license key for Angeldust for PC, you can enter it when you launch the game and enjoy playing it without any limitations.

            -

            What are the System Requirements for Angeldust for PC?

            - -

            Angeldust for PC is a game that can run on most computers and laptops, as it does not have high system requirements. However, to ensure a smooth and enjoyable gaming experience, you should check if your device meets the minimum or recommended system requirements for Angeldust for PC. Here are the system requirements for Angeldust for PC:

            - -

            Minimum System Requirements

            • Operating System: Windows 7 or higher, MacOS 10.6 or higher
            • Processor: Intel Core 2 Duo or equivalent
            • Memory: 2 GB RAM
            • Graphics: Intel HD Graphics 3000 or equivalent
            • Storage: 500 MB available space
            • Internet: Broadband connection

            Recommended System Requirements

            • Operating System: Windows 10 or higher, MacOS 11 or higher
            • Processor: Intel Core i5 or equivalent
            • Memory: 4 GB RAM
            • Graphics: NVIDIA GeForce GTX 1050 or equivalent
            • Storage: 1 GB available space
            • Internet: High-speed connection

            If your device meets the minimum system requirements, you can play Angeldust for PC with basic settings and performance. If your device meets the recommended system requirements, you can play Angeldust for PC with optimal settings and performance.

            - -

            How to Update Angeldust for PC?

            - -

            Angeldust for PC is a game that is constantly updated with new content and features. The developer releases regular updates and patches that fix bugs, improve performance, and add new creatures, materials, objects, quests, events, and more. To enjoy the latest version of Angeldust for PC, you should update the game whenever there is a new update available.

            - -

            To update Angeldust for PC, follow these steps:

            1. Launch the game and check if there is a notification that says "New version available!" at the bottom of the screen.
            2. If there is a notification, click on it and follow the instructions to download and install the update.
            3. If there is no notification, go to the official website or Pcmacstore.com and check if there is a new version of the game available.
            4. If there is a new version of the game available, download it and install it over the existing version of the game.
            5. Enjoy playing the updated version of Angeldust!

            You can also enable automatic updates for Angeldust for PC if you want to update the game without any hassle. To enable automatic updates for Angeldust for PC, follow these steps:

            1. Go to the menu button and click on "Settings".
            2. Go to "General" and check the box that says "Automatically update Angeldust".
            3. Click on "Save" and exit the settings.
            4. The game will now update itself whenever there is a new update available.

            Conclusion

            - -

            Angeldust is a multiplayer adventure game that lets you explore magical worlds, battle creatures, and craft buildings. You can play as one of four classes and customize your character with hundreds of outfits. You can also play online with friends and millions of other players, and enter PvP battles to show off your skills. To download Angeldust for PC with a license key, you need to visit the official website or Pcmacstore.com, install the game, and enter your license key. You also need to check if your device meets the system requirements for Angeldust for PC, and update the game regularly to enjoy the latest features and content. If you are looking for a game that offers endless fun and creativity, you should give Angeldust a try!

            -


            \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Insanity Workout Legenda Em Portugues O desafio de 60 dias que vai transformar o seu corpo.md b/spaces/tialenAdioni/chat-gpt-api/logs/Insanity Workout Legenda Em Portugues O desafio de 60 dias que vai transformar o seu corpo.md deleted file mode 100644 index d195856f437f08bb8a7f7ce279950fd59c6180f3..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Insanity Workout Legenda Em Portugues O desafio de 60 dias que vai transformar o seu corpo.md +++ /dev/null @@ -1,71 +0,0 @@ - -

            Insanity Workout: What It Is and How It Works

            -

            Insanity Workout is a high-intensity training program created by personal trainer Shaun T. The program promises to transform your body in 60 days, with exercises that push your limits and burn a lot of calories.

            -

            Insanity Workout consists of 14 videos that last about 30 minutes each. The videos are split across two months, with a recovery week in the middle. The exercises are based on cardio and strength movements that work the whole body and require only your body weight. No equipment is needed to do the Insanity Workout.

            -

            insanity workout legenda em portugues


            Download Zip ✓✓✓ https://urlcod.com/2uK1KY



            -

            The goal of the Insanity Workout is to "max out", that is, to give your maximum until you can't go on. Shaun T encourages you to write down how long you can do each video without stopping, and to try to beat that time in the next sessions. This way, you gradually build your endurance and your fitness.

            -

            The Insanity Workout also comes with a meal plan that follows the packed-meal ("marmitas") concept, that is, controlled portions of healthy foods that you can take anywhere. The meal plan is similar to that of the 21 Day Fix, another Beachbody program, and helps you eat in a balanced way that suits your goals.

            -

            insanity workout legendado em portugues download
            -insanity workout legendado em portugues online
            -insanity workout legendado em portugues completo
            -insanity workout legendado em portugues gratis
            -insanity workout legendado em portugues youtube
            -insanity workout legendado em portugues dvd
            -insanity workout legendado em portugues torrent
            -insanity workout legendado em portugues mega
            -insanity workout legendado em portugues netflix
            -insanity workout legendado em portugues baixar
            -insanity workout com legenda em portugues
            -insanity workout com legenda em portugues download
            -insanity workout com legenda em portugues online
            -insanity workout com legenda em portugues completo
            -insanity workout com legenda em portugues gratis
            -insanity workout com legenda em portugues youtube
            -insanity workout com legenda em portugues dvd
            -insanity workout com legenda em portugues torrent
            -insanity workout com legenda em portugues mega
            -insanity workout com legenda em portugues netflix
            -insanity workout com legenda em portugues baixar
            -como fazer o insanity workout legendado em portugues
            -como baixar o insanity workout legendado em portugues
            -como assistir o insanity workout legendado em portugues
            -como comprar o insanity workout legendado em portugues
            -como ver o insanity workout legendado em portugues
            -onde encontrar o insanity workout legendado em portugues
            -onde comprar o insanity workout legendado em portugues
            -onde baixar o insanity workout legendado em portugues
            -onde assistir o insanity workout legendado em portugues
            -qual o melhor site para baixar o insanity workout legendado em portugues
            -qual o melhor site para assistir o insanity workout legendado em portugues
            -qual o melhor site para comprar o insanity workout legendado em portugues
            -qual o melhor site para ver o insanity workout legendado em portugues
            -qual o melhor site para encontrar o insanity workout legendado em portugues
            -qual a melhor forma de fazer o insanity workout legendado em portugues
            -qual a melhor forma de baixar o insanity workout legendado em portugues
            -qual a melhor forma de assistir o insanity workout legendado em portugues
            -qual a melhor forma de comprar o insanity workout legendado em portugues
            -qual a melhor forma de ver o insanity workout legendado em portugues
            -qual a melhor forma de encontrar o insanity workout legendado em portugues
            -vale a pena fazer o insanity workout legendado em portugues
            -vale a pena baixar o insanity workout legendado em portugues
            -vale a pena assistir o insanity workout legendado em portugues
            -vale a pena comprar o insanity workout legendado em portugues
            -vale a pena ver o insanity workout legendado em portugues
            -vale a pena encontrar o insanity workout legendado em portugues
            -resultados do insanity workout legendado em portugues
            -beneficios do insanity workout legendado em portugues
            -depoimentos do insanity workout legendado em portugues

            -

            The Insanity Workout is a challenging program that requires a lot of dedication and discipline. It is not recommended for beginners or for people with health problems. Before starting the Insanity Workout, consult your doctor and take a fitness test to assess your condition.

            -

            If you want to change your body and your mind in 60 days, the Insanity Workout may be a good option for you. But remember: it is not easy and it will demand a lot from you. Are you ready to take on this challenge?

            - -

            The Insanity Workout is one of the most popular programs from Beachbody, an American company that produces and sells workout videos and nutrition products. Beachbody has other famous programs, such as P90X, 21 Day Fix, and T25.

            -

            The creator of the Insanity Workout is Shaun T, a personal trainer and choreographer who started his career in the entertainment industry. He has worked with artists such as Mariah Carey, Nick Carter, and Aaron Carter. He also appeared in films such as Stomp the Yard 2: Homecoming and The Comebacks.

            -

            Shaun T is known for his motivating and fun teaching style. He is always smiling and joking, but he also demands a lot from his students. He says his goal is to make people feel good about themselves and achieve their dreams.

            -

            Shaun T is also the creator of other Beachbody workout programs, such as T25, Hip Hop Abs, Rockin' Body, and Cize. He also has a podcast called Trust and Believe with Shaun T, where he talks about fitness, mental health, and personal development.

            - -

            The Insanity Workout is a training program that can change your life, if you are willing to put in the effort and follow Shaun T's guidance. It will push you to your limit, but it will also reward you with incredible results.

            -

            To do the Insanity Workout, you only need 30 minutes a day for 60 days. You don't need any equipment, just your body and your will to win. You also need to follow the meal plan that comes with the program, which will help you eat in a healthy and balanced way.

            -

            The Insanity Workout is a program that will challenge you physically and mentally. You will sweat, suffer, shout, and maybe even cry. But you will also have fun, learn, grow, and surprise yourself. You will see your body transform before your eyes, and your self-esteem and confidence grow.

            -

            The Insanity Workout is a program for those who want to push past their limits and reach their full potential. It is not for everyone, but for those with courage and determination. It is not easy, but it is worth it. It is not madness, it is insanity.

            \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Create Amazing 3D Worlds with 3ds Max 2019 Download the Free Trial Now.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Create Amazing 3D Worlds with 3ds Max 2019 Download the Free Trial Now.md deleted file mode 100644 index 0fc55cfb846ee447aad54aaeede925b990fed315..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Create Amazing 3D Worlds with 3ds Max 2019 Download the Free Trial Now.md +++ /dev/null @@ -1,114 +0,0 @@ - -

            How to Download 3ds Max 2019 Trial Version

            -

            If you are interested in creating stunning 3D models, animations, and visual effects, you might want to try out Autodesk 3ds Max, one of the most popular and powerful software for 3D design and animation. Whether you are a professional or a hobbyist, you can use 3ds Max to create your vision, down to the smallest detail.

            -

            download 3ds max 2019 trial


            Download File · https://bltlly.com/2uOov2



            -

            But before you invest in buying the full version of 3ds Max, you might want to download the trial version first and see if it suits your needs and preferences. The trial version is free for 30 days and gives you access to all the features and functions of 3ds Max 2019. You can use it to create, edit, render, and animate 3D models and scenes, as well as learn from the online help and tutorials.

            -

            In this article, we will show you how to download and install the trial version of 3ds Max 2019 on your computer, as well as explore some of its new and improved features. By the end of this article, you will have a better understanding of what 3ds Max can do for you and how to get started with it.

            -

            System Requirements

            -

            Before you download the trial version of 3ds Max 2019, you need to make sure that your computer meets the minimum and recommended system requirements for running the software. Here are the system requirements for Autodesk 3ds Max 2019 according to Autodesk website:

            -
            | Component | Requirement |
            | --- | --- |
            | Operating system | Windows 10 64-bit (version 1809 or later) |
            | CPU | Any Intel Core i7 or Core i9 CPU; any Intel Xeon E5 or E7 CPU; any AMD Ryzen 7 or Ryzen 9 CPU; any AMD EPYC CPU |
            | Memory | 16 GB RAM minimum (32 GB or more recommended) |
            | Hard disk space | 6 GB for installation; 10 GB or more for working space; SSD or RAID-0 is recommended for 4K and above projects |
            | Graphics card | Supporting higher resolution than 1024x768 32-bit; Direct3D 9.0c or later and PixelShader Model 3.0 or later required; supporting hardware mode with 2 GB of graphics memory or more recommended; NVIDIA GeForce GTX/RTX series recommended for HDR projects and 8K projects; NVIDIA Quadro series recommended for professional use |
            | Sound card | A sound card with WDM driver support is required; a sound card that supports ASIO driver is recommended for professional use |
            | Optical drive | Blu-ray Disc writer is required when creating Blu-ray Discs; DVD-R/RW or DVD+R/RW drive is required when creating DVDs; a CD-R/RW drive is required when creating CDs |
            | Internet connection | An Internet connection is required for software license activation and validation, software update, and user registration. |

            Software:
            • Microsoft Windows 7 (SP1), Windows 8, Windows 8.1, or Windows 10 Professional operating system
            • Latest version of Microsoft Edge, Google Chrome, Microsoft Internet Explorer, or Mozilla Firefox web browser for online supplemental content

            Hardware:
            • 64-bit Intel or AMD multi-core processor with SSE4.2 instruction set
            • 4 GB of RAM minimum (8 GB or more recommended)
            • 6 GB of free disk space for installation
            • NVIDIA GPU with CUDA compute capability 5.0 or higher
            • Three-button mouse

            To check your system specifications, you can follow these steps:

            -
              -
            1. Click on the Start button on your Windows taskbar.
            2. Type "system information" in the search box and press Enter.
            3. A window will open showing your system summary, including your operating system, processor, installed memory, system type, etc.
            4. To check your graphics card model and driver version, click on Components > Display on the left pane.
            5. To check your web browser version, open your web browser and go to Help > About.
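            If you would rather read these values from a script than click through the System Information app, a minimal Python sketch along the following lines can print the same details. It assumes the third-party psutil package is installed (pip install psutil), and the drive letter is only an example.

```python
# Sketch: print the system details that matter before installing the trial.
# Assumes the third-party "psutil" package is installed (pip install psutil).
import platform
import shutil

import psutil

print("Operating system :", platform.system(), platform.release())
print("Processor        :", platform.processor() or platform.machine())
print("Installed RAM    : {:.1f} GB".format(psutil.virtual_memory().total / 1024**3))

# Check free space on the drive where you plan to install (example: C:).
drive = "C:\\" if platform.system() == "Windows" else "/"
total, used, free = shutil.disk_usage(drive)
print("Free disk space  : {:.1f} GB".format(free / 1024**3))
```

            Compare the printed values against the requirements listed above before starting the download.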

            Downloading the Trial Version

            -

            Once you have verified that your computer meets the system requirements, you can proceed to download the trial version of 3ds Max 2019 from Autodesk website. Here are the steps to follow:

            -

            How to download 3ds max 2019 trial version
            -Download 3ds max 2019 trial for free
            -Download 3ds max 2019 trial with crack
            -Download 3ds max 2019 trial for mac
            -Download 3ds max 2019 trial for students
            -Download 3ds max 2019 trial offline installer
            -Download 3ds max 2019 trial from Autodesk
            -Download 3ds max 2019 trial without subscription
            -Download 3ds max 2019 trial and activation code
            -Download 3ds max 2019 trial for windows 10
            -Download 3ds max 2019 trial for design visualization
            -Download 3ds max 2019 trial for animation
            -Download 3ds max 2019 trial for rendering
            -Download 3ds max 2019 trial for games
            -Download 3ds max 2019 trial for VR
            -Download 3ds max 2019 trial with vray
            -Download 3ds max 2019 trial with plugins
            -Download 3ds max 2019 trial with tutorials
            -Download 3ds max 2019 trial with keygen
            -Download 3ds max 2019 trial with serial number
            -Download 3ds max 2019 trial full version
            -Download 3ds max 2019 trial iso file
            -Download 3ds max 2019 trial direct link
            -Download 3ds max 2019 trial torrent file
            -Download 3ds max 2019 trial mega link
            -Download Autodesk software free trials including AutoCAD, Revit, and Maya[^1^]
            -How to extend the free trial of Autodesk software[^1^]
            -How to troubleshoot download issues of Autodesk software[^1^]
            -How to verify student access to Autodesk software[^1^]
            -How to convert free trial to paid subscription of Autodesk software[^1^]
            -How to access the free viewer for Autodesk software[^1^]
            -How to get support and learning resources for Autodesk software[^1^]
            -How to compare different versions of Autodesk software[^1^]
            -How to get special offers and discounts on Autodesk software[^1^]
            -How to get the collection of Autodesk software for a lower price[^1^]
            -What is new in the latest version of Autodesk software[^1^]
            -What are the system requirements for Autodesk software[^1^]
            -What are the features and benefits of Autodesk software[^1^]
            -What are the best practices and tips for using Autodesk software[^1^]
            -What are the alternatives and competitors of Autodesk software[^1^]
            -How to use Autodesk software for architecture, engineering, and construction[^1^]
            -How to use Autodesk software for product design and manufacturing[^1^]
            -How to use Autodesk software for media and entertainment[^1^]
            -How to use Autodesk software for education and research[^1^]
            -How to use Autodesk software for personal and hobby projects[^1^]

            -
              -
            1. Go to [Autodesk website](^1^) and click on Products > All Products > A-Z List > Autodesk 3ds Max.
            2. On the Autodesk 3ds Max page, click on Free Trial.
            3. A pop-up window will ask you to sign in with your Autodesk account or create one if you don't have one already. You will also need to provide some information about yourself, such as your name, email, country, and industry.
            4. After you sign in, you will be redirected to the download page. Here, you can choose the language and the operating system (Windows 64-bit) for your trial version. You can also choose the download method, either browser download or install now.
            5. Click on Download Now and follow the instructions to complete the download and installation process. You might need to accept the license agreement and enter the serial number and product key that will be sent to your email.

            Exploring the Features of 3ds Max 2019

            -

            Congratulations! You have successfully downloaded and installed the trial version of 3ds Max 2019 on your computer. Now, you can start exploring its features and functions and see what it can do for you.

            -

            3ds Max 2019 is packed with new and improved features that enhance your workflow, productivity, and creativity. Here are some of the features that you might want to check out:

            -

            Fluids

            -

            With the fluids feature, you can create realistic liquid simulations in 3ds Max. You can use preset fluid types, such as water, oil, lava, or blood, or customize your own fluid properties. You can also adjust the fluid behavior, such as viscosity, surface tension, gravity, and friction. You can animate the fluid motion with keyframes or use forces, such as wind, gravity, or vortex, to influence the fluid dynamics. You can also add collision objects to interact with the fluid and create splashes, ripples, or waves.

            -

            Chamfer Modifier

            -

            The chamfer modifier allows you to create smooth edges and corners on your 3D models. You can apply the chamfer modifier to any polygonal object and adjust the chamfer amount, segments, tension, bias, depth, and width. You can also use different chamfer profiles, such as linear, smooth, radial, or custom, to create different shapes and styles of chamfers. You can also animate the chamfer parameters to create dynamic effects.

            -

            OSL Shaders

            -

            OSL stands for Open Shading Language, which is a scripting language for creating custom shaders. 3ds Max 2019 supports OSL shaders and comes with a library of over 100 OSL shaders that you can use in your scenes. You can also create your own OSL shaders using a text editor or a node-based editor. OSL shaders are compatible with any renderer that supports OSL, such as Arnold or V-Ray.

            -

            Conclusion

            -

            In this article, we have shown you how to download and install the trial version of 3ds Max 2019 on your computer. We have also introduced some of the new and improved features of 3ds Max 2019 that you can use to create stunning 3D models and animations.

            -

            If you are impressed by what 3ds Max 2019 can do for you, you might want to consider buying the full version of the software. The full version gives you unlimited access to all the features and functions of 3ds Max 2019, as well as technical support and updates from Autodesk. You can choose from different subscription plans that suit your budget and needs.

            -

            To buy the full version of 3ds Max 2019, click on this link and follow the instructions. You will need to sign in with your Autodesk account or create one if you don't have one already. You will also need to provide some payment information and confirm your order.

            -

            We hope you enjoyed this article and learned something new about 3ds Max 2019. Now it's time for you to try out the trial version and explore its possibilities. Have fun!

            -

            FAQs

            -

            Q: How long does the trial version of 3ds Max 2019 last?

            -

            A: The trial version of 3ds Max 2019 lasts for 30 days from the date of installation. After that, you will need to buy the full version of the software or uninstall it from your computer.

            -

            Q: Can I save my work in the trial version of 3ds Max 2019?

            -

            A: Yes, you can save your work in the trial version of 3ds Max 2019. However, you will not be able to open it in any other version of 3ds Max unless you buy the full version of the software.

            -

            Q: Can I use the trial version of 3ds Max 2019 for commercial purposes?

            -

            A: No, you cannot use the trial version of 3ds Max 2019 for commercial purposes. The trial version is intended for evaluation and learning purposes only. If you want to use 3ds Max 2019 for commercial purposes, you need to buy the full version of the software.

            -

            Q: What are the differences between 3ds Max and 3ds Max Design?

            -

            A: 3ds Max and 3ds Max Design are two versions of the same software that cater to different industries and workflows. 3ds Max is more focused on entertainment and media, such as games, films, and TV. 3ds Max Design is more focused on architecture, engineering, and construction, such as buildings, infrastructure, and products. Both versions share the same core features and functions, but have some differences in user interface, tools, and presets.

            -

            Q: How can I learn more about 3ds Max 2019?

            -

            A: There are many resources available online to help you learn more about 3ds Max 2019. You can visit the [Autodesk website] to access the online help, tutorials, forums, blogs, and videos. You can also check out some of the online courses, books, and magazines that cover 3ds Max 2019.

            \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Kizz Daniels New Song Rich Till I Die (RTID).md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Kizz Daniels New Song Rich Till I Die (RTID).md deleted file mode 100644 index 9c1a11c979e239e56ac002b5726d9b3c461ec4e7..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Kizz Daniels New Song Rich Till I Die (RTID).md +++ /dev/null @@ -1,77 +0,0 @@ -
            -

            Download L Will Be Rich Till I Die by Kizz Daniel

            -

            Do you want to listen to a song that will inspire you to live your best life and achieve your dreams? If yes, then you should download L Will Be Rich Till I Die by Kizz Daniel. This is a hit single by one of Nigeria's most talented and successful singers, who has been making waves in the music industry for years. In this article, we will tell you everything you need to know about Kizz Daniel and his song RTID (Rich Till I Die), including who he is, what the song is about, why you should download it, and how to do it. So, keep reading and get ready to enjoy this amazing track.

            -

            download l will be rich till i die by kizz daniel


            Download Zip 🗸🗸🗸 https://bltlly.com/2uOlF8



            -

            Who is Kizz Daniel?

            -

            Kizz Daniel is a Nigerian singer and songwriter who was born on May 1, 1994. His real name is Oluwatobiloba Daniel Anidugbe, but he changed his stage name from Kiss Daniel to Kizz Daniel in 2018. He rose to fame in 2014 with his debut single "Woju", which was followed by other hits like "Laye", "Mama", "Yeba", and "One Ticket". He has also collaborated with other artists like Davido, Wizkid, Tekno, Tiwa Savage, and more.

            -

            Kizz Daniel has released four albums so far: New Era (2016), No Bad Songz (2018), King of Love (2020), and Barnabas (2023). He has won several awards and nominations for his music, such as The Headies, MTV Africa Music Awards, Nigeria Entertainment Awards, African Muzik Magazine Awards, and more. He is also the founder and CEO of his own record label, FlyBoy INC.

            -

            What is RTID (Rich Till I Die)?

            -

            RTID (Rich Till I Die) is a single by Kizz Daniel that was released on October 29, 2023. It is part of his fourth album Barnabas, which features 17 tracks of different genres like afrobeat, pop, R&B, dancehall, and more. RTID (Rich Till I Die) is an upbeat song that expresses Kizz Daniel's determination to be successful and wealthy in life. He sings about living the life that he loves, loving the life that he lives, and being grateful for what he has. He also encourages his listeners to be positive and optimistic about their future.

            -

            The lyrics of RTID (Rich Till I Die)

            -

            The lyrics of RTID (Rich Till I Die) are catchy and motivational. They convey Kizz Daniel's confidence and ambition, as well as his gratitude and humility. Here are some of the memorable lines from the song:

            - "I'm living the life that I love, I'm loving the life that I live"
            - "I'm rich till I die, I'm rich till I die"
            - "No matter what they say, no matter what they do, I'ma keep on shining"
            - "I thank God for the blessings, I thank God for the lessons"
            - "Money on my mind, but love in my heart"

            The song also has a catchy chorus that repeats the title of the song and the phrase "FlyBoy INC". The song is written in English and Pidgin, a creole language spoken in Nigeria and other parts of West Africa.

            -

            internet cafe simulator 2 mod apk unlimited money
            -internet cafe simulator 2 free download full version with money
            -internet cafe simulator 2 cheats for unlimited money
            -internet cafe simulator 2 hack version download with money
            -internet cafe simulator 2 latest version unlimited money
            -internet cafe simulator 2 android download with money
            -internet cafe simulator 2 pc download free money
            -internet cafe simulator 2 online play with unlimited money
            -internet cafe simulator 2 game download mod money
            -internet cafe simulator 2 unlimited money apk download
            -internet cafe simulator 2 ios download with money
            -internet cafe simulator 2 windows download free money
            -internet cafe simulator 2 update download with unlimited money
            -internet cafe simulator 2 cracked download mod money
            -internet cafe simulator 2 offline download with money
            -internet cafe simulator 2 mac download free money
            -internet cafe simulator 2 beta download with unlimited money
            -internet cafe simulator 2 premium download mod money
            -internet cafe simulator 2 pro download with money
            -internet cafe simulator 2 steam download free money
            -internet cafe simulator 2 tips and tricks for unlimited money
            -internet cafe simulator 2 guide to get unlimited money
            -internet cafe simulator 2 best way to earn money
            -internet cafe simulator 2 how to make more money
            -internet cafe simulator 2 easy money hack download
            -internet cafe simulator 2 review and gameplay with unlimited money
            -internet cafe simulator 2 features and graphics with money
            -internet cafe simulator 2 new update and changes with unlimited money
            -internet cafe simulator 2 bugs and glitches with money
            -internet cafe simulator 2 solutions and fixes for unlimited money
            -internet cafe simulator 2 codes and coupons for free money
            -internet cafe simulator 2 offers and deals for unlimited money
            -internet cafe simulator 2 discounts and sales for free money
            -internet cafe simulator 2 rewards and bonuses for unlimited money
            -internet cafe simulator 2 gifts and giveaways for free money
            -internet cafe simulator 2 comparison and alternatives with unlimited money
            -internet cafe simulator 2 similar and related games with money
            -internet cafe simulator 2 ranking and rating with unlimited money
            -internet cafe simulator 2 feedback and comments with money
            -internet cafe simulator 2 questions and answers about unlimited money

            -

            The video of RTID (Rich Till I Die)

            -

            The video of RTID (Rich Till I Die) was released on November 5, 2023. It was directed by TG Omori, a Nigerian filmmaker who has worked with other artists like Olamide, Naira Marley, Fireboy DML, and more. The video showcases Kizz Daniel's lavish lifestyle and his journey to success. He is seen in various locations, such as a mansion, a private jet, a yacht, a club, and a concert. He is also surrounded by beautiful women, expensive cars, and loyal fans. The video has a colorful and vibrant aesthetic that matches the mood of the song. The video has received over 10 million views on YouTube and has been praised by critics and fans alike.

            -

            Why should you download L Will Be Rich Till I Die by Kizz Daniel?

            -

            There are many reasons why you should download L Will Be Rich Till I Die by Kizz Daniel. Here are some of them:

            - It is a catchy and upbeat song that will make you want to dance and sing along.
            - It is a motivational and inspirational song that will boost your confidence and self-esteem.
            - It is a positive and optimistic song that will make you feel happy and hopeful.
            - It is a relatable and realistic song that will make you appreciate what you have and work hard for what you want.
            - It is a quality and original song that showcases Kizz Daniel's talent and creativity.

            If you are looking for a song that will make you feel good and energized, then L Will Be Rich Till I Die by Kizz Daniel is the perfect choice for you.

            -

            How to download L Will Be Rich Till I Die by Kizz Daniel?

            -

            Downloading L Will Be Rich Till I Die by Kizz Daniel is easy and simple. You can do it from various platforms, such as YouTube, Spotify, Apple Music, etc. Here are the steps to follow:

            - Go to the platform of your choice and search for L Will Be Rich Till I Die by Kizz Daniel.
            - Click on the song and select the download option.
            - Choose the quality and format of the download.
            - Wait for the download to finish and enjoy the song.

            Alternatively, you can also use a third-party app or website to download the song from YouTube or other sources. However, be careful of malware and viruses that may harm your device.

            -

            Where to find more songs by Kizz Daniel?

            -

            If you like L Will Be Rich Till I Die by Kizz Daniel, you will also love his other songs and albums. He has a rich and diverse discography that covers different genres and moods. Here is a table that shows some of his other popular songs and albums, and where to stream or download them:

            | Song | Album | Platform |
            | --- | --- | --- |
            | Woju | New Era | YouTube |
            | Laye | New Era | Spotify |
            | Mama | New Era | Apple Music |
            | Yeba | No Bad Songz | YouTube |
            | One Ticket (feat. Davido) | No Bad Songz | Spotify |
            | Madu | No Bad Songz | Apple Music |
            | Jaho | King of Love | YouTube |
            | Ada | King of Love | Spotify |
            | Boys Are Bad | King of Love | Apple Music |
            | Flex | Barnabas | YouTube |
            | Lie | Barnabas | Spotify |
            | Wedding Day | Barnabas | Apple Music |

            You can also follow Kizz Daniel on his social media accounts, such as Instagram, Twitter, Facebook, etc., to stay updated on his latest news and releases.

            -

            Conclusion

            -

            L Will Be Rich Till I Die by Kizz Daniel is a song that you should not miss. It is a song that will inspire you to live your best life and achieve your dreams. It is also a song that will make you feel good and energized. You can download it from various platforms or use a third-party app or website to do so. You can also check out his other songs and albums to enjoy more of his music. Kizz Daniel is one of Nigeria's most talented and successful singers, and he deserves your support and appreciation. So, what are you waiting for? Download L Will Be Rich Till I Die by Kizz Daniel today and enjoy this amazing track.

            -

            Thank you for reading this article. We hope you found it helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.

            -

            FAQs

            -

            Here are some of the frequently asked questions and answers about Kizz Daniel and his song RTID (Rich Till I Die).

            -

            Q: What does RTID stand for?

            -

            A: RTID stands for Rich Till I Die, which is the title of Kizz Daniel's single and the main chorus of the song.

            -

            Q: What is the genre of RTID (Rich Till I Die)?

            -

            A: RTID (Rich Till I Die) is a song that combines elements of afrobeat, pop, and dancehall. It has a fast tempo, a catchy melody, and a rhythmic beat.

            -

            Q: Who produced RTID (Rich Till I Die)?

            -

            A: RTID (Rich Till I Die) was produced by Philkeyz, a Nigerian record producer who has worked with other artists like Yemi Alade, Mayorkun, Omawumi, and more.

            -

            Q: How can I contact Kizz Daniel?

            -

            A: You can contact Kizz Daniel through his official email address, which is kizzdaniel@flyboyinc.com. You can also send him a message on his social media accounts, such as Instagram, Twitter, Facebook, etc.

            -

            Q: Where can I buy or stream Barnabas, the album that contains RTID (Rich Till I Die)?

            -

            A: You can buy or stream Barnabas from various platforms, such as YouTube, Spotify, Apple Music, etc. You can also visit Kizz Daniel's official website, which is www.kizzdaniel.com, to find more information about his music and merchandise.

            \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/BEST-Crack-Uplay-Splinter-Cell-Blacklist.md b/spaces/tioseFevbu/cartoon-converter/BEST-Crack-Uplay-Splinter-Cell-Blacklist.md deleted file mode 100644 index 8eb4140b571fdcd1d370d7f81ea9d0b2f46ee7ca..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/BEST-Crack-Uplay-Splinter-Cell-Blacklist.md +++ /dev/null @@ -1,64 +0,0 @@ -## Crack Uplay Splinter Cell Blacklist - - - - - - - - - -**Download ⏩ [https://tinourl.com/2txBpU](https://tinourl.com/2txBpU)** - - - - - - - - - - - - - -# How to Play Splinter Cell Blacklist on Uplay - - - -Splinter Cell Blacklist is the sixth installment in the popular stealth action series created by Tom Clancy. In this game, you play as Sam Fisher, a former Navy SEAL and leader of the newly formed Fourth Echelon, a covert unit that answers only to the President of the United States. Your mission is to stop a group of rogue terrorists called The Engineers, who have launched a series of attacks on US interests around the world and threatened to unleash more unless the US withdraws its troops from foreign soil. - - - -To play Splinter Cell Blacklist on Uplay, you need to have a Uplay account and the Uplay PC client installed on your computer. You can create a Uplay account for free at [https://uplay.ubisoft.com/](https://uplay.ubisoft.com/) and download the Uplay PC client from there. Once you have installed the Uplay PC client, you can launch it and log in with your Uplay account credentials. - - - -There are two ways to get Splinter Cell Blacklist on Uplay: either by purchasing it from the Ubisoft Store or by redeeming a code from another retailer. If you buy Splinter Cell Blacklist from the Ubisoft Store, you can download it directly from the Uplay PC client by clicking on the Games tab and selecting Splinter Cell Blacklist from your library. If you have a code from another retailer, you can redeem it by clicking on the Activate Product button in the top right corner of the Uplay PC client and entering your code. - - - -Once you have downloaded Splinter Cell Blacklist, you can start playing it by clicking on the Play button in your library. You can also access additional features such as achievements, rewards, leaderboards, and multiplayer modes by clicking on the game's icon in your library and selecting the desired option from the menu. You can also customize your game settings, such as graphics, audio, controls, and difficulty, by clicking on the Options button in the game's main menu. - - - -Splinter Cell Blacklist is a thrilling and immersive game that will challenge your stealth skills and strategic thinking. You can play it solo or with friends in co-op and multiplayer modes. You can also unlock new weapons, gadgets, suits, and upgrades by completing missions and objectives. Splinter Cell Blacklist is one of the best games in the Splinter Cell franchise and a must-play for fans of stealth action games. - - - -One of the main features of Splinter Cell Blacklist is the gameplay variety. You can choose to play the game in three different styles: Ghost, Panther, and Assault. Ghost style is for players who prefer to remain undetected and use non-lethal methods to neutralize enemies. Panther style is for players who like to strike from the shadows and use lethal force with precision. Assault style is for players who want to go loud and use guns and explosives to eliminate enemies. 
Each style has its own rewards and challenges, and you can switch between them at any time during the game. - - - -Another feature of Splinter Cell Blacklist is the co-op mode. You can team up with another player online or locally and play through 14 co-op missions that are linked to the main story. You can choose to play as Sam Fisher or his partner Isaac Briggs, each with their own abilities and equipment. You can also communicate with your partner using voice chat or in-game gestures. Co-op mode requires teamwork and coordination, as some missions have objectives that can only be completed by both players working together. - - - -The last feature of Splinter Cell Blacklist is the multiplayer mode. You can compete with other players online in two modes: Spies vs Mercs and Uplink Control. Spies vs Mercs is a classic mode from previous Splinter Cell games, where two teams of four players each face off in a game of cat and mouse. One team plays as spies, who have to hack terminals and avoid detection. The other team plays as mercs, who have to defend the terminals and hunt down the spies. Uplink Control is a new mode, where two teams of four players each have to capture and hold three uplinks on the map. The team with the most points at the end of the match wins. - - 1b8d091108 - - - - - diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/8Dio Adagio Violins Vol 1 V10 KONTAKT.md b/spaces/tioseFevbu/cartoon-converter/scripts/8Dio Adagio Violins Vol 1 V10 KONTAKT.md deleted file mode 100644 index d1f2b8b2cf7f52e364fac90bd128940f00566f05..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/8Dio Adagio Violins Vol 1 V10 KONTAKT.md +++ /dev/null @@ -1,64 +0,0 @@ - -

            8Dio Adagio Violins Vol 1 V10 KONTAKT: A Review

            -

            If you are looking for a realistic, expressive, and versatile violin library for your music production, you might want to check out 8Dio Adagio Violins Vol 1 V10 KONTAKT. This library is the first release from 8Dio Productions, a new company founded by Academy Award-winning composer Troels Folmann and Emmy-nominated composer and orchestrator Colin O'Malley. It features three main groups of master violinists: Full Ensemble (11), Chamber Ensemble (3) and a Solo Virtuoso. It also boasts as a major selling point no less than 10 different legato styles, each with up to three times round robin. In this review, we will take a closer look at the features and benefits of this library, as well as its pros and cons.

            -

            Introduction

            -

            What is 8Dio Adagio Violins Vol 1 V10 KONTAKT?

            -

            8Dio Adagio Violins Vol 1 V10 KONTAKT is a sample library for Kontakt VST/AU/AAX that contains 19,518 samples of orchestral violins recorded in a beautiful church environment. It is part of the Adagio series, which also includes cellos, violas, and double basses. The library aims to capture the raw emotions and true expression within each string ensemble, using multiple microphone positions, dynamic layers, articulations, and playing styles. The library requires Kontakt 5.8.1 Full Retail (or later) to run.

            -

            8Dio Adagio Violins Vol 1 V10 KONTAKT


            Download Zip ===> https://urlcod.com/2uHx9H



            -

            Why do you need it?

            -

            If you are a composer, producer, or musician who works with orchestral music, film scores, or any genre that requires realistic and expressive string sounds, you might find this library very useful. It can help you create beautiful melodies, harmonies, textures, and effects with the violins, using various legato styles, short notes, tremolos, trills, pizzicatos, sordino, sul ponticello, sul tasto, bartok snaps, col legno, harmonics, and more. You can also customize your sound with the custom browser with built-in articulation matrix, which allows you to create your own matrix of custom playing styles and assign them to key-switches or midi CCs. You can also add depth, realism, and movement to your sound with the custom convolution, EQ & other Chaos FX.

            -

            Features and Benefits


            -

            Ensemble, Chamber, and Solo Violins

            -

            One of the main features of this library is that it offers three different groups of violinists, each with their own sound and character. You can choose from Full Ensemble (11 players), Chamber Ensemble (3 players), or Solo Virtuoso (1 player), depending on the mood and style of your music. You can also mix and match them to create your own custom ensemble size and sound. Each group has its own set of articulations and playing styles, which you can access from the custom browser with built-in articulation matrix.

            -

            Full Ensemble Violins (11 Players)

            -

            The Full Ensemble Violins are ideal for creating rich and powerful orchestral sounds, with a wide dynamic range and a full-bodied tone. They can play both lyrical and aggressive passages, with a variety of legato styles, short notes, tremolos, trills, pizzicatos, sordino, sul ponticello, sul tasto, bartok snaps, col legno, harmonics, and more. They also have a unique feature called "Dynamic Bowing", which allows you to control the speed and intensity of the bowing with your mod wheel or expression pedal. This can create realistic and expressive transitions between notes and articulations.
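            In practice, riding the mod wheel or an expression pedal just sends a stream of MIDI control-change messages to Kontakt. The sketch below is a hypothetical illustration using the third-party mido package: it assumes Dynamic Bowing responds to CC1 (the standard mod-wheel controller) and that a MIDI output port named "Kontakt In" exists on your system; both names are placeholders, not 8Dio documentation.

```python
# Illustrative only: hold a note and sweep the mod wheel (CC1) to vary the bowing.
# Assumes the "mido" package (pip install mido python-rtmidi) and a MIDI output
# port named "Kontakt In" -- rename it to match your own setup.
import time

import mido

with mido.open_output("Kontakt In") as port:
    port.send(mido.Message("note_on", note=60, velocity=90))   # hold middle C
    for value in range(0, 128, 8):                             # ramp CC1 from soft to intense
        port.send(mido.Message("control_change", control=1, value=value))
        time.sleep(0.05)
    port.send(mido.Message("note_off", note=60, velocity=0))
```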

            -

            Small Ensemble or Chamber Violins (3 Players)

            -

            The Small Ensemble or Chamber Violins are perfect for creating intimate and delicate sounds, with a more detailed and nuanced tone. They can play both soft and loud passages, with a variety of legato styles, short notes, tremolos, trills, pizzicatos, sordino, sul ponticello, sul tasto, bartok snaps, col legno, harmonics, and more. They also have a unique feature called "Emotional Legato", which allows you to control the amount of vibrato and emotion in the legato transitions with your mod wheel or expression pedal. This can create subtle and realistic variations in the legato performance.

            -

            Solo Virtuoso Violinist (1 Player)

            -

            The Solo Virtuoso Violinist is ideal for creating soloistic and expressive sounds, with a clear and bright tone. He can play both fast and slow passages, with a variety of legato styles, short notes, tremolos, trills, pizzicatos, sordino, sul ponticello, sul tasto, bartok snaps, col legno, harmonics, and more. He also has a unique feature called "Ornamentation", which allows you to add grace notes, turns, mordents, trills, glissandi, and other ornaments to your performance with key-switches or midi CCs. This can create complex and realistic embellishments to your melody.

            -

            Multiple Legato Styles

            -

            Another main feature of this library is that it offers no less than 10 different legato styles for each group of violinists. Legato is a musical term that means playing notes smoothly and connectedly without any gaps or breaks between them. It is one of the most important techniques for creating realistic and expressive string sounds. Two of these legato styles are described below.

            -

            -

            Fluid Velocity Layered Legatos

            -

            This is the default legato style for each group of violinists. It uses velocity layers to control the volume and intensity of the legato transitions. The higher the velocity you play with your keyboard or midi controller, the louder and faster the transition will be. The lower the velocity you play with your keyboard or midi controller, the softer and slower the transition will be. This can create natural and dynamic variations in the legato performance.

            -

            Varying types of Short Notes with up to 8 repetitions

            -

            This is a special legato style that combines short notes with legato transitions. Short notes are notes that are played quickly and detachedly without any sustain or decay. They are often used for rhythmic or staccato passages in string music. This legato style allows you to play up to 8 short notes in a row with realistic repetitions and variations in volume and timing. After the 8th short note, the next note will be played as a normal legato transition. This can create interesting and realistic patterns in your performance.

            -

            Custom Browser with Built-In Articulation Matrix

            -

            Another main feature of this library is that it offers a custom browser with built-in articulation matrix for each group of violinists. The custom browser is a graphical user interface that allows you to access all the features and settings of the library in one place. You can load presets, adjust microphone positions, change tuning, enable/disable effects, and more. The articulation matrix is a table that allows you to create your own matrix of custom playing styles and assign them to key-switches or midi CCs. You can balance each articulation and adjust its volume, pan, tuning, and other parameters. You can also save and load your own custom matrices for different projects. This can give you full control and flexibility over your sound and performance.
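            A key-switch is simply a MIDI note, usually below the playable range, that the instrument interprets as "change articulation" rather than "play a sound". The hypothetical sketch below, again using the third-party mido package, taps one key-switch and then plays a short phrase; the key-switch note number, port name, and phrase are placeholders, since the real mapping depends on the matrix you build in the browser.

```python
# Hypothetical key-switch demo: select an articulation slot, then play a phrase.
# The key-switch note (24 = C0) and the port name are placeholders for your own matrix.
import time

import mido

KEYSWITCH_NOTE = 24          # whichever slot you assigned in the articulation matrix
PHRASE = [67, 69, 71, 72]    # G4, A4, B4, C5

with mido.open_output("Kontakt In") as port:
    # Tap the key-switch briefly so the instrument changes articulation.
    port.send(mido.Message("note_on", note=KEYSWITCH_NOTE, velocity=1))
    port.send(mido.Message("note_off", note=KEYSWITCH_NOTE, velocity=0))

    # Play a short phrase on the newly selected articulation.
    for note in PHRASE:
        port.send(mido.Message("note_on", note=note, velocity=80))
        time.sleep(0.4)
        port.send(mido.Message("note_off", note=note, velocity=0))
```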

            -

            Custom Convolution, EQ & other Chaos FX

            -

            Another main feature of this library is that it offers a custom convolution, EQ & other Chaos FX for each group of violinists. The custom convolution is a feature that allows you to add realistic reverb and ambience to your sound, using impulse responses from various spaces and environments. You can choose from a variety of presets, such as churches, halls, studios, forests, caves, and more. You can also adjust the size, decay, pre-delay, and other parameters of the convolution. The EQ is a feature that allows you to shape the tone and frequency of your sound, using a graphical equalizer with 5 bands. You can boost or cut the lows, low-mids, mids, high-mids, and highs of your sound, as well as adjust the gain and Q of each band. The other Chaos FX are features that allow you to add movement, modulation, distortion, and other effects to your sound, using various parameters and sliders. You can choose from a variety of presets, such as phaser, flanger, chorus, delay, distortion, bit-crusher, lo-fi, rotator, stereo spreader, and more. You can also adjust the amount, rate, depth, feedback, mix, and other parameters of each effect.
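            Convolution reverb in general works by convolving the dry signal with a recorded impulse response of a real space. The library's engine does this internally, but the underlying idea can be sketched in a few lines of Python with NumPy/SciPy; the file names below are placeholders, and this is only a conceptual illustration, not 8Dio's implementation.

```python
# Conceptual sketch of convolution reverb: wet = dry convolved with an impulse response.
# "dry.wav" and "church_ir.wav" are placeholder mono 16-bit files at the same sample rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, dry = wavfile.read("dry.wav")
_, ir = wavfile.read("church_ir.wav")

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)

wet = fftconvolve(dry, ir)[: len(dry)]             # convolve, trim back to original length
wet *= np.max(np.abs(dry)) / np.max(np.abs(wet))   # rough normalisation to avoid clipping

mix = 0.7 * dry + 0.3 * wet                        # simple dry/wet blend
wavfile.write("dry_with_reverb.wav", rate, mix.astype(np.int16))
```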


            Pros and Cons

            Pros

            Some of the pros of this library are:

            • High-quality sound and performance: the library delivers a realistic and expressive sound of orchestral violins, with a wide dynamic range and a full-bodied tone. The samples are recorded in a beautiful church environment with multiple microphone positions, and the performance is smooth and natural, with realistic transitions and variations.
            • Flexible and versatile articulations and playing styles: you can choose from 10 different legato styles, short notes, tremolos, trills, pizzicatos, sordino, sul ponticello, sul tasto, Bartók snaps, col legno, harmonics, and more, as well as unique features such as Dynamic Bowing, Emotional Legato, and Ornamentation that add realism and expression to your performance.
            • Easy to use and intuitive interface: the custom browser with built-in articulation matrix gives access to all features and settings in one place, lets you load presets, adjust microphone positions, change tuning, and enable or disable effects, and lets you create, save, and reload your own matrices of playing styles assigned to key-switches or MIDI CCs.

            Cons

            Some of the cons of this library are:

            • Requires a lot of disk space and RAM: the library contains 19,518 samples of orchestral violins recorded in 24-bit/44.1 kHz quality, needs 22 GB of disk space (compressed from 32 GB) to install, and requires at least 4 GB of RAM (8 GB recommended) to run smoothly.
            • Expensive compared to other violin libraries: at $399 USD (excluding VAT) from the official website, it costs more than many other violin libraries on the market that offer similar or more features.

            Conclusion


            Summary of the main points


            In conclusion, 8Dio Adagio Violins Vol 1 V10 KONTAKT is a sample library for Kontakt VST/AU/AAX that contains 19,518 samples of orchestral violins recorded in a beautiful church environment. It features three main groups of master violinists: a Full Ensemble (11 players), a Chamber Ensemble (3 players), and a Solo Virtuoso. Its major selling points are no fewer than 10 different legato styles, each with up to three-times round robin; a custom browser with a built-in articulation matrix for creating your own sets of playing styles assigned to key-switches or MIDI CCs; and custom convolution, EQ and other Chaos FX for adding depth, realism, and movement to your sound. On the plus side it offers high-quality sound and performance, flexible and versatile articulations and playing styles, and an easy, intuitive interface; on the minus side it requires a lot of disk space and RAM and is expensive compared to other violin libraries.


            Recommendation and rating


            Based on this review, we would recommend this library to anyone looking for a realistic, expressive, and versatile violin library for music production. It can help you create beautiful melodies, harmonies, textures, and effects using the various legato styles, short notes, tremolos, trills, pizzicatos, sordino, sul ponticello, sul tasto, Bartók snaps, col legno, harmonics, and more, and you can customise your sound with the articulation matrix, key-switches, MIDI CCs, and the convolution, EQ and Chaos FX. Be aware, however, of its drawbacks: the high disk space and RAM requirements and the high price tag. If your system and budget can accommodate it, you will not regret it. We would rate this library 4.5 out of 5 stars.


            FAQs


            Here are some frequently asked questions about this library:

            • Q: How can I purchase this library?
            • A: You can purchase it from the official website of 8Dio Productions. You will need to create an account and log in to complete the purchase, and to use the 8Dio Downloader app to download the library files.
            • Q: How can I install this library?
            • A: You need Kontakt 5.8.1 Full Retail (or later) installed on your computer and at least 22 GB of free disk space (the library is compressed from 32 GB). Use the 8Dio Downloader app to download the library files, extract them to your desired location, then add the library folder to your Kontakt Library tab using the Add Library function.
            • Q: How can I update this library?
            • A: Use the 8Dio Downloader app: log in with your account, check for updates, then download and install the latest version of the library files.
            • Q: How can I get support for this library?
            • A: Contact the 8Dio Productions team by email or phone, or visit their website and check the FAQ section or the forum for more information.
            • Q: How can I get more sounds for this library?
            • A: You can purchase other libraries from the Adagio series, such as cellos, violas, and double basses, or other 8Dio Productions libraries that are compatible with this one, such as Anthology Strings or Century Strings.

            \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Ariana Grande Christmas And Chill Mp3 Download Fixedxmass.md b/spaces/tioseFevbu/cartoon-converter/scripts/Ariana Grande Christmas And Chill Mp3 Download Fixedxmass.md deleted file mode 100644 index cc59a98b4e5ff06c79eb2299aeb16e3b1c8dee66..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Ariana Grande Christmas And Chill Mp3 Download Fixedxmass.md +++ /dev/null @@ -1,34 +0,0 @@ -

            Ariana Grande Christmas And Chill Mp3 Downloadxmass: How to Enjoy the Holiday Season with the Pop Star's EP


            If you are looking for a way to spice up your Christmas playlist, you might want to check out Ariana Grande's Christmas and Chill EP. This six-track project was released in 2015 and features some of the singer's most festive and sensual songs. Whether you want to cuddle up with your partner, dance around the tree, or just relax by the fireplace, this EP has something for everyone.


            In this article, we will show you how to download Ariana Grande Christmas and Chill Mp3 Downloadxmass, a special version of the EP that includes a bonus track and some extra goodies. We will also give you some tips on how to enjoy the EP to the fullest and make your holiday season more fun and romantic.


            Ariana Grande Christmas And Chill Mp3 Downloadxmass


            Download >> https://urlcod.com/2uHvZu




            How to Download Ariana Grande Christmas and Chill Mp3 Downloadxmass


            Ariana Grande Christmas and Chill Mp3 Downloadxmass is a limited edition of the EP that was released in 2016 as a gift for her fans. It includes the original six tracks, plus a remix of "Wit It This Christmas" featuring Mac Miller, her boyfriend at the time. It also comes with a digital booklet that contains lyrics, photos, and a personal message from Ariana.


            To download Ariana Grande Christmas and Chill Mp3 Downloadxmass, you need to follow these steps:

            1. Go to https://arianagrande.com/christmas-and-chill-downloadxmass and enter your email address.
            2. You will receive a confirmation email with a link to download the EP. Click on the link and save the file to your device.
            3. Enjoy listening to Ariana Grande Christmas and Chill Mp3 Downloadxmass on your preferred music player.

            How to Enjoy Ariana Grande Christmas and Chill Mp3 Downloadxmass


            Ariana Grande Christmas and Chill Mp3 Downloadxmass is more than just an EP. It is a mood setter, a vibe enhancer, and a holiday treat. Here are some ways you can enjoy it:

            • Make it your soundtrack for a cozy night in with your partner. Light some candles, pour some wine, and play the EP on repeat. You will feel the chemistry between Ariana and Mac Miller on songs like "Wit It This Christmas" and "Winter Things", and you can sing along to the catchy hooks and harmonies on songs like "True Love" and "December".
            • Use it as background music for your holiday party. The EP has a mix of upbeat and chill songs that will suit any mood: dance to the groovy beats of "Intro" and "Not Just On Christmas", or relax to the soothing melodies of "Christmas & Chill" and "Santa Tell Me". It will keep your guests entertained and festive.
            • Listen to it while you wrap gifts, decorate your home, or bake cookies. The EP will make you feel more cheerful and creative as you prepare for the holiday season, and you can draw inspiration from Ariana's style and attitude on songs like "Santa Baby" and "Bad Idea".

            Ariana Grande Christmas and Chill Mp3 Downloadxmass is a must-have for any fan of the pop star or anyone who loves Christmas music. It is a unique and fun way to celebrate the holiday season with one of the most talented and popular singers of our time. Don't miss this opportunity to download Ariana Grande Christmas and Chill Mp3 Downloadxmass today!

            \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Edson Lopes Guitar Pdf 16 [EXCLUSIVE].md b/spaces/tioseFevbu/cartoon-converter/scripts/Edson Lopes Guitar Pdf 16 [EXCLUSIVE].md deleted file mode 100644 index 70eaba328f455dc4f3f8b451ebd450cc6c863488..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Edson Lopes Guitar Pdf 16 [EXCLUSIVE].md +++ /dev/null @@ -1,25 +0,0 @@ -

            Edson Lopes Guitar PDF 16: A Collection of Classical Guitar Pieces


            If you are looking for a new challenge in your classical guitar practice, you might want to check out Edson Lopes Guitar PDF 16. This is a collection of 16 pieces by the Brazilian guitarist and composer Edson Lopes, who is known for his virtuosic and expressive style. The pieces range from easy to advanced level, and cover various genres and periods, such as baroque, romantic, contemporary, and Brazilian music. You can find the PDF file on Edson Lopes' website, where you can also listen to his recordings and watch his videos.


            Some of the highlights of Edson Lopes Guitar PDF 16 are:


            edson lopes guitar pdf 16


            Download File ★★★ https://urlcod.com/2uHy2G



            • Choro Triste, a melancholic and rhythmic piece that showcases the Brazilian choro style.
            • Estudo em Mi Menor, a study in E minor that combines arpeggios, scales, and slurs.
            • Minueto em Sol Maior, a graceful and elegant minuet by J.S. Bach.
            • Recuerdos de la Alhambra, a famous tremolo piece by Francisco Tárrega that evokes the sound of water fountains in the Alhambra palace.
            • Sonata K. 322, a lively and playful sonata by Domenico Scarlatti.
            • Valsa-Choro, a beautiful and lyrical waltz-choro by Heitor Villa-Lobos.

            If you want to improve your classical guitar technique, repertoire, and musicality, Edson Lopes Guitar PDF 16 is a great resource to have. You can download it for free from Edson Lopes' website and start playing these wonderful pieces today.


            Edson Lopes is one of the most respected and prolific classical guitarists in Brazil. He started playing the guitar at the age of 10, and studied with renowned teachers such as Henrique Pinto, Abel Carlevaro, and Leo Brouwer. He has won several national and international competitions, and has performed in many countries around the world. He is also a composer and arranger, and has written more than 200 works for guitar solo, guitar duo, guitar and orchestra, and other instruments. He has recorded more than 20 albums, and has published several books of guitar music.


            Edson Lopes Guitar PDF 16 is one of his latest publications, and it reflects his passion for the classical guitar and its diverse repertoire. He has carefully selected and edited each piece, and has provided fingerings and performance notes to help the players. He has also recorded each piece and uploaded them on his YouTube channel, where you can watch him play with great skill and expression. You can also find other videos of him playing his own compositions and arrangements, as well as pieces by other composers.


            If you are a fan of classical guitar music, you should definitely check out Edson Lopes Guitar PDF 16 and his other works. You will be amazed by his talent and creativity, and inspired by his love for the guitar.


            Edson Lopes Guitar PDF 16 is not only a collection of classical guitar pieces, but also a tribute to some of the greatest composers and guitarists of all time. Edson Lopes has chosen pieces that represent different styles and periods of music history, and that showcase the versatility and beauty of the guitar. He has also included some pieces that are influenced by his own Brazilian culture and heritage, such as choro, samba, and bossa nova. By playing these pieces, you will not only enjoy the music, but also learn more about the history and culture behind them.


            Edson Lopes Guitar PDF 16 is suitable for intermediate to advanced level players, who are looking for new challenges and opportunities to grow as musicians. The pieces vary in difficulty and length, and require different skills and techniques, such as fingerstyle, tremolo, harmonics, rasgueado, and more. Edson Lopes has provided helpful tips and suggestions on how to practice and perform each piece, as well as how to interpret the musical expression and dynamics. He has also shared his own personal insights and experiences on playing these pieces, and how they have influenced his musical journey.


            Edson Lopes Guitar PDF 16 is a valuable and enjoyable resource for any classical guitar enthusiast. It is a testament to Edson Lopes' dedication and generosity as a guitarist, composer, teacher, and promoter of the classical guitar. You can download it for free from his website, where you can also find more information about him and his other works. You can also follow him on social media and subscribe to his YouTube channel, where you can watch him play these pieces and many others. You will be impressed by his mastery and artistry of the classical guitar, and motivated to improve your own playing.

            \ No newline at end of file diff --git a/spaces/tomofi/NDLOCR/cli/core/__init__.py b/spaces/tomofi/NDLOCR/cli/core/__init__.py deleted file mode 100644 index 7c4010657390ec78f46c298def82e1b092724032..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/cli/core/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2022, National Diet Library, Japan -# -# This software is released under the CC BY 4.0. -# https://creativecommons.org/licenses/by/4.0/ - - -from .inference import OcrInferencer - -__all__ = ['OcrInferencer'] diff --git a/spaces/ttt246/brain/Brain/src/__init__.py b/spaces/ttt246/brain/Brain/src/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/? ?son???? ?i?nj?m?r???j?n.md b/spaces/usbethFlerru/sovits-modelsV2/example/? ?son???? ?i?nj?m?r???j?n.md deleted file mode 100644 index 56ffe0538e939ebe25850d7f510e0280d7c35796..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/? ?son???? ?i?nj?m?r???j?n.md +++ /dev/null @@ -1,6 +0,0 @@ -

            ? ?son???? ?i?nj?m?r???j?n


            Download File https://urlcod.com/2uyVC1




            diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/engine/validator.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/engine/validator.py deleted file mode 100644 index f84c8d0b0fb7274454625b87111056f83a30963a..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/engine/validator.py +++ /dev/null @@ -1,276 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license -""" -Check a model's accuracy on a test or val split of a dataset - -Usage: - $ yolo mode=val model=yolov8n.pt data=coco128.yaml imgsz=640 - -Usage - formats: - $ yolo mode=val model=yolov8n.pt # PyTorch - yolov8n.torchscript # TorchScript - yolov8n.onnx # ONNX Runtime or OpenCV DNN with dnn=True - yolov8n_openvino_model # OpenVINO - yolov8n.engine # TensorRT - yolov8n.mlmodel # CoreML (macOS-only) - yolov8n_saved_model # TensorFlow SavedModel - yolov8n.pb # TensorFlow GraphDef - yolov8n.tflite # TensorFlow Lite - yolov8n_edgetpu.tflite # TensorFlow Edge TPU - yolov8n_paddle_model # PaddlePaddle -""" -import json -import time -from pathlib import Path - -import torch -from tqdm import tqdm - -from ultralytics.nn.autobackend import AutoBackend -from ultralytics.yolo.cfg import get_cfg -from ultralytics.yolo.data.utils import check_cls_dataset, check_det_dataset -from ultralytics.yolo.utils import DEFAULT_CFG, LOGGER, RANK, SETTINGS, TQDM_BAR_FORMAT, callbacks, colorstr, emojis -from ultralytics.yolo.utils.checks import check_imgsz -from ultralytics.yolo.utils.files import increment_path -from ultralytics.yolo.utils.ops import Profile -from ultralytics.yolo.utils.torch_utils import de_parallel, select_device, smart_inference_mode - - -class BaseValidator: - """ - BaseValidator - - A base class for creating validators. - - Attributes: - dataloader (DataLoader): Dataloader to use for validation. - pbar (tqdm): Progress bar to update during validation. - args (SimpleNamespace): Configuration for the validator. - model (nn.Module): Model to validate. - data (dict): Data dictionary. - device (torch.device): Device to use for validation. - batch_i (int): Current batch index. - training (bool): Whether the model is in training mode. - speed (float): Batch processing speed in seconds. - jdict (dict): Dictionary to store validation results. - save_dir (Path): Directory to save results. - """ - - def __init__(self, dataloader=None, save_dir=None, pbar=None, args=None, _callbacks=None): - """ - Initializes a BaseValidator instance. - - Args: - dataloader (torch.utils.data.DataLoader): Dataloader to be used for validation. - save_dir (Path): Directory to save results. - pbar (tqdm.tqdm): Progress bar for displaying progress. - args (SimpleNamespace): Configuration for the validator. 
- """ - self.dataloader = dataloader - self.pbar = pbar - self.args = args or get_cfg(DEFAULT_CFG) - self.model = None - self.data = None - self.device = None - self.batch_i = None - self.training = True - self.speed = {'preprocess': 0.0, 'inference': 0.0, 'loss': 0.0, 'postprocess': 0.0} - self.jdict = None - - project = self.args.project or Path(SETTINGS['runs_dir']) / self.args.task - name = self.args.name or f'{self.args.mode}' - self.save_dir = save_dir or increment_path(Path(project) / name, - exist_ok=self.args.exist_ok if RANK in (-1, 0) else True) - (self.save_dir / 'labels' if self.args.save_txt else self.save_dir).mkdir(parents=True, exist_ok=True) - - if self.args.conf is None: - self.args.conf = 0.001 # default conf=0.001 - - self.plots = {} - self.callbacks = _callbacks or callbacks.get_default_callbacks() - - @smart_inference_mode() - def __call__(self, trainer=None, model=None): - """ - Supports validation of a pre-trained model if passed or a model being trained - if trainer is passed (trainer gets priority). - """ - self.training = trainer is not None - if self.training: - self.device = trainer.device - self.data = trainer.data - model = trainer.ema.ema or trainer.model - self.args.half = self.device.type != 'cpu' # force FP16 val during training - model = model.half() if self.args.half else model.float() - self.model = model - self.loss = torch.zeros_like(trainer.loss_items, device=trainer.device) - self.args.plots = trainer.stopper.possible_stop or (trainer.epoch == trainer.epochs - 1) - model.eval() - else: - callbacks.add_integration_callbacks(self) - self.run_callbacks('on_val_start') - assert model is not None, 'Either trainer or model is needed for validation' - self.device = select_device(self.args.device, self.args.batch) - self.args.half &= self.device.type != 'cpu' - model = AutoBackend(model, device=self.device, dnn=self.args.dnn, data=self.args.data, fp16=self.args.half) - self.model = model - stride, pt, jit, engine = model.stride, model.pt, model.jit, model.engine - imgsz = check_imgsz(self.args.imgsz, stride=stride) - if engine: - self.args.batch = model.batch_size - else: - self.device = model.device - if not pt and not jit: - self.args.batch = 1 # export.py models default to batch-size 1 - LOGGER.info(f'Forcing batch=1 square inference (1,3,{imgsz},{imgsz}) for non-PyTorch models') - - if isinstance(self.args.data, str) and self.args.data.endswith('.yaml'): - self.data = check_det_dataset(self.args.data) - elif self.args.task == 'classify': - self.data = check_cls_dataset(self.args.data, split=self.args.split) - else: - raise FileNotFoundError(emojis(f"Dataset '{self.args.data}' for task={self.args.task} not found ❌")) - - if self.device.type == 'cpu': - self.args.workers = 0 # faster CPU val as time dominated by inference, not dataloading - if not pt: - self.args.rect = False - self.dataloader = self.dataloader or self.get_dataloader(self.data.get(self.args.split), self.args.batch) - - model.eval() - model.warmup(imgsz=(1 if pt else self.args.batch, 3, imgsz, imgsz)) # warmup - - dt = Profile(), Profile(), Profile(), Profile() - n_batches = len(self.dataloader) - desc = self.get_desc() - # NOTE: keeping `not self.training` in tqdm will eliminate pbar after segmentation evaluation during training, - # which may affect classification task since this arg is in yolov5/classify/val.py. 
- # bar = tqdm(self.dataloader, desc, n_batches, not self.training, bar_format=TQDM_BAR_FORMAT) - bar = tqdm(self.dataloader, desc, n_batches, bar_format=TQDM_BAR_FORMAT) - self.init_metrics(de_parallel(model)) - self.jdict = [] # empty before each val - for batch_i, batch in enumerate(bar): - self.run_callbacks('on_val_batch_start') - self.batch_i = batch_i - # Preprocess - with dt[0]: - batch = self.preprocess(batch) - - # Inference - with dt[1]: - preds = model(batch['img'], augment=self.args.augment) - - # Loss - with dt[2]: - if self.training: - self.loss += model.loss(batch, preds)[1] - - # Postprocess - with dt[3]: - preds = self.postprocess(preds) - - self.update_metrics(preds, batch) - if self.args.plots and batch_i < 3: - self.plot_val_samples(batch, batch_i) - self.plot_predictions(batch, preds, batch_i) - - self.run_callbacks('on_val_batch_end') - stats = self.get_stats() - self.check_stats(stats) - self.speed = dict(zip(self.speed.keys(), (x.t / len(self.dataloader.dataset) * 1E3 for x in dt))) - self.finalize_metrics() - self.print_results() - self.run_callbacks('on_val_end') - if self.training: - model.float() - results = {**stats, **trainer.label_loss_items(self.loss.cpu() / len(self.dataloader), prefix='val')} - return {k: round(float(v), 5) for k, v in results.items()} # return results as 5 decimal place floats - else: - LOGGER.info('Speed: %.1fms preprocess, %.1fms inference, %.1fms loss, %.1fms postprocess per image' % - tuple(self.speed.values())) - if self.args.save_json and self.jdict: - with open(str(self.save_dir / 'predictions.json'), 'w') as f: - LOGGER.info(f'Saving {f.name}...') - json.dump(self.jdict, f) # flatten and save - stats = self.eval_json(stats) # update stats - if self.args.plots or self.args.save_json: - LOGGER.info(f"Results saved to {colorstr('bold', self.save_dir)}") - return stats - - def add_callback(self, event: str, callback): - """Appends the given callback.""" - self.callbacks[event].append(callback) - - def run_callbacks(self, event: str): - """Runs all callbacks associated with a specified event.""" - for callback in self.callbacks.get(event, []): - callback(self) - - def get_dataloader(self, dataset_path, batch_size): - """Get data loader from dataset path and batch size.""" - raise NotImplementedError('get_dataloader function not implemented for this validator') - - def build_dataset(self, img_path): - """Build dataset""" - raise NotImplementedError('build_dataset function not implemented in validator') - - def preprocess(self, batch): - """Preprocesses an input batch.""" - return batch - - def postprocess(self, preds): - """Describes and summarizes the purpose of 'postprocess()' but no details mentioned.""" - return preds - - def init_metrics(self, model): - """Initialize performance metrics for the YOLO model.""" - pass - - def update_metrics(self, preds, batch): - """Updates metrics based on predictions and batch.""" - pass - - def finalize_metrics(self, *args, **kwargs): - """Finalizes and returns all metrics.""" - pass - - def get_stats(self): - """Returns statistics about the model's performance.""" - return {} - - def check_stats(self, stats): - """Checks statistics.""" - pass - - def print_results(self): - """Prints the results of the model's predictions.""" - pass - - def get_desc(self): - """Get description of the YOLO model.""" - pass - - @property - def metric_keys(self): - """Returns the metric keys used in YOLO training/validation.""" - return [] - - def on_plot(self, name, data=None): - """Registers plots (e.g. 
to be consumed in callbacks)""" - self.plots[name] = {'data': data, 'timestamp': time.time()} - - # TODO: may need to put these following functions into callback - def plot_val_samples(self, batch, ni): - """Plots validation samples during training.""" - pass - - def plot_predictions(self, batch, preds, ni): - """Plots YOLO model predictions on batch images.""" - pass - - def pred_to_json(self, preds, batch): - """Convert predictions to JSON format.""" - pass - - def eval_json(self, stats): - """Evaluate and return JSON format of prediction statistics.""" - pass diff --git a/spaces/vanessa9178/anime-anything-v4.0/README.md b/spaces/vanessa9178/anime-anything-v4.0/README.md deleted file mode 100644 index 0be167982677f57c7c7372dfa307547869ff0b49..0000000000000000000000000000000000000000 --- a/spaces/vanessa9178/anime-anything-v4.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Anime Anything V4.0 -emoji: 🏃 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/models/utils/drop.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/models/utils/drop.py deleted file mode 100644 index 4520b0ff407d2a95a864086bdbca0065f222aa63..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/models/utils/drop.py +++ /dev/null @@ -1,31 +0,0 @@ -"""Modified from https://github.com/rwightman/pytorch-image- -models/blob/master/timm/models/layers/drop.py.""" - -import torch -from torch import nn - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - Args: - drop_prob (float): Drop rate for paths of model. Dropout rate has - to be between 0 and 1. Default: 0. - """ - - def __init__(self, drop_prob=0.): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - self.keep_prob = 1 - drop_prob - - def forward(self, x): - if self.drop_prob == 0. or not self.training: - return x - shape = (x.shape[0], ) + (1, ) * ( - x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = self.keep_prob + torch.rand( - shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(self.keep_prob) * random_tensor - return output diff --git a/spaces/weiren119/AudiogramDigitization/src/digitizer/digitization.py b/spaces/weiren119/AudiogramDigitization/src/digitizer/digitization.py deleted file mode 100644 index 502c09763b9723c99f2a25c250ed7e1c64c0e801..0000000000000000000000000000000000000000 --- a/spaces/weiren119/AudiogramDigitization/src/digitizer/digitization.py +++ /dev/null @@ -1,414 +0,0 @@ -#!/usr/bin/env python3 -""" -Copyright (c) 2020, Carleton University Biomedical Informatics Collaboratory - -This source code is licensed under the MIT license found in the -LICENSE file in the root directory of this source tree. 
-""" - -import pathlib -import json -import os -import subprocess as sp -import tempfile -from typing import List, Callable - -from tqdm import tqdm -import numpy as np - -from interfaces import AudiogramDict, AudiogramAnnotationDict, ThresholdDict -from digitizer.report_components.grid import Grid -from digitizer.report_components.label import Label -from digitizer.report_components.symbol import Symbol -from digitizer.report_components.report import Report -import utils.audiology as Audiology -from utils.geometry import compute_rotation_angle, apply_rotation - -DIR = os.path.join(pathlib.Path(__file__).parent.absolute(), "..") # current directory - -def detect_audiograms(filepath: str, weights: str, device: str = "cpu") -> List[AudiogramDict]: - """Runs the audiogram detector. - - The detector is run as a subprocess. - - Parameters - ---------- - filepath : str - Path to the image on which the detector is to be run. - weights : str - Path to the file holding the weights of the neural network (detector). - device : str - "cpu" or "gpu" - - Returns - ------- - List[AudiogramDict] - The AudiogramDict corresponding to the audiograms detected in the report. - """ - subprocess = sp.Popen([ - "python3", - f"{os.path.join(DIR, 'digitizer/yolov5/detect_audiograms.py')}", - "--source", f"{filepath}", - "--weights", weights, - "--device", device - ], stdout=sp.PIPE) # TODO timeout should be an environment variable - output = subprocess.stdout.read().decode("utf-8") - audiograms = json.loads(output.split("$$$")[1]) - return audiograms - -def detect_labels(filepath: str, weights: str, audiogram_coordinates: dict, correction_angle: float, device: str = "cpu") -> List[Label]: - """Runs the label detector. - - The detector is run as a subprocess. - - Parameters - ---------- - filepath : str - Path to the image on which the detector is to be run. - audiogram_coordinates: dict - The coordinates of the audiogram { "x": int, "y": int } needed to convert the label locations - with respect to the top-left corner of the bounding audiogram to relative to the top-left corner - of the report. - correction_angle: float - The correction angle in degrees that was applied to the audiogram, so that it can be reversed to - get the coordinates of the label with respect to the top-left corner of the original unrotated report. - weights : str - Path to the file holding the weights of the neural network (detector). - device : str - "cpu" or "gpu" - - Returns - ------- - List[Label] - A list of Label objects (NOT LabelDict). - """ - subprocess = sp.Popen([ - "python3", - os.path.join(DIR, "digitizer/yolov5/detect_labels.py"), - "--source", f"{filepath}", - "--weights", weights, - "--device", device - ], stdout=sp.PIPE) - output = subprocess.stdout.read().decode("utf-8") - parsed = json.loads(output.split("$$$")[1]) - label_dicts = parsed - labels = [Label(label, audiogram_coordinates, correction_angle) for label in parsed] - return labels - -def detect_symbols(filepath: str, weights: str, audiogram_coordinates: dict, correction_angle: float, device: str = "cpu") -> List[Symbol]: - """Runs the symbol detector. - - The detector is run as a subprocess. - - Parameters - ---------- - filepath : str - Path to the image on which the detector is to be run. - audiogram_coordinates: dict - The coordinates of the audiogram { "x": int, "y": int } needed to convert the label locations - with respect to the top-left corner of the bounding audiogram to relative to the top-left corner - of the report. 
- correction_angle: float - The correction angle in degrees that was applied to the audiogram, so that it can be reversed to - get the coordinates of the label with respect to the top-left corner of the original unrotated report. - weights : str - Path to the file holding the weights of the neural network (detector). - device : str - "cpu" or "gpu" - - Returns - ------- - List[Label] - A list of Symbol objects (NOT SymbolDict). - """ - subprocess = sp.Popen([ - "python3", - os.path.join(DIR, "digitizer/yolov5/detect_symbols.py"), - "--source", filepath, - "--weights", weights, - "--device", device - ], stdout=sp.PIPE) - - output = json.loads(subprocess.stdout.read().decode("utf-8").split("$$$")[1]) - symbols = [Symbol(detection, audiogram_coordinates, correction_angle) for detection in output] - return symbols - -def detect_components(filepath: str, gpu: bool = False) -> List: - """Invokes the object detectors. - - Parameters - ---------- - filepath : str - Path to the image. - gpu : bool - Whether the GPU should be used (default: False). - - Returns - ------- - List - A list (of length 0, 1 or 2) of the form - [ - { "audiogram": AudiogramDict, "labels": List[Label], "symbols": List[Symbol] }, # plot 1 - { "audiogram": AudiogramDict, "labels": List[Label], "symbols": List[Symbol] } # plot 2 - ] - """ - - components = [] - - # Detect audiograms within the report - audiogram_model_weights_path = os.path.join(DIR, "..", "models/audiograms/latest/weights/best.pt") - audiograms = detect_audiograms(f"{filepath}", audiogram_model_weights_path) - - # If no audiogram is detected, return... - if len(audiograms) == 0: - return components - - # Iterate through every audiogram in the report - for i, audiogram in enumerate(audiograms): - components.append({}) - - # Load the report - report = Report(filename=filepath) - - # Generate a cropped version of the report around the detected audiogram - report = report.crop( - audiogram["boundingBox"]["x"], - audiogram["boundingBox"]["y"], - audiogram["boundingBox"]["x"] + audiogram["boundingBox"]["width"], - audiogram["boundingBox"]["y"] + audiogram["boundingBox"]["height"] - ) - - # Create a temporary file - cropped_file = tempfile.NamedTemporaryFile(suffix=".jpg") - - # Correct for rotation - lines = report.detect_lines(threshold=200) - perpendicular_lines = [ - line for line in lines - if line.has_a_perpendicular_line(lines) - and (abs(line.get_angle() - 90) < 10 - or abs(line.get_angle()) < 10) - ] - correction_angle = compute_rotation_angle(perpendicular_lines) - audiogram["correctionAngle"] = correction_angle - report = report.rotate(correction_angle) - report.save(cropped_file.name) - - audiogram_coordinates = { - "x": audiogram["boundingBox"]["x"], - "y": audiogram["boundingBox"]["y"] - } - - components[i]["audiogram"] = audiogram - - labels_model_weights_path = os.path.join(DIR, "..", "models/labels/latest/weights/best.pt") - components[i]["labels"] = detect_labels(cropped_file.name, labels_model_weights_path, audiogram_coordinates, correction_angle) - symbols_model_weights_path = os.path.join(DIR, "..", "models/symbols/latest/weights/best.pt") - components[i]["symbols"] = detect_symbols(cropped_file.name, symbols_model_weights_path, audiogram_coordinates, correction_angle) - - return components - -def generate_partial_annotation(filepath: str, gpu: bool = False) -> List[AudiogramAnnotationDict]: - """Generates a seed annotation to be completed in the nihl portal. - - It is ``partial`` because it does not locate the corners of the audiogram. 
- - Parameters - ---------- - filepath : str - Path to the file for which an initial annotation is to b - gpu : bool - Whether the gpu should be used. - - Returns - ------- - List[AudiogramAnnotationDict] - An Annotation dict. - """ - components = detect_components(filepath, gpu=gpu) - audiograms = [] - for i in range(len(components)): - audiogram = components[i]["audiogram"] - audiogram["labels"] = [label.to_dict() for label in components[i]["labels"]] - audiogram["symbols"] = [symbol.to_dict() for symbol in components[i]["symbols"]] - audiogram["corners"] = [] # these are not located by the algorithm - audiograms.append(audiogram) - return audiograms - -def extract_thresholds(filepath: str, gpu: bool = False) -> List[ThresholdDict]: - """Extracts the thresholds from the report. - - parameters - ---------- - filepath : str - Path to the file for which an initial annotation is to b - gpu : bool - Whether the gpu should be used. - - Returns - ------- - list[ThresholdDict] - A list of thresholds. - """ - components = detect_components(filepath, gpu=gpu) - - thresholds = [] - - # For each audiogram, extract the thresholds and append them to the - # thresholds list - for i in range(len(components)): - audiogram = components[i]["audiogram"] - labels = components[i]["labels"] - symbols = components[i]["symbols"] - - report = Report(filename=filepath) - report = report.crop( - audiogram["boundingBox"]["x"], - audiogram["boundingBox"]["y"], - audiogram["boundingBox"]["x"] + audiogram["boundingBox"]["width"], - audiogram["boundingBox"]["y"] + audiogram["boundingBox"]["height"] - ) - report = report.rotate(audiogram["correctionAngle"]) - - try: - grid = Grid(report, labels) - except Exception as e: - continue - - thresholds += [{ - "ear": symbol.ear, - "conduction": symbol.conduction, - "masking": symbol.masking, - "measurementType": Audiology.stringify_measurement(symbol.to_dict()), - "frequency": grid.get_snapped_frequency(symbol), - "threshold": grid.get_snapped_threshold(symbol), - "response": True # IMPORTANT: assume that a response was obtain for measurements - } - for symbol in symbols - ] - return thresholds - -def get_correction_angle(corners: List[dict]) -> float: - """Computes the rotation angle that must be applied based on - corner coordinates to get an unrotated audiogram. - - Parameters - ---------- - corners : List[dict] - A list of corners. - - Returns - ------- - float - The rotation angle that must be applied to correct for the rotation - of the audiogram. - """ - # sort the corners - corners = sorted(corners, key=lambda c: c["y"]) - top_corners = sorted(corners[2:], key=lambda c: c["x"]) - bottom_corners = sorted(corners[0:2], key=lambda c: c["x"]) - - # Find the rotation angle based on the top_corners 2 corners - dx1 = top_corners[1]["x"] - top_corners[0]["x"] - dy1 = top_corners[1]["y"] - top_corners[0]["y"] - angle1 = np.arcsin(abs(dy1)/abs(dx1)) - - # Repeat for the bottom_corners angles - dx2 = bottom_corners[1]["x"] - bottom_corners[0]["x"] - dy2 = bottom_corners[1]["y"] - bottom_corners[0]["y"] - angle2 = np.arcsin(abs(dy2)/abs(dx2)) - - return np.sign(dy1) * np.mean([angle1, angle2]) - -def get_conversion_maps(corners: List[dict]) -> List[Callable]: - """Computes the functions that map pixel coordinates to frequency-threshold coordinates - and vice versa. - - Parameters - ---------- - corners : List[dict] - The audiogram corners. - - Returns - ------- - List[Callable] - A list of lambda functions. These functions all accept a single float argument. 
- They are in the following order. - 1. pixel->frequency - 2. pixel->threshold - 3. frequency->pixel - 4. threshold->pixel - """ - - # For x axis - y_sorted_corners = sorted(corners, key=lambda c: c["y"]) - top_corners = sorted(y_sorted_corners[0:2], key=lambda c: c["x"]) - o_max = Audiology.frequency_to_octave(top_corners[1]["frequency"]) # max octave - x_max = top_corners[1]["x"] # max pixel value - o_min = Audiology.frequency_to_octave(top_corners[0]["frequency"]) # min octave - x_min = top_corners[0]["x"] - frequency_map = lambda p: Audiology.octave_to_frequency(o_min + (o_max - o_min)*(p - x_min)/(x_max - x_min)) - inverse_frequency_map = lambda f: x_min + (Audiology.frequency_to_octave(f) - o_min)*(x_max - x_min)/(o_max - o_min) - - # For y axis - x_sorted_corners = sorted(corners, key=lambda c: c["x"]) - left_corners = sorted(x_sorted_corners[0:2], key=lambda c: c["y"]) - t_max = left_corners[1]["threshold"] # max threshold - y_max = left_corners[1]["y"] # max pixel value - t_min = left_corners[0]["threshold"] - y_min = left_corners[0]["y"] - threshold_map = lambda p: t_min + (t_max - t_min)*(p - y_min)/(y_max - y_min) - inverse_threshold_map = lambda t: y_min + (t - t_min)*(y_max - y_min)/(t_max - t_min) - - return [frequency_map, threshold_map, inverse_frequency_map, inverse_threshold_map] - -def annotation_to_thresholds(audiograms: dict) -> List[ThresholdDict]: - """Extracts the thresholds from an annotation. - - Parameters - ---------- - audiograms : dict - An annotation. - - Returns - ------- - List[ThresholdDict] - A list of thresholds - """ - combined_thresholds = [] - for audiogram in audiograms: - correction_angle = get_correction_angle(audiogram["corners"]) - corners = [apply_rotation(corner, correction_angle) for corner in audiogram["corners"]] - frequency_map, threshold_map, inverse_frequency_map, inverse_threshold_map = get_conversion_maps(corners) - - thresholds: List[ThresholdDict] = [] - for symbol in audiogram["symbols"]: - symbol_center = { - "x": symbol["boundingBox"]["x"] + symbol["boundingBox"]["width"] / 2, - "y": symbol["boundingBox"]["y"] + symbol["boundingBox"]["height"] / 2, - } - symbol = { **symbol, "boundingBox": symbol_center } - new_symbol = {**symbol, "boundingBox": apply_rotation(symbol["boundingBox"], correction_angle) } - bounding_box = new_symbol["boundingBox"] - ear = "left" if "left" in new_symbol["measurementType"].lower() else "right" - conduction = "air" if "air" in new_symbol["measurementType"].lower() else "bone" - masking = False if "unmasked" in new_symbol["measurementType"].lower() else True - if conduction == "air": - frequency = Audiology.round_frequency(frequency_map(bounding_box["x"])) - else: - frequency = Audiology.round_frequency_bone(frequency_map(bounding_box["x"]), ear) - threshold = Audiology.round_threshold(threshold_map(bounding_box["y"])) - - thresholds.append({ - "ear": ear, - "conduction": conduction, - "masking": masking, - "frequency": frequency, - "threshold": threshold, - "response": True, # IMPORTANT: assume that a response was measured for threshold - "measurementType": f"{conduction}_{'MASKED' if masking else 'UNMASKED'}_{ear}".upper() - }) - - combined_thresholds += thresholds - - return combined_thresholds diff --git a/spaces/xcchen/vits-uma-genshin-honkai/text/symbols.py b/spaces/xcchen/vits-uma-genshin-honkai/text/symbols.py deleted file mode 100644 index edfbd24247be8c757275ce80b9ec27a0ffa808f3..0000000000000000000000000000000000000000 --- a/spaces/xcchen/vits-uma-genshin-honkai/text/symbols.py +++ /dev/null 
@@ -1,39 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. -''' - -'''# japanese_cleaners -_pad = '_' -_punctuation = ',.!?-' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ ' -''' - -'''# japanese_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ ' -''' - -'''# korean_cleaners -_pad = '_' -_punctuation = ',.!?…~' -_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ ' -''' - -'''# chinese_cleaners -_pad = '_' -_punctuation = ',。!?—…' -_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ ' -''' - -# zh_ja_mixture_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ ' - - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") \ No newline at end of file diff --git a/spaces/xfys/yolov5_tracking/yolov5/export.py b/spaces/xfys/yolov5_tracking/yolov5/export.py deleted file mode 100644 index 067d9a22f292bf53004f9470f0198977a6880fab..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/yolov5/export.py +++ /dev/null @@ -1,818 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license -""" -Export a YOLOv5 PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit - -Format | `export.py --include` | Model ---- | --- | --- -PyTorch | - | yolov5s.pt -TorchScript | `torchscript` | yolov5s.torchscript -ONNX | `onnx` | yolov5s.onnx -OpenVINO | `openvino` | yolov5s_openvino_model/ -TensorRT | `engine` | yolov5s.engine -CoreML | `coreml` | yolov5s.mlmodel -TensorFlow SavedModel | `saved_model` | yolov5s_saved_model/ -TensorFlow GraphDef | `pb` | yolov5s.pb -TensorFlow Lite | `tflite` | yolov5s.tflite -TensorFlow Edge TPU | `edgetpu` | yolov5s_edgetpu.tflite -TensorFlow.js | `tfjs` | yolov5s_web_model/ -PaddlePaddle | `paddle` | yolov5s_paddle_model/ - -Requirements: - $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime openvino-dev tensorflow-cpu # CPU - $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime-gpu openvino-dev tensorflow # GPU - -Usage: - $ python export.py --weights yolov5s.pt --include torchscript onnx openvino engine coreml tflite ... - -Inference: - $ python detect.py --weights yolov5s.pt # PyTorch - yolov5s.torchscript # TorchScript - yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn - yolov5s_openvino_model # OpenVINO - yolov5s.engine # TensorRT - yolov5s.mlmodel # CoreML (macOS-only) - yolov5s_saved_model # TensorFlow SavedModel - yolov5s.pb # TensorFlow GraphDef - yolov5s.tflite # TensorFlow Lite - yolov5s_edgetpu.tflite # TensorFlow Edge TPU - yolov5s_paddle_model # PaddlePaddle - -TensorFlow.js: - $ cd .. 
&& git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example - $ npm install - $ ln -s ../../yolov5/yolov5s_web_model public/yolov5s_web_model - $ npm start -""" - -import argparse -import contextlib -import json -import os -import platform -import re -import subprocess -import sys -import time -import warnings -from pathlib import Path - -import pandas as pd -import torch -from torch.utils.mobile_optimizer import optimize_for_mobile - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[0] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -if platform.system() != 'Windows': - ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -from yolov5.models.experimental import attempt_load -from yolov5.models.yolo import ClassificationModel, Detect, DetectionModel, SegmentationModel -from yolov5.utils.dataloaders import LoadImages -from yolov5.utils.general import (LOGGER, Profile, check_dataset, check_img_size, check_requirements, check_version, - check_yaml, colorstr, file_size, get_default_args, print_args, url2file, yaml_save) -from yolov5.utils.torch_utils import select_device, smart_inference_mode - -MACOS = platform.system() == 'Darwin' # macOS environment - - -class iOSModel(torch.nn.Module): - - def __init__(self, model, im): - super().__init__() - b, c, h, w = im.shape # batch, channel, height, width - self.model = model - self.nc = model.nc # number of classes - if w == h: - self.normalize = 1. / w - else: - self.normalize = torch.tensor([1. / w, 1. / h, 1. / w, 1. / h]) # broadcast (slower, smaller) - # np = model(im)[0].shape[1] # number of points - # self.normalize = torch.tensor([1. / w, 1. / h, 1. / w, 1. / h]).expand(np, 4) # explicit (faster, larger) - - def forward(self, x): - xywh, conf, cls = self.model(x)[0].squeeze().split((4, 1, self.nc), 1) - return cls * conf, xywh * self.normalize # confidence (3780, 80), coordinates (3780, 4) - - -def export_formats(): - # YOLOv5 export formats - x = [ - ['PyTorch', '-', '.pt', True, True], - ['TorchScript', 'torchscript', '.torchscript', True, True], - ['ONNX', 'onnx', '.onnx', True, True], - ['OpenVINO', 'openvino', '_openvino_model', True, False], - ['TensorRT', 'engine', '.engine', False, True], - ['CoreML', 'coreml', '.mlmodel', True, False], - ['TensorFlow SavedModel', 'saved_model', '_saved_model', True, True], - ['TensorFlow GraphDef', 'pb', '.pb', True, True], - ['TensorFlow Lite', 'tflite', '.tflite', True, False], - ['TensorFlow Edge TPU', 'edgetpu', '_edgetpu.tflite', False, False], - ['TensorFlow.js', 'tfjs', '_web_model', False, False], - ['PaddlePaddle', 'paddle', '_paddle_model', True, True],] - return pd.DataFrame(x, columns=['Format', 'Argument', 'Suffix', 'CPU', 'GPU']) - - -def try_export(inner_func): - # YOLOv5 export decorator, i..e @try_export - inner_args = get_default_args(inner_func) - - def outer_func(*args, **kwargs): - prefix = inner_args['prefix'] - try: - with Profile() as dt: - f, model = inner_func(*args, **kwargs) - LOGGER.info(f'{prefix} export success ✅ {dt.t:.1f}s, saved as {f} ({file_size(f):.1f} MB)') - return f, model - except Exception as e: - LOGGER.info(f'{prefix} export failure ❌ {dt.t:.1f}s: {e}') - return None, None - - return outer_func - - -@try_export -def export_torchscript(model, im, file, optimize, prefix=colorstr('TorchScript:')): - # YOLOv5 TorchScript model export - LOGGER.info(f'\n{prefix} starting export with torch {torch.__version__}...') - f = file.with_suffix('.torchscript') - - ts = 
torch.jit.trace(model, im, strict=False) - d = {'shape': im.shape, 'stride': int(max(model.stride)), 'names': model.names} - extra_files = {'config.txt': json.dumps(d)} # torch._C.ExtraFilesMap() - if optimize: # https://pytorch.org/tutorials/recipes/mobile_interpreter.html - optimize_for_mobile(ts)._save_for_lite_interpreter(str(f), _extra_files=extra_files) - else: - ts.save(str(f), _extra_files=extra_files) - return f, None - - -@try_export -def export_onnx(model, im, file, opset, dynamic, simplify, prefix=colorstr('ONNX:')): - # YOLOv5 ONNX export - check_requirements('onnx>=1.12.0') - import onnx - - LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...') - f = file.with_suffix('.onnx') - - output_names = ['output0', 'output1'] if isinstance(model, SegmentationModel) else ['output0'] - if dynamic: - dynamic = {'images': {0: 'batch', 2: 'height', 3: 'width'}} # shape(1,3,640,640) - if isinstance(model, SegmentationModel): - dynamic['output0'] = {0: 'batch', 1: 'anchors'} # shape(1,25200,85) - dynamic['output1'] = {0: 'batch', 2: 'mask_height', 3: 'mask_width'} # shape(1,32,160,160) - elif isinstance(model, DetectionModel): - dynamic['output0'] = {0: 'batch', 1: 'anchors'} # shape(1,25200,85) - - torch.onnx.export( - model.cpu() if dynamic else model, # --dynamic only compatible with cpu - im.cpu() if dynamic else im, - f, - verbose=False, - opset_version=opset, - do_constant_folding=True, # WARNING: DNN inference with torch>=1.12 may require do_constant_folding=False - input_names=['images'], - output_names=output_names, - dynamic_axes=dynamic or None) - - # Checks - model_onnx = onnx.load(f) # load onnx model - onnx.checker.check_model(model_onnx) # check onnx model - - # Metadata - d = {'stride': int(max(model.stride)), 'names': model.names} - for k, v in d.items(): - meta = model_onnx.metadata_props.add() - meta.key, meta.value = k, str(v) - onnx.save(model_onnx, f) - - # Simplify - if simplify: - try: - cuda = torch.cuda.is_available() - check_requirements(('onnxruntime-gpu' if cuda else 'onnxruntime', 'onnx-simplifier>=0.4.1')) - import onnxsim - - LOGGER.info(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...') - model_onnx, check = onnxsim.simplify(model_onnx) - assert check, 'assert check failed' - onnx.save(model_onnx, f) - except Exception as e: - LOGGER.info(f'{prefix} simplifier failure: {e}') - return f, model_onnx - - -@try_export -def export_openvino(file, metadata, half, prefix=colorstr('OpenVINO:')): - # YOLOv5 OpenVINO export - check_requirements('openvino-dev') # requires openvino-dev: https://pypi.org/project/openvino-dev/ - import openvino.inference_engine as ie - - LOGGER.info(f'\n{prefix} starting export with openvino {ie.__version__}...') - f = str(file).replace('.pt', f'_openvino_model{os.sep}') - - args = [ - 'mo', - '--input_model', - str(file.with_suffix('.onnx')), - '--output_dir', - f, - '--data_type', - ('FP16' if half else 'FP32'),] - subprocess.run(args, check=True, env=os.environ) # export - yaml_save(Path(f) / file.with_suffix('.yaml').name, metadata) # add metadata.yaml - return f, None - - -@try_export -def export_paddle(model, im, file, metadata, prefix=colorstr('PaddlePaddle:')): - # YOLOv5 Paddle export - check_requirements(('paddlepaddle', 'x2paddle')) - import x2paddle - from x2paddle.convert import pytorch2paddle - - LOGGER.info(f'\n{prefix} starting export with X2Paddle {x2paddle.__version__}...') - f = str(file).replace('.pt', f'_paddle_model{os.sep}') - - pytorch2paddle(module=model, save_dir=f, 
jit_type='trace', input_examples=[im]) # export - yaml_save(Path(f) / file.with_suffix('.yaml').name, metadata) # add metadata.yaml - return f, None - - -@try_export -def export_coreml(model, im, file, int8, half, nms, prefix=colorstr('CoreML:')): - # YOLOv5 CoreML export - check_requirements('coremltools') - import coremltools as ct - - LOGGER.info(f'\n{prefix} starting export with coremltools {ct.__version__}...') - f = file.with_suffix('.mlmodel') - - if nms: - model = iOSModel(model, im) - ts = torch.jit.trace(model, im, strict=False) # TorchScript model - ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])]) - bits, mode = (8, 'kmeans_lut') if int8 else (16, 'linear') if half else (32, None) - if bits < 32: - if MACOS: # quantization only supported on macOS - with warnings.catch_warnings(): - warnings.filterwarnings('ignore', category=DeprecationWarning) # suppress numpy==1.20 float warning - ct_model = ct.models.neural_network.quantization_utils.quantize_weights(ct_model, bits, mode) - else: - print(f'{prefix} quantization only supported on macOS, skipping...') - ct_model.save(f) - return f, ct_model - - -@try_export -def export_engine(model, im, file, half, dynamic, simplify, workspace=4, verbose=False, prefix=colorstr('TensorRT:')): - # YOLOv5 TensorRT export https://developer.nvidia.com/tensorrt - assert im.device.type != 'cpu', 'export running on CPU but must be on GPU, i.e. `python export.py --device 0`' - try: - import tensorrt as trt - except Exception: - if platform.system() == 'Linux': - check_requirements('nvidia-tensorrt', cmds='-U --index-url https://pypi.ngc.nvidia.com') - import tensorrt as trt - - if trt.__version__[0] == '7': # TensorRT 7 handling https://github.com/ultralytics/yolov5/issues/6012 - grid = model.model[-1].anchor_grid - model.model[-1].anchor_grid = [a[..., :1, :1, :] for a in grid] - export_onnx(model, im, file, 12, dynamic, simplify) # opset 12 - model.model[-1].anchor_grid = grid - else: # TensorRT >= 8 - check_version(trt.__version__, '8.0.0', hard=True) # require tensorrt>=8.0.0 - export_onnx(model, im, file, 12, dynamic, simplify) # opset 12 - onnx = file.with_suffix('.onnx') - - LOGGER.info(f'\n{prefix} starting export with TensorRT {trt.__version__}...') - assert onnx.exists(), f'failed to export ONNX file: {onnx}' - f = file.with_suffix('.engine') # TensorRT engine file - logger = trt.Logger(trt.Logger.INFO) - if verbose: - logger.min_severity = trt.Logger.Severity.VERBOSE - - builder = trt.Builder(logger) - config = builder.create_builder_config() - config.max_workspace_size = workspace * 1 << 30 - # config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace << 30) # fix TRT 8.4 deprecation notice - - flag = (1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)) - network = builder.create_network(flag) - parser = trt.OnnxParser(network, logger) - if not parser.parse_from_file(str(onnx)): - raise RuntimeError(f'failed to load ONNX file: {onnx}') - - inputs = [network.get_input(i) for i in range(network.num_inputs)] - outputs = [network.get_output(i) for i in range(network.num_outputs)] - for inp in inputs: - LOGGER.info(f'{prefix} input "{inp.name}" with shape{inp.shape} {inp.dtype}') - for out in outputs: - LOGGER.info(f'{prefix} output "{out.name}" with shape{out.shape} {out.dtype}') - - if dynamic: - if im.shape[0] <= 1: - LOGGER.warning(f'{prefix} WARNING ⚠️ --dynamic model requires maximum --batch-size argument') - profile = builder.create_optimization_profile() - for inp 
in inputs: - profile.set_shape(inp.name, (1, *im.shape[1:]), (max(1, im.shape[0] // 2), *im.shape[1:]), im.shape) - config.add_optimization_profile(profile) - - LOGGER.info(f'{prefix} building FP{16 if builder.platform_has_fast_fp16 and half else 32} engine as {f}') - if builder.platform_has_fast_fp16 and half: - config.set_flag(trt.BuilderFlag.FP16) - with builder.build_engine(network, config) as engine, open(f, 'wb') as t: - t.write(engine.serialize()) - return f, None - - -@try_export -def export_saved_model(model, - im, - file, - dynamic, - tf_nms=False, - agnostic_nms=False, - topk_per_class=100, - topk_all=100, - iou_thres=0.45, - conf_thres=0.25, - keras=False, - prefix=colorstr('TensorFlow SavedModel:')): - # YOLOv5 TensorFlow SavedModel export - try: - import tensorflow as tf - except Exception: - check_requirements(f"tensorflow{'' if torch.cuda.is_available() else '-macos' if MACOS else '-cpu'}") - import tensorflow as tf - from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2 - - from models.tf import TFModel - - LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...') - f = str(file).replace('.pt', '_saved_model') - batch_size, ch, *imgsz = list(im.shape) # BCHW - - tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz) - im = tf.zeros((batch_size, *imgsz, ch)) # BHWC order for TensorFlow - _ = tf_model.predict(im, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres) - inputs = tf.keras.Input(shape=(*imgsz, ch), batch_size=None if dynamic else batch_size) - outputs = tf_model.predict(inputs, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres) - keras_model = tf.keras.Model(inputs=inputs, outputs=outputs) - keras_model.trainable = False - keras_model.summary() - if keras: - keras_model.save(f, save_format='tf') - else: - spec = tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype) - m = tf.function(lambda x: keras_model(x)) # full model - m = m.get_concrete_function(spec) - frozen_func = convert_variables_to_constants_v2(m) - tfm = tf.Module() - tfm.__call__ = tf.function(lambda x: frozen_func(x)[:4] if tf_nms else frozen_func(x), [spec]) - tfm.__call__(im) - tf.saved_model.save(tfm, - f, - options=tf.saved_model.SaveOptions(experimental_custom_gradients=False) if check_version( - tf.__version__, '2.6') else tf.saved_model.SaveOptions()) - return f, keras_model - - -@try_export -def export_pb(keras_model, file, prefix=colorstr('TensorFlow GraphDef:')): - # YOLOv5 TensorFlow GraphDef *.pb export https://github.com/leimao/Frozen_Graph_TensorFlow - import tensorflow as tf - from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2 - - LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...') - f = file.with_suffix('.pb') - - m = tf.function(lambda x: keras_model(x)) # full model - m = m.get_concrete_function(tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype)) - frozen_func = convert_variables_to_constants_v2(m) - frozen_func.graph.as_graph_def() - tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir=str(f.parent), name=f.name, as_text=False) - return f, None - - -@try_export -def export_tflite(keras_model, im, file, int8, data, nms, agnostic_nms, prefix=colorstr('TensorFlow Lite:')): - # YOLOv5 TensorFlow Lite export - import tensorflow as tf - - LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...') - batch_size, ch, *imgsz = 
list(im.shape) # BCHW - f = str(file).replace('.pt', '-fp16.tflite') - - converter = tf.lite.TFLiteConverter.from_keras_model(keras_model) - converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS] - converter.target_spec.supported_types = [tf.float16] - converter.optimizations = [tf.lite.Optimize.DEFAULT] - if int8: - from models.tf import representative_dataset_gen - dataset = LoadImages(check_dataset(check_yaml(data))['train'], img_size=imgsz, auto=False) - converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib=100) - converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8] - converter.target_spec.supported_types = [] - converter.inference_input_type = tf.uint8 # or tf.int8 - converter.inference_output_type = tf.uint8 # or tf.int8 - converter.experimental_new_quantizer = True - f = str(file).replace('.pt', '-int8.tflite') - if nms or agnostic_nms: - converter.target_spec.supported_ops.append(tf.lite.OpsSet.SELECT_TF_OPS) - - tflite_model = converter.convert() - open(f, 'wb').write(tflite_model) - return f, None - - -@try_export -def export_edgetpu(file, prefix=colorstr('Edge TPU:')): - # YOLOv5 Edge TPU export https://coral.ai/docs/edgetpu/models-intro/ - cmd = 'edgetpu_compiler --version' - help_url = 'https://coral.ai/docs/edgetpu/compiler/' - assert platform.system() == 'Linux', f'export only supported on Linux. See {help_url}' - if subprocess.run(f'{cmd} > /dev/null 2>&1', shell=True).returncode != 0: - LOGGER.info(f'\n{prefix} export requires Edge TPU compiler. Attempting install from {help_url}') - sudo = subprocess.run('sudo --version >/dev/null', shell=True).returncode == 0 # sudo installed on system - for c in ( - 'curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -', - 'echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list', - 'sudo apt-get update', 'sudo apt-get install edgetpu-compiler'): - subprocess.run(c if sudo else c.replace('sudo ', ''), shell=True, check=True) - ver = subprocess.run(cmd, shell=True, capture_output=True, check=True).stdout.decode().split()[-1] - - LOGGER.info(f'\n{prefix} starting export with Edge TPU compiler {ver}...') - f = str(file).replace('.pt', '-int8_edgetpu.tflite') # Edge TPU model - f_tfl = str(file).replace('.pt', '-int8.tflite') # TFLite model - - subprocess.run([ - 'edgetpu_compiler', - '-s', - '-d', - '-k', - '10', - '--out_dir', - str(file.parent), - f_tfl,], check=True) - return f, None - - -@try_export -def export_tfjs(file, int8, prefix=colorstr('TensorFlow.js:')): - # YOLOv5 TensorFlow.js export - check_requirements('tensorflowjs') - import tensorflowjs as tfjs - - LOGGER.info(f'\n{prefix} starting export with tensorflowjs {tfjs.__version__}...') - f = str(file).replace('.pt', '_web_model') # js dir - f_pb = file.with_suffix('.pb') # *.pb path - f_json = f'{f}/model.json' # *.json path - - args = [ - 'tensorflowjs_converter', - '--input_format=tf_frozen_model', - '--quantize_uint8' if int8 else '', - '--output_node_names=Identity,Identity_1,Identity_2,Identity_3', - str(f_pb), - str(f),] - subprocess.run([arg for arg in args if arg], check=True) - - json = Path(f_json).read_text() - with open(f_json, 'w') as j: # sort JSON Identity_* in ascending order - subst = re.sub( - r'{"outputs": {"Identity.?.?": {"name": "Identity.?.?"}, ' - r'"Identity.?.?": {"name": "Identity.?.?"}, ' - r'"Identity.?.?": {"name": "Identity.?.?"}, ' - r'"Identity.?.?": {"name": 
"Identity.?.?"}}}', r'{"outputs": {"Identity": {"name": "Identity"}, ' - r'"Identity_1": {"name": "Identity_1"}, ' - r'"Identity_2": {"name": "Identity_2"}, ' - r'"Identity_3": {"name": "Identity_3"}}}', json) - j.write(subst) - return f, None - - -def add_tflite_metadata(file, metadata, num_outputs): - # Add metadata to *.tflite models per https://www.tensorflow.org/lite/models/convert/metadata - with contextlib.suppress(ImportError): - # check_requirements('tflite_support') - from tflite_support import flatbuffers - from tflite_support import metadata as _metadata - from tflite_support import metadata_schema_py_generated as _metadata_fb - - tmp_file = Path('/tmp/meta.txt') - with open(tmp_file, 'w') as meta_f: - meta_f.write(str(metadata)) - - model_meta = _metadata_fb.ModelMetadataT() - label_file = _metadata_fb.AssociatedFileT() - label_file.name = tmp_file.name - model_meta.associatedFiles = [label_file] - - subgraph = _metadata_fb.SubGraphMetadataT() - subgraph.inputTensorMetadata = [_metadata_fb.TensorMetadataT()] - subgraph.outputTensorMetadata = [_metadata_fb.TensorMetadataT()] * num_outputs - model_meta.subgraphMetadata = [subgraph] - - b = flatbuffers.Builder(0) - b.Finish(model_meta.Pack(b), _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER) - metadata_buf = b.Output() - - populator = _metadata.MetadataPopulator.with_model_file(file) - populator.load_metadata_buffer(metadata_buf) - populator.load_associated_files([str(tmp_file)]) - populator.populate() - tmp_file.unlink() - - -def pipeline_coreml(model, im, file, names, y, prefix=colorstr('CoreML Pipeline:')): - # YOLOv5 CoreML pipeline - import coremltools as ct - from PIL import Image - - print(f'{prefix} starting pipeline with coremltools {ct.__version__}...') - batch_size, ch, h, w = list(im.shape) # BCHW - t = time.time() - - # Output shapes - spec = model.get_spec() - out0, out1 = iter(spec.description.output) - if platform.system() == 'Darwin': - img = Image.new('RGB', (w, h)) # img(192 width, 320 height) - # img = torch.zeros((*opt.img_size, 3)).numpy() # img size(320,192,3) iDetection - out = model.predict({'image': img}) - out0_shape, out1_shape = out[out0.name].shape, out[out1.name].shape - else: # linux and windows can not run model.predict(), get sizes from pytorch output y - s = tuple(y[0].shape) - out0_shape, out1_shape = (s[1], s[2] - 5), (s[1], 4) # (3780, 80), (3780, 4) - - # Checks - nx, ny = spec.description.input[0].type.imageType.width, spec.description.input[0].type.imageType.height - na, nc = out0_shape - # na, nc = out0.type.multiArrayType.shape # number anchors, classes - assert len(names) == nc, f'{len(names)} names found for nc={nc}' # check - - # Define output shapes (missing) - out0.type.multiArrayType.shape[:] = out0_shape # (3780, 80) - out1.type.multiArrayType.shape[:] = out1_shape # (3780, 4) - # spec.neuralNetwork.preprocessing[0].featureName = '0' - - # Flexible input shapes - # from coremltools.models.neural_network import flexible_shape_utils - # s = [] # shapes - # s.append(flexible_shape_utils.NeuralNetworkImageSize(320, 192)) - # s.append(flexible_shape_utils.NeuralNetworkImageSize(640, 384)) # (height, width) - # flexible_shape_utils.add_enumerated_image_sizes(spec, feature_name='image', sizes=s) - # r = flexible_shape_utils.NeuralNetworkImageSizeRange() # shape ranges - # r.add_height_range((192, 640)) - # r.add_width_range((192, 640)) - # flexible_shape_utils.update_image_size_range(spec, feature_name='image', size_range=r) - - # Print - print(spec.description) - - # Model from 
spec - model = ct.models.MLModel(spec) - - # 3. Create NMS protobuf - nms_spec = ct.proto.Model_pb2.Model() - nms_spec.specificationVersion = 5 - for i in range(2): - decoder_output = model._spec.description.output[i].SerializeToString() - nms_spec.description.input.add() - nms_spec.description.input[i].ParseFromString(decoder_output) - nms_spec.description.output.add() - nms_spec.description.output[i].ParseFromString(decoder_output) - - nms_spec.description.output[0].name = 'confidence' - nms_spec.description.output[1].name = 'coordinates' - - output_sizes = [nc, 4] - for i in range(2): - ma_type = nms_spec.description.output[i].type.multiArrayType - ma_type.shapeRange.sizeRanges.add() - ma_type.shapeRange.sizeRanges[0].lowerBound = 0 - ma_type.shapeRange.sizeRanges[0].upperBound = -1 - ma_type.shapeRange.sizeRanges.add() - ma_type.shapeRange.sizeRanges[1].lowerBound = output_sizes[i] - ma_type.shapeRange.sizeRanges[1].upperBound = output_sizes[i] - del ma_type.shape[:] - - nms = nms_spec.nonMaximumSuppression - nms.confidenceInputFeatureName = out0.name # 1x507x80 - nms.coordinatesInputFeatureName = out1.name # 1x507x4 - nms.confidenceOutputFeatureName = 'confidence' - nms.coordinatesOutputFeatureName = 'coordinates' - nms.iouThresholdInputFeatureName = 'iouThreshold' - nms.confidenceThresholdInputFeatureName = 'confidenceThreshold' - nms.iouThreshold = 0.45 - nms.confidenceThreshold = 0.25 - nms.pickTop.perClass = True - nms.stringClassLabels.vector.extend(names.values()) - nms_model = ct.models.MLModel(nms_spec) - - # 4. Pipeline models together - pipeline = ct.models.pipeline.Pipeline(input_features=[('image', ct.models.datatypes.Array(3, ny, nx)), - ('iouThreshold', ct.models.datatypes.Double()), - ('confidenceThreshold', ct.models.datatypes.Double())], - output_features=['confidence', 'coordinates']) - pipeline.add_model(model) - pipeline.add_model(nms_model) - - # Correct datatypes - pipeline.spec.description.input[0].ParseFromString(model._spec.description.input[0].SerializeToString()) - pipeline.spec.description.output[0].ParseFromString(nms_model._spec.description.output[0].SerializeToString()) - pipeline.spec.description.output[1].ParseFromString(nms_model._spec.description.output[1].SerializeToString()) - - # Update metadata - pipeline.spec.specificationVersion = 5 - pipeline.spec.description.metadata.versionString = 'https://github.com/ultralytics/yolov5' - pipeline.spec.description.metadata.shortDescription = 'https://github.com/ultralytics/yolov5' - pipeline.spec.description.metadata.author = 'glenn.jocher@ultralytics.com' - pipeline.spec.description.metadata.license = 'https://github.com/ultralytics/yolov5/blob/master/LICENSE' - pipeline.spec.description.metadata.userDefined.update({ - 'classes': ','.join(names.values()), - 'iou_threshold': str(nms.iouThreshold), - 'confidence_threshold': str(nms.confidenceThreshold)}) - - # Save the model - f = file.with_suffix('.mlmodel') # filename - model = ct.models.MLModel(pipeline.spec) - model.input_description['image'] = 'Input image' - model.input_description['iouThreshold'] = f'(optional) IOU Threshold override (default: {nms.iouThreshold})' - model.input_description['confidenceThreshold'] = \ - f'(optional) Confidence Threshold override (default: {nms.confidenceThreshold})' - model.output_description['confidence'] = 'Boxes × Class confidence (see user-defined metadata "classes")' - model.output_description['coordinates'] = 'Boxes × [x, y, width, height] (relative to image size)' - model.save(f) # pipelined - print(f'{prefix} 
pipeline success ({time.time() - t:.2f}s), saved as {f} ({file_size(f):.1f} MB)') - - -@smart_inference_mode() -def run( - data=ROOT / 'data/coco128.yaml', # 'dataset.yaml path' - weights=ROOT / 'yolov5s.pt', # weights path - imgsz=(640, 640), # image (height, width) - batch_size=1, # batch size - device='cpu', # cuda device, i.e. 0 or 0,1,2,3 or cpu - include=('torchscript', 'onnx'), # include formats - half=False, # FP16 half-precision export - inplace=False, # set YOLOv5 Detect() inplace=True - keras=False, # use Keras - optimize=False, # TorchScript: optimize for mobile - int8=False, # CoreML/TF INT8 quantization - dynamic=False, # ONNX/TF/TensorRT: dynamic axes - simplify=False, # ONNX: simplify model - opset=12, # ONNX: opset version - verbose=False, # TensorRT: verbose log - workspace=4, # TensorRT: workspace size (GB) - nms=False, # TF: add NMS to model - agnostic_nms=False, # TF: add agnostic NMS to model - topk_per_class=100, # TF.js NMS: topk per class to keep - topk_all=100, # TF.js NMS: topk for all classes to keep - iou_thres=0.45, # TF.js NMS: IoU threshold - conf_thres=0.25, # TF.js NMS: confidence threshold -): - t = time.time() - include = [x.lower() for x in include] # to lowercase - fmts = tuple(export_formats()['Argument'][1:]) # --include arguments - flags = [x in include for x in fmts] - assert sum(flags) == len(include), f'ERROR: Invalid --include {include}, valid --include arguments are {fmts}' - jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle = flags # export booleans - file = Path(url2file(weights) if str(weights).startswith(('http:/', 'https:/')) else weights) # PyTorch weights - - # Load PyTorch model - device = select_device(device) - if half: - assert device.type != 'cpu' or coreml, '--half only compatible with GPU export, i.e. use --device 0' - assert not dynamic, '--half not compatible with --dynamic, i.e. use either --half or --dynamic but not both' - model = attempt_load(weights, device=device, inplace=True, fuse=True) # load FP32 model - - # Checks - imgsz *= 2 if len(imgsz) == 1 else 1 # expand - if optimize: - assert device.type == 'cpu', '--optimize not compatible with cuda devices, i.e. 
use --device cpu' - - # Input - gs = int(max(model.stride)) # grid size (max stride) - imgsz = [check_img_size(x, gs) for x in imgsz] # verify img_size are gs-multiples - im = torch.zeros(batch_size, 3, *imgsz).to(device) # image size(1,3,320,192) BCHW iDetection - - # Update model - model.eval() - for k, m in model.named_modules(): - if isinstance(m, Detect): - m.inplace = inplace - m.dynamic = dynamic - m.export = True - - for _ in range(2): - y = model(im) # dry runs - if half and not coreml: - im, model = im.half(), model.half() # to FP16 - shape = tuple((y[0] if isinstance(y, tuple) else y).shape) # model output shape - metadata = {'stride': int(max(model.stride)), 'names': model.names} # model metadata - LOGGER.info(f"\n{colorstr('PyTorch:')} starting from {file} with output shape {shape} ({file_size(file):.1f} MB)") - - # Exports - f = [''] * len(fmts) # exported filenames - warnings.filterwarnings(action='ignore', category=torch.jit.TracerWarning) # suppress TracerWarning - if jit: # TorchScript - f[0], _ = export_torchscript(model, im, file, optimize) - if engine: # TensorRT required before ONNX - f[1], _ = export_engine(model, im, file, half, dynamic, simplify, workspace, verbose) - if onnx or xml: # OpenVINO requires ONNX - f[2], _ = export_onnx(model, im, file, opset, dynamic, simplify) - if xml: # OpenVINO - f[3], _ = export_openvino(file, metadata, half) - if coreml: # CoreML - f[4], ct_model = export_coreml(model, im, file, int8, half, nms) - if nms: - pipeline_coreml(ct_model, im, file, model.names, y) - if any((saved_model, pb, tflite, edgetpu, tfjs)): # TensorFlow formats - assert not tflite or not tfjs, 'TFLite and TF.js models must be exported separately, please pass only one type.' - assert not isinstance(model, ClassificationModel), 'ClassificationModel export to TF formats not yet supported.' 
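- # Editor note (illustrative, not part of the upstream script): the TensorFlow exports below are
- # chained: the Keras/SavedModel step runs first and its s_model is reused by the GraphDef (pb),
- # TFLite / Edge TPU and TF.js branches, so a hypothetical call such as
- #   python export.py --weights yolov5s.pt --include tflite
- # also produces the intermediate SavedModel.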
- f[5], s_model = export_saved_model(model.cpu(), - im, - file, - dynamic, - tf_nms=nms or agnostic_nms or tfjs, - agnostic_nms=agnostic_nms or tfjs, - topk_per_class=topk_per_class, - topk_all=topk_all, - iou_thres=iou_thres, - conf_thres=conf_thres, - keras=keras) - if pb or tfjs: # pb prerequisite to tfjs - f[6], _ = export_pb(s_model, file) - if tflite or edgetpu: - f[7], _ = export_tflite(s_model, im, file, int8 or edgetpu, data=data, nms=nms, agnostic_nms=agnostic_nms) - if edgetpu: - f[8], _ = export_edgetpu(file) - add_tflite_metadata(f[8] or f[7], metadata, num_outputs=len(s_model.outputs)) - if tfjs: - f[9], _ = export_tfjs(file, int8) - if paddle: # PaddlePaddle - f[10], _ = export_paddle(model, im, file, metadata) - - # Finish - f = [str(x) for x in f if x] # filter out '' and None - if any(f): - cls, det, seg = (isinstance(model, x) for x in (ClassificationModel, DetectionModel, SegmentationModel)) # type - det &= not seg # segmentation models inherit from SegmentationModel(DetectionModel) - dir = Path('segment' if seg else 'classify' if cls else '') - h = '--half' if half else '' # --half FP16 inference arg - s = '# WARNING ⚠️ ClassificationModel not yet supported for PyTorch Hub AutoShape inference' if cls else \ - '# WARNING ⚠️ SegmentationModel not yet supported for PyTorch Hub AutoShape inference' if seg else '' - LOGGER.info(f'\nExport complete ({time.time() - t:.1f}s)' - f"\nResults saved to {colorstr('bold', file.parent.resolve())}" - f"\nDetect: python {dir / ('detect.py' if det else 'predict.py')} --weights {f[-1]} {h}" - f"\nValidate: python {dir / 'val.py'} --weights {f[-1]} {h}" - f"\nPyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', '{f[-1]}') {s}" - f'\nVisualize: https://netron.app') - return f # return list of exported files/dirs - - -def parse_opt(known=False): - parser = argparse.ArgumentParser() - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path') - parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model.pt path(s)') - parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640, 640], help='image (h, w)') - parser.add_argument('--batch-size', type=int, default=1, help='batch size') - parser.add_argument('--device', default='cpu', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--half', action='store_true', help='FP16 half-precision export') - parser.add_argument('--inplace', action='store_true', help='set YOLOv5 Detect() inplace=True') - parser.add_argument('--keras', action='store_true', help='TF: use Keras') - parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile') - parser.add_argument('--int8', action='store_true', help='CoreML/TF INT8 quantization') - parser.add_argument('--dynamic', action='store_true', help='ONNX/TF/TensorRT: dynamic axes') - parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model') - parser.add_argument('--opset', type=int, default=17, help='ONNX: opset version') - parser.add_argument('--verbose', action='store_true', help='TensorRT: verbose log') - parser.add_argument('--workspace', type=int, default=4, help='TensorRT: workspace size (GB)') - parser.add_argument('--nms', action='store_true', help='TF: add NMS to model') - parser.add_argument('--agnostic-nms', action='store_true', help='TF: add agnostic NMS to model') - parser.add_argument('--topk-per-class', type=int, default=100, help='TF.js NMS: topk per class to keep') - parser.add_argument('--topk-all', type=int, default=100, help='TF.js NMS: topk for all classes to keep') - parser.add_argument('--iou-thres', type=float, default=0.45, help='TF.js NMS: IoU threshold') - parser.add_argument('--conf-thres', type=float, default=0.25, help='TF.js NMS: confidence threshold') - parser.add_argument( - '--include', - nargs='+', - default=['torchscript'], - help='torchscript, onnx, openvino, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle') - opt = parser.parse_known_args()[0] if known else parser.parse_args() - print_args(vars(opt)) - return opt - - -def main(opt): - for opt.weights in (opt.weights if isinstance(opt.weights, list) else [opt.weights]): - run(**vars(opt)) - - -if __name__ == '__main__': - opt = parse_opt() - main(opt) diff --git a/spaces/xiaozhengchina/bingo/Dockerfile b/spaces/xiaozhengchina/bingo/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/xiaozhengchina/bingo/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/xnetba/Chat_advance/README.md b/spaces/xnetba/Chat_advance/README.md deleted file mode 100644 index 7d1a154a6a22e208e335edf4c8dae5477c5dfcff..0000000000000000000000000000000000000000 --- a/spaces/xnetba/Chat_advance/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChuanhuChatGPT -emoji: 🐯 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.33.1 -app_file: ChuanhuChatbot.py -pinned: false -license: gpl-3.0 -duplicated_from: kaicheng/Chat_advance ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/xuxw98/TAPA/evaluate/full.py b/spaces/xuxw98/TAPA/evaluate/full.py deleted file mode 100644 index 48d5fb89a9ccfb1ba6be31f2132095a461012726..0000000000000000000000000000000000000000 --- a/spaces/xuxw98/TAPA/evaluate/full.py +++ /dev/null @@ -1,147 +0,0 @@ -# This mimics GPTQ's evaluation metrics: https://github.com/IST-DASLab/gptq/ -# Thanks to E. 
Frantar et al GPTQ: Accurate Post-training Compression for GPT, arXiv:2210.17323 -import math -import sys -import time -from pathlib import Path -from typing import Optional - -import lightning as L -import torch -import tqdm - -# support running without installing as a package -wd = Path(__file__).parent.parent.resolve() -sys.path.append(str(wd)) - -from lit_llama import LLaMA, Tokenizer -from lit_llama.utils import EmptyInitOnDevice - -from datasets import load_dataset - - -def load_eval_data(dataset_name: str) -> str: - # this mimics gptq datautils - if dataset_name == "wikitext": - # traindata = load_dataset('wikitext', 'wikitext-2-raw-v1', split='train') - testdata = load_dataset("wikitext", "wikitext-2-raw-v1", split="test") - testdata = "\n\n".join(testdata["text"]) - elif dataset_name == "ptb": - testdata = load_dataset("ptb_text_only", "penn_treebank", split="test") - testdata = "\n\n".join(testdata["sentence"]) - elif dataset_name == "c4": - testdata = load_dataset( - "allenai/c4", - "allenai--c4", - data_files={"validation": "en/c4-validation.00000-of-00008.json.gz"}, - split="validation", - ) - testdata = " ".join(testdata[:1100]["text"]) - - else: - raise ValueError("invalid dataset name (wikitext, ptb, c4 are allowed)") - return testdata - - -def main( - datasets: str = "wikitext,ptb,c4", - *, - # compilation fails as it does not support torch.complex64 for RoPE - # compile: bool = False, - accelerator: str = "auto", - checkpoint_path: Optional[Path] = None, - tokenizer_path: Path = Path("checkpoints/lit-llama/tokenizer.model"), - model_size: str = "7B", - dtype: str = "float32", - quantize: Optional[str] = None, -) -> None: - """Generates text samples based on a pre-trained LLaMA model and tokenizer. - - Args: - datasets: The datasets to use as a comma separated string - # compile: Whether to compile the model. - accelerator: The hardware to run on. Possible choices are: - ``"cpu"``, ``"cuda"``, ``"mps"``, ``"gpu"``, ``"tpu"``, ``"auto"``. - checkpoint_path: The checkpoint path to load. - tokenizer_path: The tokenizer path to load. - dtype: The tensor dtype for choosing the floating-point precision - quantize: Whether to quantize the model and using which method: - ``"llm.int8"``: LLM.int8() mode, - ``"gptq.int4"``: GPTQ 4-bit mode. 
- """ - if not checkpoint_path: - checkpoint_path = Path(f"checkpoints/lit-llama/{model_size}/lit-llama.pth") - assert checkpoint_path.is_file() - assert tokenizer_path.is_file() - - fabric = L.Fabric(accelerator=accelerator, devices=1) - - dt = getattr(torch, dtype, None) - if not isinstance(dt, torch.dtype): - raise ValueError(f"{dtype} is not a valid dtype.") - dtype = dt - - with EmptyInitOnDevice( - device=fabric.device, dtype=dtype, quantization_mode=quantize - ): - print("Loading model ...", file=sys.stderr) - t0 = time.time() - model = LLaMA.from_name(model_size) - checkpoint = torch.load(checkpoint_path) - model.load_state_dict(checkpoint) - print(f"Time to load model: {time.time() - t0:.02f} seconds.", file=sys.stderr) - - model.eval() - - # if compile: - # model = torch.compile(model) - - total_toks = 0 - model = fabric.setup_module(model) - - tokenizer = Tokenizer(tokenizer_path) - - for dsname in datasets.split(","): - test_string = load_eval_data(dsname) - encoded_text = tokenizer.encode( - test_string, bos=True, eos=False, device=fabric.device - ) - encoded_text = encoded_text[ - None, : 256 * model.config.block_size - ] # add batch dimension, trim like gptq implementation - t0 = time.perf_counter() - - nlls = 0 - toks = 0 - with torch.inference_mode(): - block_size = 2048 # this is for compat with gptq, and indeed we get much worse beyond this (https://github.com/facebookresearch/llama/blob/57b0eb62de0636e75af471e49e2f1862d908d9d8/llama/model.py#L30) - for i in tqdm.tqdm(range(0, encoded_text.shape[1], block_size)): - inp = encoded_text[:, i : i + block_size] - logits = model(inp)[0] - nll = torch.nn.functional.cross_entropy( - logits[:-1], inp[0, 1:].to(dtype=torch.long), reduction="sum" - ) - toks += inp.size(1) - 1 - nlls += nll.item() - - print(encoded_text.shape, logits.shape) - ppl = math.exp(nlls / toks) - print(f"Perplexity on {dsname}: {ppl:.2f}") - total_toks += toks - - t = time.perf_counter() - t0 - print( - f"\n\nTime for inference: {t:.02f} sec total, {total_toks / t:.02f} tokens/sec", - file=sys.stderr, - ) - print( - f"Memory used: {torch.cuda.max_memory_reserved() / 1e9:.02f} GB", - file=sys.stderr, - ) - - -if __name__ == "__main__": - from jsonargparse import CLI - - torch.set_float32_matmul_precision("high") - CLI(main) diff --git a/spaces/yaoshining/text-generation-webui/extensions/ngrok/README.md b/spaces/yaoshining/text-generation-webui/extensions/ngrok/README.md deleted file mode 100644 index 0324bf9852408d9d2b86cc0165c2d548996f9c94..0000000000000000000000000000000000000000 --- a/spaces/yaoshining/text-generation-webui/extensions/ngrok/README.md +++ /dev/null @@ -1,69 +0,0 @@ -# Adding an ingress URL through the ngrok Agent SDK for Python - -[ngrok](https://ngrok.com) is a globally distributed reverse proxy commonly used for quickly getting a public URL to a -service running inside a private network, such as on your local laptop. The ngrok agent is usually -deployed inside a private network and is used to communicate with the ngrok cloud service. - -By default the authtoken in the NGROK_AUTHTOKEN environment variable will be used. Alternatively one may be specified in -the `settings.json` file, see the Examples below. Retrieve your authtoken on the [Auth Token page of your ngrok dashboard](https://dashboard.ngrok.com/get-started/your-authtoken), signing up is free. 
- -# Documentation - -For a list of all available options, see [the configuration documentation](https://ngrok.com/docs/ngrok-agent/config/) or [the connect example](https://github.com/ngrok/ngrok-py/blob/main/examples/ngrok-connect-full.py). - -The ngrok Python SDK is [on github here](https://github.com/ngrok/ngrok-py). A quickstart guide and a full API reference are included in the [ngrok-py Python API documentation](https://ngrok.github.io/ngrok-py/). - -# Running - -To enable ngrok install the requirements and then add `--extension ngrok` to the command line options, for instance: - -```bash -pip install -r extensions/ngrok/requirements.txt -python server.py --extension ngrok -``` - -In the output you should then see something like this: - -```bash -INFO:Loading the extension "ngrok"... -INFO:Session created -INFO:Created tunnel "9d9d0944dc75ff9d3aae653e5eb29fe9" with url "https://d83706cf7be7.ngrok.app" -INFO:Tunnel "9d9d0944dc75ff9d3aae653e5eb29fe9" TCP forwarding to "localhost:7860" -INFO:Ingress established at https://d83706cf7be7.ngrok.app -``` - -You can now access the webui via the url shown, in this case `https://d83706cf7be7.ngrok.app`. It is recommended to add some authentication to the ingress, see below. - -# Example Settings - -In `settings.json` add a `ngrok` key with a dictionary of options, for instance: - -To enable basic authentication: -```json -{ - "ngrok": { - "basic_auth": "user:password" - } -} -``` - -To enable OAUTH authentication: -```json -{ - "ngrok": { - "oauth_provider": "google", - "oauth_allow_domains": "asdf.com", - "oauth_allow_emails": "asdf@asdf.com" - } -} -``` - -To add an authtoken instead of using the NGROK_AUTHTOKEN environment variable: -```json -{ - "ngrok": { - "authtoken": "", - "authtoken_from_env":false - } -} -``` \ No newline at end of file diff --git a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/pretrained_example.py b/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/pretrained_example.py deleted file mode 100644 index 63baef08bfa4bf34f52a0cf63e10a0b6783ac316..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/pretrained_example.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Minimal script for generating an image using pre-trained StyleGAN generator.""" - -import os -import pickle -import numpy as np -import PIL.Image -import dnnlib -import dnnlib.tflib as tflib -import config - -def main(): - # Initialize TensorFlow. - tflib.init_tf() - - # Load pre-trained network. - url = 'https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ' # karras2019stylegan-ffhq-1024x1024.pkl - with dnnlib.util.open_url(url, cache_dir=config.cache_dir) as f: - _G, _D, Gs = pickle.load(f) - # _G = Instantaneous snapshot of the generator. Mainly useful for resuming a previous training run. - # _D = Instantaneous snapshot of the discriminator. Mainly useful for resuming a previous training run. - # Gs = Long-term average of the generator. Yields higher-quality results than the instantaneous snapshot. - - # Print network details. - Gs.print_layers() - - # Pick latent vector. 
- rnd = np.random.RandomState(5) - latents = rnd.randn(1, Gs.input_shape[1]) - - # Generate image. - fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True) - images = Gs.run(latents, None, truncation_psi=0.7, randomize_noise=True, output_transform=fmt) - - # Save image. - os.makedirs(config.result_dir, exist_ok=True) - png_filename = os.path.join(config.result_dir, 'example.png') - PIL.Image.fromarray(images[0], 'RGB').save(png_filename) - -if __name__ == "__main__": - main() diff --git a/spaces/yderre-aubay/midi-player-demo/src/common/track/Track.test.ts b/spaces/yderre-aubay/midi-player-demo/src/common/track/Track.test.ts deleted file mode 100644 index 06faf0ccb03361970783b73066137dd69ee59761..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/common/track/Track.test.ts +++ /dev/null @@ -1,39 +0,0 @@ -import { deserialize, serialize } from "serializr" -import Track from "./Track" -import { NoteEvent } from "./TrackEvent" -import { emptyTrack } from "./TrackFactory" - -describe("Track", () => { - it("should be serializable", () => { - const track = new Track() - track.channel = 5 - track.addEvent({ - type: "channel", - subtype: "note", - duration: 120, - tick: 123, - velocity: 100, - noteNumber: 100, - }) - const s = serialize(track) - const t = deserialize(Track, s) - expect(t.channel).toBe(5) - expect(t.endOfTrack).toBe(track.endOfTrack) - expect(t.events.length).toBe(1) - expect(t.events[0].tick).toBe(123) - }) - it("should reset end of track after note deletion", () => { - const track = emptyTrack(5) - const noteEvent = track.addEvent({ - type: "channel", - subtype: "note", - duration: 120, - tick: 123, - velocity: 100, - noteNumber: 100, - }) - expect(track.endOfTrack).toBe(243) - track.removeEvent(noteEvent.id) - expect(track.endOfTrack).toBe(0) - }) -}) diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/FaceBoxesV2/utils/timer.py b/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/FaceBoxesV2/utils/timer.py deleted file mode 100644 index e4b3b8098a5ad41f8d18d42b6b2fedb694aa5508..0000000000000000000000000000000000000000 --- a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/FaceBoxesV2/utils/timer.py +++ /dev/null @@ -1,40 +0,0 @@ -# -------------------------------------------------------- -# Fast R-CNN -# Copyright (c) 2015 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Ross Girshick -# -------------------------------------------------------- - -import time - - -class Timer(object): - """A simple timer.""" - def __init__(self): - self.total_time = 0. - self.calls = 0 - self.start_time = 0. - self.diff = 0. - self.average_time = 0. - - def tic(self): - # using time.time instead of time.clock because time time.clock - # does not normalize for multithreading - self.start_time = time.time() - - def toc(self, average=True): - self.diff = time.time() - self.start_time - self.total_time += self.diff - self.calls += 1 - self.average_time = self.total_time / self.calls - if average: - return self.average_time - else: - return self.diff - - def clear(self): - self.total_time = 0. - self.calls = 0 - self.start_time = 0. - self.diff = 0. - self.average_time = 0. 
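-
-
-# Illustrative usage sketch (editor addition, not part of the original module): the tic()/toc()
-# pattern below times a hypothetical workload and reports the running average over calls.
-if __name__ == '__main__':
-    def _some_work():  # hypothetical stand-in for the code being timed
-        time.sleep(0.01)
-
-    _timer = Timer()
-    for _ in range(5):
-        _timer.tic()
-        _some_work()
-        avg = _timer.toc()  # toc() returns the running average by default
-    print('average: %.4fs over %d calls' % (avg, _timer.calls))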
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/markuplm/processing_markuplm.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/markuplm/processing_markuplm.py deleted file mode 100644 index 51307d20eb5f3bf489920b45bee999383f6bb0e2..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/markuplm/processing_markuplm.py +++ /dev/null @@ -1,145 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Processor class for MarkupLM. -""" -from typing import Optional, Union - -from ...file_utils import TensorType -from ...processing_utils import ProcessorMixin -from ...tokenization_utils_base import BatchEncoding, PaddingStrategy, TruncationStrategy - - -class MarkupLMProcessor(ProcessorMixin): - r""" - Constructs a MarkupLM processor which combines a MarkupLM feature extractor and a MarkupLM tokenizer into a single - processor. - - [`MarkupLMProcessor`] offers all the functionalities you need to prepare data for the model. - - It first uses [`MarkupLMFeatureExtractor`] to extract nodes and corresponding xpaths from one or more HTML strings. - Next, these are provided to [`MarkupLMTokenizer`] or [`MarkupLMTokenizerFast`], which turns them into token-level - `input_ids`, `attention_mask`, `token_type_ids`, `xpath_tags_seq` and `xpath_subs_seq`. - - Args: - feature_extractor (`MarkupLMFeatureExtractor`): - An instance of [`MarkupLMFeatureExtractor`]. The feature extractor is a required input. - tokenizer (`MarkupLMTokenizer` or `MarkupLMTokenizerFast`): - An instance of [`MarkupLMTokenizer`] or [`MarkupLMTokenizerFast`]. The tokenizer is a required input. - parse_html (`bool`, *optional*, defaults to `True`): - Whether or not to use `MarkupLMFeatureExtractor` to parse HTML strings into nodes and corresponding xpaths. - """ - feature_extractor_class = "MarkupLMFeatureExtractor" - tokenizer_class = ("MarkupLMTokenizer", "MarkupLMTokenizerFast") - parse_html = True - - def __call__( - self, - html_strings=None, - nodes=None, - xpaths=None, - node_labels=None, - questions=None, - add_special_tokens: bool = True, - padding: Union[bool, str, PaddingStrategy] = False, - truncation: Union[bool, str, TruncationStrategy] = None, - max_length: Optional[int] = None, - stride: int = 0, - pad_to_multiple_of: Optional[int] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_offsets_mapping: bool = False, - return_length: bool = False, - verbose: bool = True, - return_tensors: Optional[Union[str, TensorType]] = None, - **kwargs, - ) -> BatchEncoding: - """ - This method first forwards the `html_strings` argument to [`~MarkupLMFeatureExtractor.__call__`]. 
Next, it - passes the `nodes` and `xpaths` along with the additional arguments to [`~MarkupLMTokenizer.__call__`] and - returns the output. - - Optionally, one can also provide a `text` argument which is passed along as first sequence. - - Please refer to the docstring of the above two methods for more information. - """ - # first, create nodes and xpaths - if self.parse_html: - if html_strings is None: - raise ValueError("Make sure to pass HTML strings in case `parse_html` is set to `True`") - - if nodes is not None or xpaths is not None or node_labels is not None: - raise ValueError( - "Please don't pass nodes, xpaths nor node labels in case `parse_html` is set to `True`" - ) - - features = self.feature_extractor(html_strings) - nodes = features["nodes"] - xpaths = features["xpaths"] - else: - if html_strings is not None: - raise ValueError("You have passed HTML strings but `parse_html` is set to `False`.") - if nodes is None or xpaths is None: - raise ValueError("Make sure to pass nodes and xpaths in case `parse_html` is set to `False`") - - # # second, apply the tokenizer - if questions is not None and self.parse_html: - if isinstance(questions, str): - questions = [questions] # add batch dimension (as the feature extractor always adds a batch dimension) - - encoded_inputs = self.tokenizer( - text=questions if questions is not None else nodes, - text_pair=nodes if questions is not None else None, - xpaths=xpaths, - node_labels=node_labels, - add_special_tokens=add_special_tokens, - padding=padding, - truncation=truncation, - max_length=max_length, - stride=stride, - pad_to_multiple_of=pad_to_multiple_of, - return_token_type_ids=return_token_type_ids, - return_attention_mask=return_attention_mask, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - return_offsets_mapping=return_offsets_mapping, - return_length=return_length, - verbose=verbose, - return_tensors=return_tensors, - **kwargs, - ) - - return encoded_inputs - - def batch_decode(self, *args, **kwargs): - """ - This method forwards all its arguments to TrOCRTokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please refer - to the docstring of this method for more information. - """ - return self.tokenizer.batch_decode(*args, **kwargs) - - def decode(self, *args, **kwargs): - """ - This method forwards all its arguments to TrOCRTokenizer's [`~PreTrainedTokenizer.decode`]. Please refer to the - docstring of this method for more information. - """ - return self.tokenizer.decode(*args, **kwargs) - - @property - def model_input_names(self): - tokenizer_input_names = self.tokenizer.model_input_names - return tokenizer_input_names diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/opt/configuration_opt.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/opt/configuration_opt.py deleted file mode 100644 index d2b7a4347ea4e33743c42f4837c8069441424910..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/opt/configuration_opt.py +++ /dev/null @@ -1,150 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The Metaseq Authors and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" OPT model configuration""" -from ...configuration_utils import PretrainedConfig -from ...utils import logging - - -logger = logging.get_logger(__name__) - -OPT_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "facebook/opt-125m": "https://huggingface.co/facebook/opt-125m/blob/main/config.json", - "facebook/opt-350m": "https://huggingface.co/facebook/opt-350m/blob/main/config.json", - "facebook/opt-1.3b": "https://huggingface.co/facebook/opt-1.3b/blob/main/config.json", - "facebook/opt-2.7b": "https://huggingface.co/facebook/opt-2.7b/blob/main/config.json", - "facebook/opt-6.7b": "https://huggingface.co/facebook/opt-6.7b/blob/main/config.json", - "facebook/opt-13b": "https://huggingface.co/facebook/opt-13b/blob/main/config.json", -} - - -class OPTConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`OPTModel`]. It is used to instantiate a OPT model - according to the specified arguments, defining the model architecture. Instantiating a configuration with the - defaults will yield a similar configuration to that of the OPT - [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - - Args: - vocab_size (`int`, *optional*, defaults to 50272): - Vocabulary size of the OPT model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`OPTModel`] - hidden_size (`int`, *optional*, defaults to 768): - Dimensionality of the layers and the pooler layer. - num_hidden_layers (`int`, *optional*, defaults to 12): - Number of decoder layers. - ffn_dim (`int`, *optional*, defaults to 3072): - Dimensionality of the "intermediate" (often named feed-forward) layer in decoder. - num_attention_heads (`int`, *optional*, defaults to 12): - Number of attention heads for each attention layer in the Transformer decoder. - activation_function (`str` or `function`, *optional*, defaults to `"relu"`): - The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, - `"relu"`, `"silu"` and `"gelu_new"` are supported. - max_position_embeddings (`int`, *optional*, defaults to 2048): - The maximum sequence length that this model might ever be used with. Typically set this to something large - just in case (e.g., 512 or 1024 or 2048). - do_layer_norm_before (`bool`, *optional*, defaults to `True`): - Whether to perform layer normalization before the attention block. - word_embed_proj_dim (`int`, *optional*): - `word_embed_proj_dim` can be set to down-project word embeddings, *e.g.* `opt-350m`. Defaults to - `hidden_size`. - dropout (`float`, *optional*, defaults to 0.1): - The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - attention_dropout (`float`, *optional*, defaults to 0.0): - The dropout ratio for the attention probabilities. - layerdrop (`float`, *optional*, defaults to 0.0): - The LayerDrop probability. 
See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more - details. - init_std (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - use_cache (`bool`, *optional*, defaults to `True`): - Whether or not the model should return the last key/values attentions (not used by all models). - enable_bias (`bool`, *optional*, defaults to `True`): - Whether or not if the linear layers in the attention blocks should use the bias term. - layer_norm_elementwise_affine (`bool`, *optional*, defaults to `True`): - Whether or not if the layer norms should have learnable parameters. - - Example: - - ```python - >>> from transformers import OPTConfig, OPTModel - - >>> # Initializing a OPT facebook/opt-large style configuration - >>> configuration = OPTConfig() - - >>> # Initializing a model (with random weights) from the facebook/opt-large style configuration - >>> model = OPTModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - model_type = "opt" - keys_to_ignore_at_inference = ["past_key_values"] - - def __init__( - self, - vocab_size=50272, - hidden_size=768, - num_hidden_layers=12, - ffn_dim=3072, - max_position_embeddings=2048, - do_layer_norm_before=True, - _remove_final_layer_norm=False, - word_embed_proj_dim=None, - dropout=0.1, - attention_dropout=0.0, - num_attention_heads=12, - activation_function="relu", - layerdrop=0.0, - init_std=0.02, - use_cache=True, - pad_token_id=1, - bos_token_id=2, - eos_token_id=2, - enable_bias=True, - layer_norm_elementwise_affine=True, - **kwargs, - ): - super().__init__( - pad_token_id=pad_token_id, - bos_token_id=bos_token_id, - eos_token_id=eos_token_id, - **kwargs, - ) - self.vocab_size = vocab_size - self.max_position_embeddings = max_position_embeddings - self.num_attention_heads = num_attention_heads - self.word_embed_proj_dim = word_embed_proj_dim if word_embed_proj_dim is not None else hidden_size - self.ffn_dim = ffn_dim - self.hidden_size = hidden_size - self.num_hidden_layers = num_hidden_layers - self.dropout = dropout - self.attention_dropout = attention_dropout - self.activation_function = activation_function - self.init_std = init_std - self.layerdrop = layerdrop - self.use_cache = use_cache - self.do_layer_norm_before = do_layer_norm_before - # We keep these variables at `True` for backward compatibility. 
- self.enable_bias = enable_bias - self.layer_norm_elementwise_affine = layer_norm_elementwise_affine - - # Note that the only purpose of `_remove_final_layer_norm` is to keep backward compatibility - # with checkpoints that have been fine-tuned before transformers v4.20.1 - # see https://github.com/facebookresearch/metaseq/pull/164 - self._remove_final_layer_norm = _remove_final_layer_norm diff --git a/spaces/ynhe/AskAnything/models/grit_src/grit/modeling/text/text_decoder.py b/spaces/ynhe/AskAnything/models/grit_src/grit/modeling/text/text_decoder.py deleted file mode 100644 index 071baa7a52d21d7132cc492f070cba066d17aa43..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/grit/modeling/text/text_decoder.py +++ /dev/null @@ -1,672 +0,0 @@ -# Modified by Jialian Wu from -# https://github.com/microsoft/GenerativeImage2Text/blob/main/generativeimage2text/layers/decoder.py -# and https://github.com/kdexd/virtex -from torch import nn -import torch -import functools -from torch.nn import functional as F -import warnings - - -class TextualHead(nn.Module): - def __init__(self, - visual_feature_size: int, vocab_size: int, hidden_size: int): - super().__init__() - self.visual_feature_size = visual_feature_size - self.vocab_size = vocab_size - self.hidden_size = hidden_size - - @property - def textual_feature_size(self): - return self.hidden_size - - -class WordAndPositionalEmbedding(nn.Module): - def __init__( - self, - vocab_size: int, - hidden_size: int, - dropout: float = 0.0, - max_caption_length: int = 30, - padding_idx: int = 0, - ): - super().__init__() - self.vocab_size = vocab_size - self.padding_idx = padding_idx - - #self.words = nn.Embedding(vocab_size, hidden_size, padding_idx=padding_idx) - self.words = nn.Embedding(vocab_size, hidden_size) - - # We provide no "padding index" for positional embeddings. We zero out - # the positional embeddings of padded positions as a post-processing. - self.positions = nn.Embedding(max_caption_length, hidden_size) - self.layer_norm = nn.LayerNorm( - hidden_size, eps=1e-8, elementwise_affine=True - ) - self.dropout = nn.Dropout(p=dropout) - - def forward(self, tokens: torch.Tensor): - position_indices = self._create_position_indices(tokens) - - # shape: (batch_size, max_caption_length, hidden_size) - word_embeddings = self.words(tokens) - position_embeddings = self.positions(position_indices) - - # shape: (batch_size, max_caption_length, hidden_size) - embeddings = self.layer_norm(word_embeddings + position_embeddings) - embeddings = self.dropout(embeddings) - - return embeddings - - @functools.lru_cache(maxsize=128) - def _create_position_indices(self, tokens: torch.Tensor): - - # Create position indices of the same size as token indices. 
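- # (Editor note, illustrative) for tokens of shape (batch_size, max_caption_length) this yields
- # [0, 1, ..., max_caption_length - 1] and broadcasts it to every batch row, e.g.
- # max_caption_length=4 gives [[0, 1, 2, 3], [0, 1, 2, 3], ...].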
- batch_size, max_caption_length = tokens.size() - positions = torch.arange( - max_caption_length, dtype=tokens.dtype, device=tokens.device - ) - # shape: (batch_size, max_caption_length) - positions = positions.unsqueeze(0).expand(batch_size, max_caption_length) - return positions - - -class BertEncoderAsDecoder(nn.Module): - def __init__(self, encoder): - super().__init__() - self.encoder = encoder - - def forward(self, tgt, memory, - tgt_mask=None, - tgt_key_padding_mask=None, - memory_key_padding_mask=None, - tgt_bi_valid_mask=None, - encoder_history_states=None, - ): - assert tgt_key_padding_mask is None, 'not supported' - assert tgt_mask.dim() == 2 - assert tgt_mask.shape[0] == tgt_mask.shape[1] - # tgt_mask should always be 0/negative infinity - tgt = tgt.transpose(0, 1) - memory = memory.transpose(0, 1) - - hidden_states = torch.cat((memory, tgt), dim=1) - num_tgt = tgt.shape[1] - num_memory = memory.shape[1] - device = tgt.device - dtype = tgt.dtype - top_left = torch.zeros((num_memory, num_memory), device=device, dtype=dtype) - top_right = torch.full((num_memory, num_tgt), float('-inf'), device=tgt.device, dtype=dtype,) - bottom_left = torch.zeros((num_tgt, num_memory), dtype=dtype, device=tgt_mask.device,) - left = torch.cat((top_left, bottom_left), dim=0) - right = torch.cat((top_right, tgt_mask.to(dtype)), dim=0) - - full_attention_mask = torch.cat((left, right), dim=1)[None, :] - - if memory_key_padding_mask is None: - memory_key_padding_mask = torch.full((memory.shape[0], memory.shape[1]), fill_value=False, device=device) - # if it is False, it means valid. That is, it is not a padding - assert memory_key_padding_mask.dtype == torch.bool - zero_negative_infinity = torch.zeros_like(memory_key_padding_mask, dtype=tgt.dtype) - zero_negative_infinity[memory_key_padding_mask] = float('-inf') - full_attention_mask = full_attention_mask.expand((memory_key_padding_mask.shape[0], num_memory + num_tgt, num_memory + num_tgt)) - full_attention_mask = full_attention_mask.clone() - origin_left = full_attention_mask[:, :, :num_memory] - update = zero_negative_infinity[:, None, :] - full_attention_mask[:, :, :num_memory] = origin_left + update - - if tgt_bi_valid_mask is not None: - # verify the correctness - bs = full_attention_mask.shape[0] - # during inference, tgt_bi_valid_mask's length is not changed, but - # num_tgt can be increased - max_valid_target = tgt_bi_valid_mask.shape[1] - mask = tgt_bi_valid_mask[:, None, :].expand((bs, num_memory+num_tgt, max_valid_target)) - full_attention_mask[:, :, num_memory:(num_memory+max_valid_target)][mask] = 0 - - # add axis for multi-head - full_attention_mask = full_attention_mask[:, None, :, :] - - if encoder_history_states is None: - result = self.encoder( - hidden_states=hidden_states, - attention_mask=full_attention_mask, - encoder_history_states=encoder_history_states, - ) - result = list(result) - result[0] = result[0][:, num_memory:].transpose(0, 1) - if self.encoder.output_hidden_states: - return result[0], result[1] - else: - # make it back-compatible - return result[0] - else: - encoder_out = self.encoder( - hidden_states=hidden_states[:, -1:], - attention_mask=full_attention_mask[:, :, -1:], - encoder_history_states=encoder_history_states, - ) - result = encoder_out[0].transpose(0, 1) - if self.encoder.output_hidden_states: - return result, encoder_out[1] - else: - return result - - -def create_transformer(decoder_type, norm_type, - textual_feature_size, - attention_heads, - feedforward_size, - dropout, - num_layers, - 
output_hidden_states=False, - use_mlp_wrapper=None, - use_act_checkpoint=True, - ): - assert norm_type in ['post', 'pre'] - if decoder_type is None: - LayerClass = ( - nn.TransformerDecoderLayer - if norm_type == "post" - else PreNormTransformerDecoderLayer - ) - _layer = LayerClass( - textual_feature_size, - attention_heads, - dim_feedforward=feedforward_size, - dropout=dropout, - activation="gelu", - ) - return nn.TransformerDecoder(_layer, num_layers) - elif decoder_type == 'bert_en': - from .modeling_bert import BertConfig, BertEncoder - config = BertConfig( - vocab_size_or_config_json_file=30522, - hidden_size=textual_feature_size, - num_hidden_layers=num_layers, - num_attention_heads=attention_heads, - intermediate_size=feedforward_size, - hidden_act="gelu", - hidden_dropout_prob=0.1, - attention_probs_dropout_prob=0.1, - layer_norm_eps=1e-12, - ) - config.pre_norm = (norm_type == 'pre') - config.use_mlp_wrapper = use_mlp_wrapper - config.output_hidden_states = output_hidden_states - encoder = BertEncoder(config, use_act_checkpoint=use_act_checkpoint) - return BertEncoderAsDecoder(encoder) - - -class PreNormTransformerDecoderLayer(nn.TransformerDecoderLayer): - def forward(self, tgt, memory, tgt_mask=None, memory_mask=None, - tgt_key_padding_mask=None, memory_key_padding_mask=None): - # fmt: off - # We use the members (modules) from super-class, just the order of - # operations is changed here. First layernorm, then attention. - tgt2 = self.norm1(tgt) - tgt2, _ = self.self_attn( - tgt2, tgt2, tgt2, attn_mask=tgt_mask, - key_padding_mask=tgt_key_padding_mask - ) - tgt = tgt + self.dropout1(tgt2) - - # Layernorm first, then decoder attention. - tgt2 = self.norm2(tgt) - tgt2, _ = self.multihead_attn( - tgt2, memory, memory, attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask - ) - tgt = tgt + self.dropout2(tgt2) - - # Layernorm first, then transformation through feedforward network. 
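- # (Editor note, illustrative) this is the "pre-norm" ordering selected via norm_type='pre' in
- # create_transformer(): each sublayer normalizes its input before attention/FFN, whereas the
- # stock nn.TransformerDecoderLayer used for norm_type='post' normalizes after the residual add.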
- tgt2 = self.norm3(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2)))) - tgt = tgt + self.dropout3(tgt2) - return tgt - - -class TransformerDecoderTextualHead(TextualHead): - def __init__( - self, - object_feature_size: int, - vocab_size: int, - hidden_size: int, - num_layers: int, - attention_heads: int, - feedforward_size: int, - dropout: float = 0.1, - norm_type: str = "post", - mask_future_positions: bool = True, - max_caption_length: int = 1024, - padding_idx: int = 0, - decoder_type=None, - not_tie_weight=None, - output_hidden_states=None, - use_mlp_wrapper=None, - use_act_checkpoint=True, - ): - super().__init__(object_feature_size, vocab_size, hidden_size) - self.num_layers = num_layers - self.attention_heads = attention_heads - self.feedforward_size = feedforward_size - self.dropout = dropout - assert mask_future_positions - self.padding_idx = padding_idx - - self.object_feature_projection = nn.Sequential( - nn.Linear(object_feature_size, self.textual_feature_size), - nn.LayerNorm(self.textual_feature_size)) - - self.embedding = WordAndPositionalEmbedding( - self.vocab_size, - self.textual_feature_size, - dropout=dropout, - max_caption_length=max_caption_length, - padding_idx=padding_idx, - ) - self.transformer = create_transformer( - decoder_type=decoder_type, - norm_type=norm_type, - textual_feature_size=self.textual_feature_size, - attention_heads=self.attention_heads, - feedforward_size=self.feedforward_size, - dropout=dropout, - num_layers=self.num_layers, - output_hidden_states=output_hidden_states, - use_mlp_wrapper=use_mlp_wrapper, - use_act_checkpoint=use_act_checkpoint, - ) - self.apply(self._init_weights) - - # Create an output linear layer and tie the input and output word - # embeddings to reduce parametejs. - self.output = nn.Linear(self.textual_feature_size, vocab_size) - if not not_tie_weight: - self.output.weight = self.embedding.words.weight - - @staticmethod - def _init_weights(module): - """Initialize weights like BERT - N(0.0, 0.02), bias = 0.""" - - if isinstance(module, nn.Linear): - module.weight.data.normal_(mean=0.0, std=0.02) - elif isinstance(module, nn.MultiheadAttention): - module.in_proj_weight.data.normal_(mean=0.0, std=0.02) - module.out_proj.weight.data.normal_(mean=0.0, std=0.02) - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=0.02) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - - def forward( - self, - hidden_states, - text_tokens, - ): - projected_object_features = self.object_feature_projection(hidden_states) if hidden_states is not None else None - batch_size, max_text_length = text_tokens.size() - text_embeddings = self.embedding(text_tokens) - - # An additive mask for masking the future (one direction). - uni_mask_zero_neg = self._generate_future_mask( - max_text_length, text_embeddings.dtype, text_embeddings.device - ) - - # We transpose the first two dimensions of tokens embeddings and visual - # features, as required by decoder. 
- text_embeddings = text_embeddings.transpose(0, 1) - - projected_object_features = projected_object_features.transpose(0, 1) - - # if transformer here is the pytorch/decoder, there is no chance, the - # output is always tensor - trans_out = self.transformer( - text_embeddings, - projected_object_features, - tgt_mask=uni_mask_zero_neg, - ) - if isinstance(trans_out, tuple): - textual_features = trans_out[0] - else: - assert isinstance(trans_out, torch.Tensor) - textual_features = trans_out - # Undo the transpose and bring batch to dim 0. - # shape: (batch_size, max_caption_length, hidden_size) - textual_features = textual_features.transpose(0, 1) - - # shape: (batch_size, max_caption_length, vocab_size) - output_logits = self.output(textual_features) - if isinstance(trans_out, tuple): - return output_logits, trans_out[1] - else: - return output_logits - - def _generate_future_mask( - self, size: int, dtype: torch.dtype, device: torch.device - ): - # Default mask is for forward direction. Flip for backward direction. - mask = torch.triu( - torch.ones(size, size, device=device, dtype=dtype), diagonal=1 - ) - mask = mask.masked_fill(mask == 1, float("-inf")) - return mask - - -class AutoRegressiveBeamSearch(object): - def __init__( - self, - end_token_id: int, - max_steps: int = 50, - beam_size: int = 5, - objectdet=True, - per_node_beam_size: int = 2, - ): - self._eos_index = end_token_id - self.max_steps = max_steps - self.beam_size = beam_size - self.objectdet = objectdet - self.per_node_beam_size = per_node_beam_size or beam_size - - def search(self, begin_tokens, step): - if self.beam_size > 1 and self.objectdet: - only_return_best = False - else: - only_return_best = True - - batch_size = begin_tokens.size()[0] - - predictions = begin_tokens.unsqueeze(1).expand((batch_size, self.beam_size, begin_tokens.shape[-1])) - # Calculate the first timestep. This is done outside the main loop - # because we are going from a single decoder input (the output from the - # encoder) to the top `beam_size` decoder outputs. On the other hand, - # within the main loop we are going from the `beam_size` elements of the - # beam to `beam_size`^2 candidates from which we will select the top - # `beam_size` elements for the next iteration. - # shape: (batch_size, num_classes) - start_class_logits = step(begin_tokens) - - # Convert logits to logprobs. - # shape: (batch_size * beam_size, vocab_size) - start_class_logprobs = F.log_softmax(start_class_logits, dim=1) - - num_classes = start_class_logprobs.size()[1] - - # shape: (batch_size, beam_size), (batch_size, beam_size) - start_top_logprobs, start_predicted_classes = start_class_logprobs.topk( - self.beam_size - ) - - if ( - self.beam_size == 1 - and (start_predicted_classes == self._eos_index).all() - ): - warnings.warn( - "Empty object description predicted. You may want to increase beam" - "size or ensure your step function is working properly.", - RuntimeWarning, - ) - if only_return_best: - return start_predicted_classes, start_top_logprobs - else: - return start_predicted_classes.unsqueeze(-1), start_top_logprobs - - # The log probs for the last time step. - # shape: (batch_size, beam_size) - last_logprobs = start_top_logprobs - - # shape: (batch_size, beam_size, sequence_length) - predictions = torch.cat([predictions, start_predicted_classes.unsqueeze(-1)], dim=-1) - - # Log probability tensor that mandates that the end token is selected. 
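# The tensors created just below are -inf everywhere except at the EOS index, so
# that beams which have already produced EOS can later be handed (via torch.where)
# a distribution that puts all probability on EOS again. A tiny self-contained
# sketch of that masking; `eos_id`, `vocab`, `finished` and `forced` are
# illustrative names, not identifiers from this repository.
import torch
import torch.nn.functional as F

eos_id, vocab = 2, 5
scores = torch.randn(3, vocab)                        # 3 beams, 5 classes
after_end = torch.full((3, vocab), float("-inf"))
after_end[:, eos_id] = 0.0                            # only EOS stays reachable
finished = torch.tensor([True, False, True]).unsqueeze(-1).expand(3, vocab)
forced = torch.where(finished, after_end, scores)
probs = F.log_softmax(forced, dim=1).exp()
assert abs(probs[0, eos_id].item() - 1.0) < 1e-6      # finished beam is locked to EOS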
- # shape: (batch_size * beam_size, num_classes) - logprobs_after_end = start_class_logprobs.new_full( - (batch_size * self.beam_size, num_classes), float("-inf") - ) - logprobs_after_end[:, self._eos_index] = 0.0 - - logits_after_end = start_class_logprobs.new_full( - (batch_size * self.beam_size, num_classes), float("-inf") - ) - logits_after_end[:, self._eos_index] = 0 - - while predictions.shape[-1] < self.max_steps: - # shape: (batch_size * beam_size,) - last_predictions = predictions[:, :, -1].reshape(batch_size * self.beam_size) - - # If every predicted token from the last step is `self._eos_index`, - # then we can stop early. - if (last_predictions == self._eos_index).all(): - break - - predictions_so_far = predictions.view( - batch_size * self.beam_size, -1 - ) - # shape: (batch_size * beam_size, num_classes) - class_logits = step(predictions_so_far) - - # Set logprobs of last predicted tokens as high negative value to avoid - # repetition in description. - class_logits = class_logits.scatter(1, predictions_so_far[:, -1].view((-1, 1)), -10000) - - # shape: (batch_size * beam_size, num_classes) - last_predictions_expanded = last_predictions.unsqueeze(-1).expand( - batch_size * self.beam_size, num_classes - ) - - # Here we are finding any beams where we predicted the end token in - # the previous timestep and replacing the distribution with a - # one-hot distribution, forcing the beam to predict the end token - # this timestep as well. - class_logits = torch.where( - last_predictions_expanded == self._eos_index, - logits_after_end, - class_logits, - ) - - # Convert logits to logprobs. - # shape: (batch_size * beam_size, vocab_size) - class_logprobs = F.log_softmax(class_logits, dim=1) - - # shape (both): (batch_size * beam_size, per_node_beam_size) - top_logprobs, predicted_classes = class_logprobs.topk( - self.per_node_beam_size - ) - - # Here we expand the last log probs to `(batch_size * beam_size, - # per_node_beam_size)` so that we can add them to the current log - # probs for this timestep. This lets us maintain the log - # probability of each element on the beam. - # shape: (batch_size * beam_size, per_node_beam_size) - expanded_last_logprobs = ( - last_logprobs.unsqueeze(2) - .expand(batch_size, self.beam_size, self.per_node_beam_size) - .reshape(batch_size * self.beam_size, self.per_node_beam_size) - ) - # shape: (batch_size * beam_size, per_node_beam_size) - summed_top_logprobs = top_logprobs + expanded_last_logprobs - - # shape: (batch_size, beam_size * per_node_beam_size) - reshaped_summed = summed_top_logprobs.reshape( - batch_size, self.beam_size * self.per_node_beam_size - ) - # shape: (batch_size, beam_size * per_node_beam_size) - reshaped_predicted_classes = predicted_classes.reshape( - batch_size, self.beam_size * self.per_node_beam_size - ) - # Append the predictions to the current beam. - reshaped_beam = ( - predictions.view(batch_size * self.beam_size, 1, -1) - .repeat(1, self.per_node_beam_size, 1) - .reshape(batch_size, self.beam_size * self.per_node_beam_size, -1) - ) - # batch_size, (beam_size * per_node_beach_size), #token - reshaped_beam = torch.cat([reshaped_beam, reshaped_predicted_classes.unsqueeze(-1)], dim=-1) - - # Keep only the top `beam_size` beam indices. 
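# The lines that follow keep the best `beam_size` of the beam_size * per_node_beam_size
# candidate continuations: topk over the summed log-probs picks the surviving
# candidates and gather pulls out their token histories. A tiny self-contained
# sketch of that topk + gather pattern; all names below are illustrative only.
import torch

beam_size, candidates, seq_len = 2, 4, 3
cand_logprobs = torch.tensor([[-1.0, -0.2, -3.0, -0.5]])            # (batch=1, candidates)
cand_tokens = torch.arange(candidates * seq_len).view(1, candidates, seq_len)
top_lp, top_idx = cand_logprobs.topk(beam_size, dim=1)              # best two candidates
kept = cand_tokens.gather(1, top_idx.unsqueeze(-1).repeat(1, 1, seq_len))
assert top_idx.tolist() == [[1, 3]]                                 # highest log-probs
assert kept.tolist() == [[[3, 4, 5], [9, 10, 11]]]                  # their token histories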
- # shape: (batch_size, beam_size), (batch_size, beam_size) - restricted_beam_logprobs, restricted_beam_indices = reshaped_summed.topk( - self.beam_size - ) - predictions = reshaped_beam.gather( - 1, restricted_beam_indices.unsqueeze(-1).repeat(1,1,reshaped_beam.shape[-1]) - ) - - # shape: (batch_size, beam_size) - last_logprobs = restricted_beam_logprobs - - if not torch.isfinite(last_logprobs).all(): - warnings.warn( - "Infinite log probs encountered. Some final descriptions may not " - "make sense. This can happen when the beam size is larger than" - " the number of valid (non-zero probability) transitions that " - "the step function produces.", - RuntimeWarning, - ) - - # Optionally select best beam and its logprobs. - if only_return_best: - # shape: (batch_size, sequence_length) - predictions = predictions[:, 0, :] - last_logprobs = last_logprobs[:, 0] - num_valid = (predictions != self._eos_index).sum(dim=-1) - num_valid += (predictions == self._eos_index).sum(dim=-1) > 0 - num_valid = num_valid - begin_tokens.shape[1] - num_valid = num_valid.clip(min=1) - - last_logprobs = last_logprobs / num_valid - - return predictions, last_logprobs - - -class GRiTTextDecoder(nn.Module): - def __init__( - self, - transformer, - begin_token_id=101, - beamsearch_decode=None, - loss_type=None, - tokenizer=None, - ): - super().__init__() - self.textual = transformer - self.padding_idx = self.textual.padding_idx - - self.begin_token_id = begin_token_id - self.beamsearch_decode = beamsearch_decode - self.tokenizer = tokenizer - - if loss_type is None: - self.loss = nn.CrossEntropyLoss(ignore_index=self.padding_idx) - elif loss_type == 'smooth': - self.loss = SmoothLabelCrossEntropyLoss(ignore_index=self.padding_idx) - else: - raise NotImplementedError(loss_type) - - def forward(self, batch): - object_features = batch['object_features'] - - if self.training: - caption_token_input = batch["text_tokens"] - - output_logits = self.textual( - object_features, - caption_token_input, - ) - - if 'need_predict' in batch: - # in place should also be good, but we do not choose that for - # safety as we may use it in prediction results in future - target = batch["text_tokens"].clone() - target[batch['need_predict'] == 0] = self.padding_idx - else: - target = batch["text_tokens"] - - feat = output_logits[:, :-1].contiguous() - target = target[:, 1:].contiguous() - feat = feat.view(-1, self.textual.vocab_size) - target = target.view(-1) - - valid_mask = target != self.padding_idx - target = target[valid_mask] - feat = feat[valid_mask] - loss = self.loss(feat, target) - - return loss - else: - output_dict = self.infer(object_features) - return output_dict - - def infer(self, object_features): - batch_size = object_features.size(0) - begin_tokens = object_features.new_full( - (batch_size, 1), self.begin_token_id - ).long() - - decoding_step = functools.partial( - self.decoding_step, object_features - ) - - object_description_tokens, logprobs = self.beamsearch_decode.search( - begin_tokens, decoding_step - ) - - output_dict = { - 'predictions': object_description_tokens, - 'logprobs': logprobs, - } - - return output_dict - - def decoding_step(self, object_features, partial_text): - batch_size = object_features.shape[0] - beam_size = int(partial_text.size(0) / batch_size) - if beam_size > 1: - batch_size, num_token, channels = object_features.size() - object_features = object_features.unsqueeze(1).repeat(1, beam_size, 1, 1) - object_features = object_features.view( - batch_size * beam_size, num_token, channels - ) - - 
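# The unsqueeze/repeat/view just above tiles the per-image object features once per
# beam, so that row b * beam_size + k of the tiled tensor lines up with beam k of
# image b in `partial_text`. A minimal self-contained check of that layout; `feats`
# and `tiled` are illustrative names, not identifiers from this repository.
import torch

batch, beam, tokens, channels = 2, 3, 4, 5
feats = torch.randn(batch, tokens, channels)
tiled = feats.unsqueeze(1).repeat(1, beam, 1, 1).view(batch * beam, tokens, channels)
assert torch.equal(tiled[0], feats[0]) and torch.equal(tiled[beam - 1], feats[0])
assert torch.equal(tiled[beam], feats[1])        # first beam of the second image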
text_lengths = torch.ones_like(partial_text) - if len(text_lengths.size()) != 2: - partial_text = partial_text.unsqueeze(1) - - # shape: (batch_size * beam_size, partial_caption_length, vocab_size) - logits = self.textual( - object_features, - partial_text, - ) - - return logits[:, -1, :].float() - - -class SmoothLabelCrossEntropyLoss(nn.Module): - def __init__(self, eps=0.1, log_prefix='', ignore_index=None): - super().__init__() - self.eps = eps - self.log_soft = nn.LogSoftmax(dim=1) - self.kl = nn.KLDivLoss(reduction='none') - - self.iter = 0 - self.max_loss = 0 - self.min_loss = 0 - self.log_prefix = log_prefix - self.ignore_index = ignore_index - - def forward(self, feature, target): - feature = feature.float() - if self.ignore_index is not None: - valid_mask = target != self.ignore_index - target = target[valid_mask] - feature = feature[valid_mask] - assert target.numel() > 0 - self.iter += 1 - eps = self.eps - n_class = feature.size(1) - one_hot = torch.zeros_like(feature).scatter(1, target.view(-1, 1), 1) - one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1) - log_prb = self.log_soft(feature) - loss = self.kl(log_prb, one_hot) - return loss.sum(dim=1).mean() - diff --git a/spaces/yooch/yooch/chatgpt - macOS.command b/spaces/yooch/yooch/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/yooch/yooch/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. \ No newline at end of file diff --git a/spaces/yotamsapi/face-swap/options/swap_options.py b/spaces/yotamsapi/face-swap/options/swap_options.py deleted file mode 100644 index 2a90c349bb7078823ddd99ed96700cb2569579cd..0000000000000000000000000000000000000000 --- a/spaces/yotamsapi/face-swap/options/swap_options.py +++ /dev/null @@ -1,43 +0,0 @@ -import argparse - - -class SwapOptions(): - def __init__(self): - self.parser = argparse.ArgumentParser() - self.initialized = False - - def initialize(self): - # paths (data, models, etc...) - self.parser.add_argument('--arcface_path', type=str, - default="arcface_model/arcface/arc_res50.h5", - help='path to arcface model. 
Used to extract identity from source.') - - # Video/Image necessary models - self.parser.add_argument('--retina_path', type=str, - default="retinaface/retinaface_res50.h5", - help='path to retinaface model.') - self.parser.add_argument('--compare', type=bool, - default=True, - help='If true, concatenates the frame with the manipulated frame') - - self.parser.add_argument('--load', type=int, - default=30, - help='int of number to load checkpoint weights.') - self.parser.add_argument('--device_id', type=int, default=0, - help='which device to use') - - # logging and checkpointing - self.parser.add_argument('--log_dir', type=str, default='logs/runs/', - help='logging directory') - self.parser.add_argument('--log_name', type=str, default='affa_f', - help='name of the run, change this to track several experiments') - - self.parser.add_argument('--chkp_dir', type=str, default='checkpoints/', - help='checkpoint directory (will use same name as log_name!)') - self.initialized = True - - def parse(self): - if not self.initialized: - self.initialize() - self.opt = self.parser.parse_args() - return self.opt \ No newline at end of file diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/image-rendering.js b/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/image-rendering.js deleted file mode 100644 index 3b0d33aaf351ba3ed82f58d16a68ae356a424298..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/image-rendering.js +++ /dev/null @@ -1,48 +0,0 @@ -let Declaration = require('../declaration') - -class ImageRendering extends Declaration { - /** - * Add hack only for crisp-edges - */ - check(decl) { - return decl.value === 'pixelated' - } - - /** - * Change property name for IE - */ - prefixed(prop, prefix) { - if (prefix === '-ms-') { - return '-ms-interpolation-mode' - } - return super.prefixed(prop, prefix) - } - - /** - * Change property and value for IE - */ - set(decl, prefix) { - if (prefix !== '-ms-') return super.set(decl, prefix) - decl.prop = '-ms-interpolation-mode' - decl.value = 'nearest-neighbor' - return decl - } - - /** - * Return property name by spec - */ - normalize() { - return 'image-rendering' - } - - /** - * Warn on old value - */ - process(node, result) { - return super.process(node, result) - } -} - -ImageRendering.names = ['image-rendering', 'interpolation-mode'] - -module.exports = ImageRendering diff --git a/spaces/yukkzer/google-flan-ul2/README.md b/spaces/yukkzer/google-flan-ul2/README.md deleted file mode 100644 index b5729b7f22af0d9c9176040ef8ffc7082020dbd6..0000000000000000000000000000000000000000 --- a/spaces/yukkzer/google-flan-ul2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Google Flan Ul2 -emoji: 🏢 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.20.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yunfei0710/gpt-academic/crazy_functions/test_project/cpp/cppipc/prod_cons.h b/spaces/yunfei0710/gpt-academic/crazy_functions/test_project/cpp/cppipc/prod_cons.h deleted file mode 100644 index c9004bb8043a12e32814436baa6262a00c8ef68e..0000000000000000000000000000000000000000 --- a/spaces/yunfei0710/gpt-academic/crazy_functions/test_project/cpp/cppipc/prod_cons.h +++ /dev/null @@ -1,433 +0,0 @@ -#pragma once - -#include -#include -#include -#include -#include - -#include "libipc/def.h" - -#include "libipc/platform/detail.h" 
-#include "libipc/circ/elem_def.h" -#include "libipc/utility/log.h" -#include "libipc/utility/utility.h" - -namespace ipc { - -//////////////////////////////////////////////////////////////// -/// producer-consumer implementation -//////////////////////////////////////////////////////////////// - -template -struct prod_cons_impl; - -template <> -struct prod_cons_impl> { - - template - struct elem_t { - std::aligned_storage_t data_ {}; - }; - - alignas(cache_line_size) std::atomic rd_; // read index - alignas(cache_line_size) std::atomic wt_; // write index - - constexpr circ::u2_t cursor() const noexcept { - return 0; - } - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed)); - if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) { - return false; // full - } - std::forward(f)(&(elems[cur_wt].data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - /** - * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'. - * So we could just disconnect all connections of receiver, and return false. - */ - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(~static_cast(0u)); - return false; - } - - template - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed)); - if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::forward(f)(&(elems[cur_rd].data_)); - std::forward(out)(true); - rd_.fetch_add(1, std::memory_order_release); - return true; - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - if (circ::index_of(cur_rd) == - circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - using flag_t = std::uint64_t; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - circ::u2_t cur_ct, nxt_ct; - for (unsigned k = 0;;) { - cur_ct = ct_.load(std::memory_order_relaxed); - if (circ::index_of(nxt_ct = cur_ct + 1) == - circ::index_of(rd_.load(std::memory_order_acquire))) { - return false; // full - } - if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - auto* el = elems + circ::index_of(cur_ct); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - while (1) { - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if (cur_ct != wt_.load(std::memory_order_relaxed)) { - return true; - } - if ((~cac_ct) != cur_ct) { - return true; - } - if 
(!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) { - return true; - } - wt_.store(nxt_ct, std::memory_order_release); - cur_ct = nxt_ct; - nxt_ct = cur_ct + 1; - el = elems + circ::index_of(cur_ct); - } - return true; - } - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - auto cur_wt = wt_.load(std::memory_order_acquire); - auto id_rd = circ::index_of(cur_rd); - auto id_wt = circ::index_of(cur_wt); - if (id_rd == id_wt) { - auto* el = elems + id_wt; - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if ((~cac_ct) != cur_wt) { - return false; // empty - } - if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) { - wt_.store(cur_wt + 1, std::memory_order_release); - } - k = 0; - } - else { - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - - enum : rc_t { - ep_mask = 0x00000000ffffffffull, - ep_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - }; - - alignas(cache_line_size) std::atomic wt_; // write index - alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer - - circ::u2_t cursor() const noexcept { - return wt_.load(std::memory_order_acquire); - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) { - return false; // has not finished yet - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - epoch_ += ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - 
return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) { - if (cur == cursor()) return false; // acquire - auto* el = elems + circ::index_of(cur++); - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & ep_mask) == 0) { - std::forward(out)(true); - return true; - } - auto nxt_rc = cur_rc & ~static_cast(wrapper->connected_id()); - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)((nxt_rc & ep_mask) == 0); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - using flag_t = std::uint64_t; - - enum : rc_t { - rc_mask = 0x00000000ffffffffull, - ep_mask = 0x00ffffffffffffffull, - ep_incr = 0x0100000000000000ull, - ic_mask = 0xff000000ffffffffull, - ic_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - alignas(cache_line_size) std::atomic epoch_ { 0 }; - - circ::u2_t cursor() const noexcept { - return ct_.load(std::memory_order_acquire); - } - - constexpr static rc_t inc_rc(rc_t rc) noexcept { - return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask); - } - - constexpr static rc_t inc_mask(rc_t rc) noexcept { - return inc_rc(rc) & ~rc_mask; - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.load(std::memory_order_acquire); - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_relaxed); - circ::cc_t rem_cc = cur_rc & rc_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) { - return false; // has not finished yet - } - else if (!rem_cc) { - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if ((cur_fl != cur_ct) && cur_fl) { - return false; // full - } - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed) && - epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & rc_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return 
false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed)) { - if (epoch == epoch_.load(std::memory_order_acquire)) { - break; - } - else if (push(wrapper, std::forward(f), elems)) { - return true; - } - epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) { - auto* el = elems + circ::index_of(cur); - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if (cur_fl != ~static_cast(cur)) { - return false; // empty - } - ++cur; - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & rc_mask) == 0) { - std::forward(out)(true); - el->f_ct_.store(cur + N - 1, std::memory_order_release); - return true; - } - auto nxt_rc = inc_rc(cur_rc) & ~static_cast(wrapper->connected_id()); - bool last_one = false; - if ((last_one = (nxt_rc & rc_mask) == 0)) { - el->f_ct_.store(cur + N - 1, std::memory_order_release); - } - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)(last_one); - return true; - } - ipc::yield(k); - } - } -}; - -} // namespace ipc diff --git a/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/inference/slicer.py b/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/inference/slicer.py deleted file mode 100644 index afb31b7af1cdf8310ea42968d1857af6f15d73e4..0000000000000000000000000000000000000000 --- a/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/inference/slicer.py +++ /dev/null @@ -1,142 +0,0 @@ -import librosa -import torch -import torchaudio - - -class Slicer: - def __init__(self, - sr: int, - threshold: float = -40., - min_length: int = 5000, - min_interval: int = 300, - hop_size: int = 20, - max_sil_kept: int = 5000): - if not min_length >= min_interval >= hop_size: - raise ValueError('The following condition must be satisfied: min_length >= min_interval >= hop_size') - if not max_sil_kept >= hop_size: - raise ValueError('The following condition must be satisfied: max_sil_kept >= hop_size') - min_interval = sr * min_interval / 1000 - self.threshold = 10 ** (threshold / 20.) 
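# The assignment above turns the dB threshold into a linear amplitude via
# 10 ** (dB / 20), so it can be compared directly with the librosa RMS frames
# computed later in slice(). A small worked example of that conversion; the helper
# name `db_to_amplitude` is illustrative only.
def db_to_amplitude(db: float) -> float:
    """Convert a decibel value to a linear amplitude ratio."""
    return 10 ** (db / 20.0)

assert abs(db_to_amplitude(-40.0) - 0.01) < 1e-12    # Slicer's default threshold of -40 dB
assert abs(db_to_amplitude(-20.0) - 0.1) < 1e-12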
- self.hop_size = round(sr * hop_size / 1000) - self.win_size = min(round(min_interval), 4 * self.hop_size) - self.min_length = round(sr * min_length / 1000 / self.hop_size) - self.min_interval = round(min_interval / self.hop_size) - self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size) - - def _apply_slice(self, waveform, begin, end): - if len(waveform.shape) > 1: - return waveform[:, begin * self.hop_size: min(waveform.shape[1], end * self.hop_size)] - else: - return waveform[begin * self.hop_size: min(waveform.shape[0], end * self.hop_size)] - - # @timeit - def slice(self, waveform): - if len(waveform.shape) > 1: - samples = librosa.to_mono(waveform) - else: - samples = waveform - if samples.shape[0] <= self.min_length: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - rms_list = librosa.feature.rms(y=samples, frame_length=self.win_size, hop_length=self.hop_size).squeeze(0) - sil_tags = [] - silence_start = None - clip_start = 0 - for i, rms in enumerate(rms_list): - # Keep looping while frame is silent. - if rms < self.threshold: - # Record start of silent frames. - if silence_start is None: - silence_start = i - continue - # Keep looping while frame is not silent and silence start has not been recorded. - if silence_start is None: - continue - # Clear recorded silence start if interval is not enough or clip is too short - is_leading_silence = silence_start == 0 and i > self.max_sil_kept - need_slice_middle = i - silence_start >= self.min_interval and i - clip_start >= self.min_length - if not is_leading_silence and not need_slice_middle: - silence_start = None - continue - # Need slicing. Record the range of silent frames to be removed. - if i - silence_start <= self.max_sil_kept: - pos = rms_list[silence_start: i + 1].argmin() + silence_start - if silence_start == 0: - sil_tags.append((0, pos)) - else: - sil_tags.append((pos, pos)) - clip_start = pos - elif i - silence_start <= self.max_sil_kept * 2: - pos = rms_list[i - self.max_sil_kept: silence_start + self.max_sil_kept + 1].argmin() - pos += i - self.max_sil_kept - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - clip_start = pos_r - else: - sil_tags.append((min(pos_l, pos), max(pos_r, pos))) - clip_start = max(pos_r, pos) - else: - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - else: - sil_tags.append((pos_l, pos_r)) - clip_start = pos_r - silence_start = None - # Deal with trailing silence. - total_frames = rms_list.shape[0] - if silence_start is not None and total_frames - silence_start >= self.min_interval: - silence_end = min(total_frames, silence_start + self.max_sil_kept) - pos = rms_list[silence_start: silence_end + 1].argmin() + silence_start - sil_tags.append((pos, total_frames + 1)) - # Apply and return slices. - if len(sil_tags) == 0: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - else: - chunks = [] - # The first segment is not the beginning of the audio. - if sil_tags[0][0]: - chunks.append( - {"slice": False, "split_time": f"0,{min(waveform.shape[0], sil_tags[0][0] * self.hop_size)}"}) - for i in range(0, len(sil_tags)): - # Mark audio segment. Skip the first segment. 
- if i: - chunks.append({"slice": False, - "split_time": f"{sil_tags[i - 1][1] * self.hop_size},{min(waveform.shape[0], sil_tags[i][0] * self.hop_size)}"}) - # Mark all mute segments - chunks.append({"slice": True, - "split_time": f"{sil_tags[i][0] * self.hop_size},{min(waveform.shape[0], sil_tags[i][1] * self.hop_size)}"}) - # The last segment is not the end. - if sil_tags[-1][1] * self.hop_size < len(waveform): - chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1] * self.hop_size},{len(waveform)}"}) - chunk_dict = {} - for i in range(len(chunks)): - chunk_dict[str(i)] = chunks[i] - return chunk_dict - - -def cut(audio_path, db_thresh=-30, min_len=5000): - audio, sr = librosa.load(audio_path, sr=None) - slicer = Slicer( - sr=sr, - threshold=db_thresh, - min_length=min_len - ) - chunks = slicer.slice(audio) - return chunks - - -def chunks2audio(audio_path, chunks): - chunks = dict(chunks) - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - result = [] - for k, v in chunks.items(): - tag = v["split_time"].split(",") - if tag[0] != tag[1]: - result.append((v["slice"], audio[int(tag[0]):int(tag[1])])) - return result, sr diff --git a/spaces/zhicheng127/White-box-Cartoonization/app.py b/spaces/zhicheng127/White-box-Cartoonization/app.py deleted file mode 100644 index c55ced56bd87a85f59d1c8ef84b7eca87422720f..0000000000000000000000000000000000000000 --- a/spaces/zhicheng127/White-box-Cartoonization/app.py +++ /dev/null @@ -1,108 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations -import argparse -import functools -import os -import pathlib -import sys -from typing import Callable -import uuid - -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image - -from io import BytesIO -from wbc.cartoonize import Cartoonize - -ORIGINAL_REPO_URL = 'https://github.com/SystemErrorWang/White-box-Cartoonization' -TITLE = 'SystemErrorWang/White-box-Cartoonization' -DESCRIPTION = f"""This is a demo for {ORIGINAL_REPO_URL}. 
- -""" -ARTICLE = """ - -""" - -SAFEHASH = [x for x in "0123456789-abcdefghijklmnopqrstuvwxyz_ABCDEFGHIJKLMNOPQRSTUVWXYZ"] -def compress_UUID(): - ''' - 根据http://www.ietf.org/rfc/rfc1738.txt,由uuid编码扩bai大字符域生成du串 - 包括:[0-9a-zA-Z\-_]共64个 - 长度:(32-2)/3*2=20 - 备注:可在地球上人zhi人都用,使用100年不重复(2^120) - :return:String - ''' - row = str(uuid.uuid4()).replace('-', '') - safe_code = '' - for i in range(10): - enbin = "%012d" % int(bin(int(row[i * 3] + row[i * 3 + 1] + row[i * 3 + 2], 16))[2:], 10) - safe_code += (SAFEHASH[int(enbin[0:6], 2)] + SAFEHASH[int(enbin[6:12], 2)]) - safe_code = safe_code.replace('-', '') - return safe_code - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--theme', type=str) - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - parser.add_argument('--allow-screenshot', action='store_true') - return parser.parse_args() - -def run( - image, - cartoonize : Cartoonize -) -> tuple[PIL.Image.Image]: - - out_path = compress_UUID()+'.png' - cartoonize.run_sigle(image.name, out_path) - - return PIL.Image.open(out_path) - - -def main(): - gr.close_all() - - args = parse_args() - - cartoonize = Cartoonize(os.path.join(os.path.dirname(os.path.abspath(__file__)),'wbc/saved_models/')) - - func = functools.partial(run, cartoonize=cartoonize) - func = functools.update_wrapper(func, run) - - gr.Interface( - func, - [ - gr.inputs.Image(type='file', label='Input Image'), - ], - [ - gr.outputs.Image( - type='pil', - label='Result'), - ], - # examples=examples, - theme=args.theme, - title=TITLE, - description=DESCRIPTION, - article=ARTICLE, - allow_screenshot=args.allow_screenshot, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/models/modules/conv_transformer.py b/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/models/modules/conv_transformer.py deleted file mode 100644 index 6fcbfe4acfc2a30e12eafd2ed74a6e7b5d25641d..0000000000000000000000000000000000000000 --- a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/models/modules/conv_transformer.py +++ /dev/null @@ -1,128 +0,0 @@ -import torch -import torch.nn.functional as F - -from torch import nn, einsum -from einops import rearrange - - -class PreNorm(nn.Module): - def __init__(self, dim, fn): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.fn = fn - - def forward(self, x, **kwargs): - return self.fn(self.norm(x), **kwargs) - - -class GELU(nn.Module): - def forward(self, input): - return F.gelu(input) - - -class Attend(nn.Module): - - def __init__(self, dim=None): - super().__init__() - self.dim = dim - - def forward(self, input): - return F.softmax(input, dim=self.dim, dtype=input.dtype) - - -class FeedForward(nn.Module): - def __init__(self, dim, hidden_dim, dropout=0.): - super().__init__() - self.net = nn.Sequential( - nn.Linear(dim, hidden_dim), - GELU(), - nn.Dropout(dropout), - nn.Linear(hidden_dim, dim), - nn.Dropout(dropout) - ) - - def forward(self, x): - return self.net(x) - - -class Attention(nn.Module): - def __init__(self, dim, heads=8, 
dim_head=64, dropout=0.): - super().__init__() - inner_dim = dim_head * heads - project_out = not (heads == 1 and dim_head == dim) - - self.heads = heads - self.scale = dim_head ** -0.5 - - self.attend = Attend(dim=-1) - self.to_qkv = nn.Linear(dim, inner_dim * 3, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, dim), - nn.Dropout(dropout) - ) if project_out else nn.Identity() - - def forward(self, x): - b, n, _, h = *x.shape, self.heads - qkv = self.to_qkv(x).chunk(3, dim=-1) - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=h), qkv) - dots = einsum('b h i d, b h j d -> b h i j', q, k) * self.scale - attn = self.attend(dots) - out = einsum('b h i j, b h j d -> b h i d', attn, v) - out = rearrange(out, 'b h n d -> b n (h d)') - return self.to_out(out) - - -class Conv(nn.Module): - def __init__(self, dim, dropout=0.): - super().__init__() - self.dim = dim - self.net = nn.Sequential( - nn.Conv1d(dim, dim, kernel_size=3, stride=1, padding=0), - nn.Dropout(dropout) - ) - - def forward(self, x): - x = x.transpose(1, 2) - x = torch.cat([x[..., -1:], x, x[..., :1]], dim=-1) - x = self.net(x) - return x.transpose(1, 2) - - -class ConvTransformer(nn.Module): - def __init__(self, dim, depth, heads, dim_head, mlp_dim, dropout=0.): - super().__init__() - self.layers = nn.ModuleList([]) - for _ in range(depth): - self.layers.append(nn.ModuleList([ - PreNorm(dim, Attention(dim, heads=heads, dim_head=dim_head, dropout=dropout)), - PreNorm(dim, FeedForward(dim, mlp_dim, dropout=dropout)), - PreNorm(dim, Conv(dim, dropout=dropout)) - ])) - - def forward(self, x): - for attn, ff, cov in self.layers: - x = attn(x) + x - x = ff(x) + x - x = cov(x) + x - return x - - -if __name__ == '__main__': - token_dim = 1024 - toke_len = 256 - - transformer = ConvTransformer(dim=token_dim, - depth=6, - heads=16, - dim_head=64, - mlp_dim=2048, - dropout=0.1) - - total = sum(p.numel() for p in transformer.parameters()) - trainable = sum(p.numel() for p in transformer.parameters() if p.requires_grad) - print('parameter total:{:,}, trainable:{:,}'.format(total, trainable)) - - input = torch.randn(1, toke_len, token_dim) - output = transformer(input) - print(output.shape) diff --git a/spaces/zhuj/goodwork/README.md b/spaces/zhuj/goodwork/README.md deleted file mode 100644 index c00762564a2b98b3d49f6f2b6e78fdee1f0cc077..0000000000000000000000000000000000000000 --- a/spaces/zhuj/goodwork/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Goodwork -emoji: 🏢 -colorFrom: pink -colorTo: yellow -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ziguo/Real-ESRGAN/inference_realesrgan.py b/spaces/ziguo/Real-ESRGAN/inference_realesrgan.py deleted file mode 100644 index 6d5ff4d188faaa16c0131be69a08fd22fb608f80..0000000000000000000000000000000000000000 --- a/spaces/ziguo/Real-ESRGAN/inference_realesrgan.py +++ /dev/null @@ -1,128 +0,0 @@ -import argparse -import cv2 -import glob -import os -from basicsr.archs.rrdbnet_arch import RRDBNet - -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch import SRVGGNetCompact - - -def main(): - """Inference demo for Real-ESRGAN. 
- """ - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--input', type=str, default='inputs', help='Input image or folder') - parser.add_argument( - '-n', - '--model_name', - type=str, - default='RealESRGAN_x4plus', - help=('Model names: RealESRGAN_x4plus | RealESRNet_x4plus | RealESRGAN_x4plus_anime_6B | RealESRGAN_x2plus' - 'RealESRGANv2-anime-xsx2 | RealESRGANv2-animevideo-xsx2-nousm | RealESRGANv2-animevideo-xsx2' - 'RealESRGANv2-anime-xsx4 | RealESRGANv2-animevideo-xsx4-nousm | RealESRGANv2-animevideo-xsx4')) - parser.add_argument('-o', '--output', type=str, default='results', help='Output folder') - parser.add_argument('-s', '--outscale', type=float, default=4, help='The final upsampling scale of the image') - parser.add_argument('--suffix', type=str, default='out', help='Suffix of the restored image') - parser.add_argument('-t', '--tile', type=int, default=0, help='Tile size, 0 for no tile during testing') - parser.add_argument('--tile_pad', type=int, default=10, help='Tile padding') - parser.add_argument('--pre_pad', type=int, default=0, help='Pre padding size at each border') - parser.add_argument('--face_enhance', action='store_true', help='Use GFPGAN to enhance face') - parser.add_argument('--half', action='store_true', help='Use half precision during inference') - parser.add_argument( - '--alpha_upsampler', - type=str, - default='realesrgan', - help='The upsampler for the alpha channels. Options: realesrgan | bicubic') - parser.add_argument( - '--ext', - type=str, - default='auto', - help='Image extension. Options: auto | jpg | png, auto means using the same extension as inputs') - args = parser.parse_args() - - # determine models according to model names - args.model_name = args.model_name.split('.')[0] - if args.model_name in ['RealESRGAN_x4plus', 'RealESRNet_x4plus']: # x4 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - elif args.model_name in ['RealESRGAN_x4plus_anime_6B']: # x4 RRDBNet model with 6 blocks - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4) - netscale = 4 - elif args.model_name in ['RealESRGAN_x2plus']: # x2 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2) - netscale = 2 - elif args.model_name in [ - 'RealESRGANv2-anime-xsx2', 'RealESRGANv2-animevideo-xsx2-nousm', 'RealESRGANv2-animevideo-xsx2' - ]: # x2 VGG-style model (XS size) - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=2, act_type='prelu') - netscale = 2 - elif args.model_name in [ - 'RealESRGANv2-anime-xsx4', 'RealESRGANv2-animevideo-xsx4-nousm', 'RealESRGANv2-animevideo-xsx4' - ]: # x4 VGG-style model (XS size) - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu') - netscale = 4 - - # determine model paths - model_path = os.path.join('.', args.model_name + '.pth') - if not os.path.isfile(model_path): - model_path = os.path.join('.', args.model_name + '.pth') - if not os.path.isfile(model_path): - raise ValueError(f'Model {args.model_name} does not exist.') - - # restorer - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, - model=model, - tile=args.tile, - tile_pad=args.tile_pad, - pre_pad=args.pre_pad, - half=args.half) - - if args.face_enhance: # Use GFPGAN for face enhancement - from gfpgan import GFPGANer - face_enhancer = GFPGANer( - 
model_path='https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth', - upscale=args.outscale, - arch='clean', - channel_multiplier=2, - bg_upsampler=upsampler) - os.makedirs(args.output, exist_ok=True) - - if os.path.isfile(args.input): - paths = [args.input] - else: - paths = sorted(glob.glob(os.path.join(args.input, '*'))) - - for idx, path in enumerate(paths): - imgname, extension = os.path.splitext(os.path.basename(path)) - print('Testing', idx, imgname) - - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) - if len(img.shape) == 3 and img.shape[2] == 4: - img_mode = 'RGBA' - else: - img_mode = None - - try: - if args.face_enhance: - _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True) - else: - output, _ = upsampler.enhance(img, outscale=args.outscale) - except RuntimeError as error: - print('Error', error) - print('If you encounter CUDA out of memory, try to set --tile with a smaller number.') - else: - if args.ext == 'auto': - extension = extension[1:] - else: - extension = args.ext - if img_mode == 'RGBA': # RGBA images should be saved in png format - extension = 'png' - save_path = os.path.join(args.output, f'{imgname}_{args.suffix}.{extension}') - cv2.imwrite(save_path, output) - - -if __name__ == '__main__': - main() diff --git a/spaces/zomehwh/sovits-teio/README.md b/spaces/zomehwh/sovits-teio/README.md deleted file mode 100644 index 5adb27747be14e2b92906f3ce129aba1f626a9e2..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/sovits-teio/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sovits Teio -emoji: 🎙️ -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: sayashi/sovits-models ---- diff --git a/spaces/zxc314/vits-uma-genshin-honkai/attentions.py b/spaces/zxc314/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/zxc314/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = 
self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, 
x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). 
- x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x
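# FFN._causal_padding above pads the time axis only on the left by kernel_size - 1,
# so a stride-1 Conv1d output at step t depends only on inputs at steps <= t, whereas
# _same_padding splits the padding on both sides and can look ahead. A minimal
# self-contained check of the causal variant; `conv`, `x` and `y_full` are
# illustrative names, not identifiers from this repository.
import torch
import torch.nn.functional as F
from torch import nn

kernel_size = 3
conv = nn.Conv1d(1, 1, kernel_size)
x = torch.randn(1, 1, 6)                                  # (batch, channels, time)
y_full = conv(F.pad(x, (kernel_size - 1, 0)))             # causal: left-pad only
x_cut = x.clone()
x_cut[..., 4:] = 0.0                                      # perturb the "future" of step 3
y_cut = conv(F.pad(x_cut, (kernel_size - 1, 0)))
assert torch.allclose(y_full[..., :4], y_cut[..., :4])    # steps 0..3 are unaffected
assert y_full.shape == x.shape                            # length is preserved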