diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/2020 Design Torrent 47 How to Install Activate and Use the Most Advanced Design Tool.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/2020 Design Torrent 47 How to Install Activate and Use the Most Advanced Design Tool.md deleted file mode 100644 index d7dea2ff9a4013c1b350ef55a994290f2e6c16d0..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/2020 Design Torrent 47 How to Install Activate and Use the Most Advanced Design Tool.md +++ /dev/null @@ -1,119 +0,0 @@ - -

2020 Design Torrent 47: What You Need to Know

-

If you are looking for software that can help you create stunning kitchen and bathroom designs, you might have heard of 2020 Design. This software is one of the most popular and powerful tools for interior designers, contractors, and homeowners. But what if you don't want to pay for the full version of the software? Is there a way to get it for free? That's where 2020 Design Torrent 47 comes in. In this article, we will tell you everything you need to know about this torrent, including what it is, how it works, why you might want to use it, and how to use it safely and effectively.

-

2020 Design Torrent 47


DOWNLOAD https://byltly.com/2uKyz4



-

What is 2020 Design?

-

A brief introduction to the software

-

2020 Design is software that allows you to create realistic and interactive 3D renderings of kitchen and bathroom spaces. You can choose from thousands of products, materials, colors, and styles from leading manufacturers and brands. You can also customize every detail of your design, from cabinets and countertops to faucets and lighting. You can even add accessories, appliances, and furniture to complete your vision.

-

With 2020 Design, you can also generate accurate floor plans, elevations, and perspectives of your design, and create polished presentations and reports for your clients or yourself. You can export your design in various formats, such as PDF, JPG, DWG, or DXF, share it online, or print it out.

-

The features and benefits of 2020 Design

-

Some of the features and benefits of using 2020 Design are:

- -

What is a torrent?

-

A brief explanation of how torrents work

-

A torrent is a file that contains information about other files that are shared by users on a peer-to-peer (P2P) network. A P2P network is a system where users can share files directly with each other without using a central server. To download a file from a P2P network, you need a torrent client, which is a program that connects you to other users who have the file you want. The torrent client then downloads small pieces of the file from different users until you have the complete file.

-

The advantages and disadvantages of using torrents

-

Some of the advantages of using torrents are:

- -

Some of the disadvantages of using torrents are:

-

2020 Design Software Crack Torrent 47
-Download 2020 Design Full Version Torrent 47
-2020 Design Kitchen and Bathroom Torrent 47
-How to Install 2020 Design Torrent 47
-2020 Design License Key Generator Torrent 47
-2020 Design v12 Free Download Torrent 47
-2020 Design Catalogs and Cloud Torrent 47
-2020 Design Training Videos Torrent 47
-2020 Design System Requirements Torrent 47
-2020 Design Support and Updates Torrent 47
-Best Alternatives to 2020 Design Torrent 47
-2020 Design Reviews and Ratings Torrent 47
-How to Use 2020 Design Torrent 47
-2020 Design Tips and Tricks Torrent 47
-2020 Design Features and Benefits Torrent 47
-How to Uninstall 2020 Design Torrent 47
-How to Activate 2020 Design Torrent 47
-How to Update 2020 Design Torrent 47
-How to Import and Export in 2020 Design Torrent 47
-How to Customize and Personalize in 2020 Design Torrent 47
-How to Create and Edit in 2020 Design Torrent 47
-How to Render and Print in 2020 Design Torrent 47
-How to Share and Collaborate in 2020 Design Torrent 47
-How to Troubleshoot and Fix in 2020 Design Torrent 47
-How to Optimize and Enhance in 2020 Design Torrent 47
-Pros and Cons of Using 2020 Design Torrent 47
-Comparison of Different Versions of 2020 Design Torrent 47
-Frequently Asked Questions About 2020 Design Torrent 47
-User Testimonials and Feedback About 2020 Design Torrent 47
-Case Studies and Success Stories About Using 2020 Design Torrent

- -

Why use 2020 Design Torrent 47?

-

The reasons to download 2020 Design via torrent

-

If you are interested in using 2020 Design but don't want to pay for it, you might consider downloading it via torrent. Some of the reasons why you might want to do this are:

- -

The risks and challenges of using 2020 Design Torrent 47

-

However, downloading 2020 Design via torrent also comes with some risks and challenges. Some of them are:

- -

How to use 2020 Design Torrent 47?

-

The steps to download and install 2020 Design Torrent 47

-

If you decide to download and install 2020 Design Torrent 47, here are some steps that you need to follow:

-
    -
  1. Find a reliable torrent website that offers 2020 Design Torrent 47. You can use search engines or online forums to find such websites. Some examples are SuprBay, Peatix, or Tealfeed. Make sure to check the reviews, ratings, comments, or feedback from other users before downloading anything.
  2. Download a torrent client that can handle torrent files. Some examples are uTorrent, BitTorrent, or qBittorrent. Install the torrent client on your computer and run it.
  3. Download the torrent file for 2020 Design Torrent 47 from the torrent website. Open the torrent file with your torrent client and choose where you want to save the file on your computer. Wait for the download to finish.
  4. Extract the downloaded file using a tool like WinRAR or WinZip. You should see a folder containing several files related to 2020 Design. Look for an executable file (.exe) that can install the software on your computer. Run the executable file as an administrator and follow the instructions on the screen.
  5. Activate the software using a crack, patch, keygen, serial number, or license key that is provided in the folder or on the torrent website. This will allow you to bypass any security checks or verification processes that might prevent you from using the software without paying for it.
-

The tips and tricks to optimize 2020 Design Torrent 47

-

To make sure that you get the best experience out of using 2020 Design Torrent 47, here are some tips and tricks that you can follow:

- -

Conclusion

-

A summary of the main points

-

In conclusion, 2020 Design Torrent 47 is a file that allows you to download and install 2020 Design, software that helps you create amazing kitchen and bathroom designs. You might want to use this torrent if you want to try out the software for free or access its latest features. However, you also need to be aware of the risks and challenges of using this torrent, such as viruses, malware, legal issues, or compatibility problems. You also need to follow some steps and tips to download and install 2020 Design Torrent 47 safely and effectively.

-

A call to action for the readers

-

If you are interested in using 2020 Design Torrent 47, you can find it on various torrent websites online. However, we recommend that you use it with caution and responsibility. We also encourage you to support the original creators of 2020 Design by buying the software if you find it useful and valuable. 2020 Design is a great tool for interior design enthusiasts and professionals alike. You can learn more about it on its official website.

-

FAQs

-

Q1: Is 2020 Design Torrent 47 legal?

-

A1: No, it is not. Downloading or using pirated software is illegal in most countries and regions. You might face fines, lawsuits, or even jail time if you are caught doing so. You might also violate the terms and conditions of the software or the intellectual property rights of the developers or manufacturers.

-

Q2: Is 2020 Design Torrent 47 safe?

-

A2: Not necessarily. There is no guarantee that the file you download is authentic, complete, or virus-free. You might download a fake or corrupted file that does not work or damages your computer. You might also download a file that contains viruses or malware that infect your computer or steal your data. You might also expose your personal information or IP address to other users or authorities.

-

Q3: Is 2020 Design Torrent 47 compatible with my system?

-

A3: It depends. Different versions of 2020 Design might have different system requirements and specifications. You need to check if your computer meets the minimum requirements for running the software before downloading or installing it. You also need to make sure that your computer has enough space and memory to handle the software.

-

Q4: How can I update 2020 Design Torrent 47?

-

A4: You might not be able to. Most pirated software does not have access to official updates, patches, or fixes from the developers or manufacturers. You might miss out on important features, improvements, or bug fixes that are available only for licensed users. You might also encounter errors, glitches, or crashes when using the software.

-

Q5: Where can I find more resources on 2020 Design?

-

A5: You can find more resources on 2020 Design on its official website. There you can find more information about the software, its features, its pricing, its support, its training, and its community. You can also watch videos, read blogs, download catalogs, request demos, or contact sales representatives.

-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Best Site To Download Cracked Pc Games For Free LINK.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Best Site To Download Cracked Pc Games For Free LINK.md deleted file mode 100644 index 0f87819b4829c6dce27db0c41eeaefbd36f43852..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Best Site To Download Cracked Pc Games For Free LINK.md +++ /dev/null @@ -1,10 +0,0 @@ - -

Best Site to Download Cracked PC Games for Free in 2023

-

If you are a PC gamer, you might be looking for a way to download cracked PC games for free in 2023. Cracked PC games are games that have been modified to bypass the copy protection or DRM (digital rights management) system and allow you to play them without paying for them. However, downloading cracked PC games is not a good idea for several reasons. First of all, downloading cracked PC games is illegal and unethical, and you could face legal consequences if you get caught. Second, cracked PC games often contain viruses, malware, or spyware that can harm your computer or steal your personal information. Third, cracked PC games usually do not work properly or have limited functionality, and you might miss out on important updates or bug fixes.

-

best site to download cracked pc games for free


Download File ✫✫✫ https://byltly.com/2uKyF6



-

So, what is the best site to download cracked PC games for free in 2023? The answer is simple: there is no such thing. The only way to download PC games legally and safely is to buy them from the official website or an authorized dealer. This way, you will get the full version of the game with all the features and benefits, as well as lifetime support and updates. You will also support the developers who worked hard to create this amazing product.

-

However, if you still want to try PC games before buying them, there is a solution: you can download the demo version or the free trial from the official website or other online platforms. The demo version or free trial lets you test some of the features and gameplay of the game for a limited time at no cost. You can also save and load your progress, but you cannot export it. This way, you can see for yourself if the game is worth your money and if it suits your needs and preferences.

-

In conclusion, PC games are a great source of entertainment and fun that can help you relax and enjoy yourself. However, they are not legitimately available as free cracked downloads in 2023 or any other year. The only way to download PC games legally and safely is to buy them from the official website or an authorized dealer. Alternatively, you can download the demo version or the free trial and try them for a limited time. We hope this article has helped you understand why downloading cracked PC games is not a good idea and how to download PC games legally and safely.

-

ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gta Eflc No Cd Crack Razor The Benefits of Playing GTA EFLC Without a CD.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gta Eflc No Cd Crack Razor The Benefits of Playing GTA EFLC Without a CD.md deleted file mode 100644 index 0112d264c9fb7131b4d23630392823fe9c90f462..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gta Eflc No Cd Crack Razor The Benefits of Playing GTA EFLC Without a CD.md +++ /dev/null @@ -1,130 +0,0 @@ -
-

Waves License Center Crack: What You Need to Know

-

If you are a music producer, engineer, or enthusiast, you probably have heard of Waves plugins. Waves is one of the world's leading developers of audio plugins and signal processors for the professional and consumer electronics audio markets. They offer a wide range of products, from compressors, equalizers, reverbs, delays, to virtual instruments, mastering tools, and more.

-

Waves License Center Crack


Download ->>> https://byltly.com/2uKxh2



-

But how do you manage your Waves plugins and licenses? That's where Waves License Center comes in. Waves License Center is an application that allows you to activate, deactivate, recover, and transfer your Waves licenses. It also lets you update your plugins and access your Waves account.

-

However, some people might be tempted to use a cracked version of Waves License Center instead of paying for a legitimate one. This might seem like a good idea at first, but it can actually cause more harm than good. In this article, we will explain how Waves License Center works, what are the risks of using a crack, what are the benefits of using a legitimate one, and how to get and activate your Waves licenses.

-

How Waves License Center Works

-

Waves License Center is part of Waves Central, which is the main hub for managing your Waves products. You can download Waves Central for free from the official website. Once you install it on your computer, you can launch it and access Waves License Center.

-

How to fix Waves License Center error
-Waves License Center offline installer download
-Waves License Center activation code generator
-Waves License Center not detecting device
-Waves License Center alternative software
-Waves License Center crack for Mac OS
-Waves License Center crack for Windows 10
-Waves License Center crack for Linux
-Waves License Center crack for Android
-Waves License Center crack for iOS
-Waves License Center free download full version
-Waves License Center license key recovery
-Waves License Center license transfer tutorial
-Waves License Center offline activation guide
-Waves License Center online activation bypass
-Waves License Center troubleshooting tips
-Waves License Center update patch download
-Waves License Center uninstall instructions
-Waves License Center compatibility issues
-Waves License Center customer support contact
-Waves License Center hacked version download
-Waves License Center serial number finder
-Waves License Center registration code crack
-Waves License Center keygen download link
-Waves License Center torrent file download
-Waves License Center review and rating
-Waves License Center features and benefits
-Waves License Center system requirements
-Waves License Center price and discount
-Waves License Center coupon code and promo code
-Waves License Center refund policy and guarantee
-Waves License Center testimonials and feedback
-Waves License Center comparison with other software
-Waves License Center pros and cons analysis
-Waves License Center best practices and tips
-Waves License Center FAQs and answers
-Waves License Center video tutorial and demo
-Waves License Center blog posts and articles
-Waves License Center forum and community
-Waves License Center social media and news
-How to use Waves plugins without license center
-How to install waves plugins without license center
-How to activate waves plugins without license center
-How to update waves plugins without license center
-How to uninstall waves plugins without license center
-How to transfer waves plugins without license center
-How to backup waves plugins without license center
-How to restore waves plugins without license center
-How to troubleshoot waves plugins without license center
-How to get waves plugins without license center for free

-

Waves License Center allows you to activate your licenses directly on your computer or on a USB flash drive. You can also deactivate your licenses from one device and move them to another. This way, you can use your plugins on different computers or in different studios without having to buy multiple licenses.

-

To activate your licenses, you need to log in to your Waves account using your email and password. If you don't have an account yet, you can create one for free. Then, you need to select the licenses that you want to activate and choose where to activate them: either on your computer or on a USB flash drive. You can also use the Easy Activate option, which will automatically activate all your available licenses on your computer.

-

To deactivate your licenses, you need to select the device that has the licenses that you want to deactivate and click on Deactivate Licenses. You can also use the Easy Deactivate option, which will automatically deactivate all your licenses on your device.

-

To recover your licenses, use the Recover option if you lose access to your device or it gets damaged or stolen. This will deactivate all your licenses from that device and make them available for activation again.

-

To transfer your licenses, use the Move option when you want to move them from one device to another without deactivating them first. This will save you time and hassle.

-

The Risks of Using a Waves License Center Crack

-

Some people might think that using a cracked version of Waves License Center is a smart way to save money and get access to all the plugins they want. However, this is actually a very risky and irresponsible thing to do. Here are some of the reasons why:

- -

The Benefits of Using a Legitimate Waves License Center

-

On the other hand, using a legitimate version of Waves License Center has many benefits that outweigh the cost. Here are some of them:

- -

How to Get a Legitimate Waves License Center

-

If you are convinced that using a legitimate version of Waves License Center is the best way to go, here are some options for getting one:

- -

How to Activate Your Waves Licenses

-

Once you have purchased or subscribed to any of their products, here is how you can activate your licenses using Waves Central:

-
    -
  1. Download and install Waves Central from their website.
  2. Launch it and log in with your email and password.
  3. Select Offline Installer at the top left corner.
  4. Select Install Products at the top right corner.
  5. Select My Products at the left sidebar.
  6. Select all the products that you want to install and click Install at the bottom right corner.
  7. Select where you want to install them: either on Local Disk (C:) or on an external drive (if connected).
  8. Wait for the installation process to finish.
  9. Select Licenses at the top left corner.
  10. Select Activate Licenses at the top right corner.
  11. Select all the licenses that you want to activate and click Activate at the bottom right corner.
  12. Select where you want to activate them: either on this computer (Local Licenses) or on an external drive (if connected).
  13. Wait for the activation process to finish.
-

How to Use Your Plugins

-

Now that you have activated your licenses, here are some tips and tricks on how to use your plugins effectively and creatively:

Plugin | Tip/Trick
R-Comp | Use the ARC (Auto Release Control) feature to automatically adjust the release time according to the input signal. This can help you achieve a more natural and consistent compression.
CLA-2A | Use the Compress/Limit switch to change the compression ratio and the knee shape. Compress mode has a 3:1 ratio and a soft knee, while Limit mode has a 100:1 ratio and a hard knee. Compress mode is good for smooth and gentle compression, while Limit mode is good for aggressive and tight compression.
API 2500 | Use the Thrust filter to change the frequency response of the detector circuit. This can affect how the compressor reacts to different parts of the spectrum. The three options are Normal, Medium, and High. Normal has a flat response, Medium has a high-pass filter that reduces low frequencies, and High has a band-pass filter that boosts mid frequencies.
SSL G-Master Buss Compressor | Use the Auto Fade feature to create a smooth fade-out at the end of your mix. You can set the fade time from 1 to 60 seconds and activate it by clicking on the Fade button. You can also use the Auto Fade feature as a creative effect by automating it during your mix.
F6 | Use the dynamic EQ bands to apply compression or expansion to specific frequency ranges. You can adjust the threshold, range, attack, release, and Q parameters for each band. You can also solo or bypass each band for easier monitoring.
OVox | Use the Note Mapper to create custom scales and chords for your vocal harmonies. You can drag and drop notes on the grid to assign them to different MIDI notes. You can also use the Scale and Chord menus to select from preset options.
PuigTec EQs | Use the Boost/Cut controls to create resonant peaks or dips at specific frequencies. The Boost and Cut controls work independently, so you can boost and cut at the same frequency for a unique EQ curve. This can help you add color and character to your sound.
Abbey Road TG Mastering Chain | Use the Tape Delay module to add some vintage delay effects to your mix. You can adjust the delay time, feedback, wow, flutter, and saturation parameters. You can also use the Sync button to sync the delay time to your DAW tempo.
-

Conclusion

-

In conclusion, Waves License Center is an essential tool for managing your Waves plugins and licenses. It allows you to activate, deactivate, recover, and transfer your licenses with ease and flexibility. However, using a cracked version of Waves License Center is not a smart idea, as it can expose you to many risks and disadvantages. Instead, you should use a legitimate version of Waves License Center that will give you many benefits and advantages. You should also learn how to use your plugins effectively and creatively to get the best results from your music production.

-

FAQs

- -

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Aalavandhan Hd 1080p Movie Torrent Download Dont Miss the Chance to See Kamal Haasan in Dual Roles.md b/spaces/1gistliPinn/ChatGPT4/Examples/Aalavandhan Hd 1080p Movie Torrent Download Dont Miss the Chance to See Kamal Haasan in Dual Roles.md deleted file mode 100644 index 1197e13866428a02eb91df990f0757d79dcbf450..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Aalavandhan Hd 1080p Movie Torrent Download Dont Miss the Chance to See Kamal Haasan in Dual Roles.md +++ /dev/null @@ -1,5 +0,0 @@ - -

Downloading torrents is risky: your IP address and private data can be actively tracked by your ISP and government agencies. Protect yourself from expensive lawsuits and fines now by using a VPN. It is the only way to download torrents fully anonymously, since it encrypts all your traffic and keeps zero logs.

-

Aalavandhan Hd 1080p Movie Torrent Download


Download === https://imgfil.com/2uy1oj



aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Beyonce Dangerously In Love Album Zip.md b/spaces/1gistliPinn/ChatGPT4/Examples/Beyonce Dangerously In Love Album Zip.md deleted file mode 100644 index f63c32f1f38085322eaccd2a347c3e30a4d96a4a..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Beyonce Dangerously In Love Album Zip.md +++ /dev/null @@ -1,9 +0,0 @@ - -

The album, which was released on June 16, 2013 in the United Kingdom, includes 13 songs, including collaborations with rappers Jay-Z and Kanye West. The album earned Knowles an NAACP Image Award for Outstanding Literary Work. "Pray You Catch Me," featuring John Legend, was the album's lead single.

-

beyonce dangerously in love album zip


Downloadhttps://imgfil.com/2uy08T



-

In "Blue," Beyoncé takes a break from the rollicking party. Instead, she opts for a bittersweet ballad that somehow manages to sound modern and retro. Beyoncé gets to the point: are you looking at me? I see you lookin' at me. Oooooo! Am I pretty? Yes, I'm pretty. Wanna dance with me? What you thinkin', baby?

-

Bey comes off as a very confident artist, perhaps with good reason. She knew she had a very strong debut album and was prepared for the high expectations she knew would be held for her as a pop star. She knew the press would be looking to her, and she made sure she was ready. "I like to be challenged and I think I'm the best challenge and so I like to be challenged. When I was younger, I didn't think I was. I was too timid and shy and I didn't really go out and try new things. I think I have a little bit of the confidence and the star quality to stand up for myself and not take any crap," she told MTV News in 2003. "I think I'm pretty strong, that's what I've been told. And I think I'm pretty strong, but I also think I'm really good at letting my guard down. And I like to be loved."

-

That's not to say that she didn't have her own set of challenges. With Beyoncé, every promo was like a whirlwind. She was an artist who was driven and driven hard, but she also had some real moments of weakness. The singer told VH1's Behind the Music she had serious bouts of fatigue and stress, and she had to work extremely hard on the new album to make sure that her personality didn't get lost in the mix. But like many artists that have had a successful debut, Dangerously in Love didn't sell many copies. The first single, "Crazy in Love," barely made the radio airwaves and was a commercial flop. The album flopped, too, with the only real success coming from the R&B/soul-pop crossover smash "Irreemplazable." That was great for the singer, but as the tour neared its conclusion, it was clear that Dangerously in Love wasn't going to be the runaway smash the industry was expecting. Soon, Beyoncé would be a household name, but as she would soon find out, that doesn't always make it easier to navigate the waters of the pop world.

-

899543212b
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Shab Hd 720p Full Movie In Hindi.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Shab Hd 720p Full Movie In Hindi.md deleted file mode 100644 index 5537631c096409bf1876bf321ea22faf70e17f9d..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Shab Hd 720p Full Movie In Hindi.md +++ /dev/null @@ -1,61 +0,0 @@ -
-

Download Shab Hd 720p Full Movie In Hindi: A Review

- -

Shab is a 2017 Hindi movie directed by Onir and starring Raveena Tandon, Ashish Bisht, Arpita Chatterjee, and Sanjay Suri. The movie is a romantic drama that revolves around several personalities who are looking for happiness and love in a complex, unforgiving, and cold city. The movie explores themes such as identity, sexuality, loneliness, and betrayal.

-

Download Shab Hd 720p Full Movie In Hindi


DOWNLOAD > https://imgfil.com/2uxXbC



- -

The movie was released on June 30, 2017 and received mixed reviews from critics and audiences. Some praised the movie for its realistic portrayal of urban life and relationships, while others criticized it for its slow pace, lack of depth, and poor execution. The movie was also a box office flop, earning only Rs. 1.15 crore against a budget of Rs. 6 crore.

- -

Download Shab Hd 720p Full Movie In Hindi: The Plot

- -

The movie follows the lives of four characters who are connected by fate and desire. Afzar (Bisht) is a young man from a small town who comes to Delhi to become a model. He meets Sonal (Tandon), a wealthy and influential fashion patron who offers him a chance to fulfill his dreams. However, he soon realizes that Sonal has ulterior motives and expects him to be her toy boy.

- -

Raina (Chatterjee) is a waitress at a cafe who has a dark past and a secret identity. She lives with her boyfriend Benoit (Simon Frenay), a French musician who is unaware of her true self. She also has a platonic relationship with Afzar, whom she considers as her friend and confidant.

- -

Neil (Suri) is a successful fashion designer who is gay and closeted. He is in love with Ashish (Areesz Ganddi), a struggling actor who is also gay but open about his sexuality. Neil faces pressure from his family and society to get married and have children.

- -

The movie shows how these characters deal with their personal and professional challenges, their hopes and dreams, their secrets and lies, and their love and loss.

-

- -

Download Shab Hd 720p Full Movie In Hindi: The Verdict

- -

Shab is a movie that tries to be bold and provocative, but fails to deliver on its promise. The movie has a good premise and a talented cast, but suffers from poor execution and direction. The movie is slow, dull, and boring, with no engaging moments or twists. The movie also lacks depth and emotion, as the characters are poorly developed and unrelatable.

- -

The movie does have some positive aspects, such as the cinematography by Ashish Bisht, who captures the mood and atmosphere of Delhi in an authentic way. The movie also has some decent performances by the actors, especially Raveena Tandon, who plays her role with grace and dignity.

- -

Overall, Shab is a movie that disappoints on all levels. It is not a movie that will make you feel anything or learn anything. It is not a movie that will entertain you or inspire you. It is not a movie that you should download or watch.

-

Download Shab Hd 720p Full Movie In Hindi: The Technical Aspects

- -

One of the technical aspects of Shab that deserves some praise is the sound design by Resul Pookutty, who won an Oscar for his work on Slumdog Millionaire. The sound design of Shab creates a realistic and immersive experience for the viewers, as it captures the sounds of the city and the characters. The sound design also enhances the mood and tone of the movie, as it conveys the emotions and tensions of the scenes.

- -

The movie also features a decent music score by Mithoon, who composed some melodious and soulful songs for the movie. The songs are sung by popular singers such as Arijit Singh, KK, Mohammed Irfan, and Neha Bhasin. The songs fit well with the theme and genre of the movie, as they express the feelings and thoughts of the characters.

- -

The movie also benefits from good editing by Irene Dhar Malik, who manages to keep it coherent and smooth despite its non-linear narrative. The movie also uses some visual effects and graphics to indicate the time and location of the events.

- -
Download Shab Hd 720p Full Movie In Hindi: The Conclusion
- -

Shab is a movie that attempts to be a realistic and artistic portrayal of urban life and relationships. It is a movie that has a good concept and a good cast, but fails to execute it well. It is a movie that is slow, boring, and shallow, with no memorable moments or messages. It is a movie that you should not download or watch.

- -

If you are looking for a movie that will make you feel something or learn something, you should look elsewhere. There are many other movies that are better than Shab in terms of story, direction, performance, and entertainment. Shab is a movie that will make you regret wasting your time and money.

-
Download Shab Hd 720p Full Movie In Hindi: The Bonus Features
- -

If you are still interested in watching Shab, you can check out the bonus features on the BluRay disc. The disc includes some featurettes, interviews, and behind-the-scenes footage that show how the movie was made and what inspired it. The disc also includes some deleted scenes and songs that were not included in the movie.

- -

Some of the bonus features are:

- Making of Shab: A 20-minute documentary that shows the process of making the movie, from the script to the casting to the shooting. It features interviews with the director, the writers, the producers, and the cast and crew.
- Onir's Vision: A 10-minute featurette that shows how the director Onir conceived and executed his vision for the movie. It features interviews with Onir and some clips from his previous movies.
- The Music of Shab: A 15-minute featurette that shows how the music composer Mithoon created the songs and the background score for the movie. It features interviews with Mithoon and the singers, and some footage of the recording sessions.
- The Characters of Shab: A 10-minute featurette that shows how the actors prepared for their roles and portrayed their characters. It features interviews with Raveena Tandon, Ashish Bisht, Arpita Chatterjee, Sanjay Suri, and others.
- Deleted Scenes and Songs: A 5-minute segment that shows some scenes and songs that were cut from the movie due to various reasons.

- -

Download Shab Hd 720p Full Movie In Hindi is a BluRay disc that offers a mediocre viewing experience and some average bonus features. It is a BluRay disc that will not satisfy fans of Shab or romantic drama movies.

-

Download Shab Hd 720p Full Movie In Hindi is a movie that tells the story of several personalities who are looking for happiness and love in a complex, unforgiving, and cold city. It is a movie that features a talented cast, a realistic plot, and a good sound design. It is a movie that fails to deliver on its promise.

- -

The movie is slow, dull, and boring, with no engaging moments or twists. The movie also lacks depth and emotion, as the characters are poorly developed and unrelatable. The movie also suffers from poor execution and direction, as it does not capture the mood and tone of the story.

- -

The movie is not a movie that will make you feel anything or learn anything. It is not a movie that will entertain you or inspire you. It is not a movie that you should download or watch.

- -

If you are looking for a movie that will make you feel something or learn something, you should look elsewhere. There are many other movies that are better than Shab in terms of story, direction, performance, and entertainment. Shab is a movie that will make you regret wasting your time and money.

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/Heroes-Season-2-Hindi-Dubbed.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/Heroes-Season-2-Hindi-Dubbed.md deleted file mode 100644 index 7699f82ae8c28e9c2d9da3b5d4f7c0fd27f64c0d..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/Heroes-Season-2-Hindi-Dubbed.md +++ /dev/null @@ -1,70 +0,0 @@ -## Heroes Season 2 Hindi Dubbed - - - - - - ![Heroes Season 2 Hindi Dubbed](https://pm1.narvii.com/5840/e57fe3d758a6c7560d386aea9c9fd43d6a2655f0_00.jpg) - - - - - -**Click Here ::: [https://kneedacexbrew.blogspot.com/?d=2txjmB](https://kneedacexbrew.blogspot.com/?d=2txjmB)** - - - - - - - - - - - - Here is a possible title and article with html formatting for the keyword "Heroes Season 2 Hindi Dubbed": - -# Heroes Season 2 Hindi Dubbed: Where to Watch the Sci-Fi Drama Online - - - -Heroes is a popular American sci-fi drama series that follows the lives of ordinary people who discover they have extraordinary abilities. The second season of Heroes aired from September 2007 to December 2007 and consisted of 11 episodes. The season introduced new characters and new threats, such as the deadly virus Shanti, the mysterious Company, and the ancient samurai Takezo Kensei. - - - -If you are a fan of Heroes and want to watch the second season in Hindi dubbed, you may be wondering where to find it online. Unfortunately, there is no official source for Heroes Season 2 Hindi Dubbed as of now. However, there are some unofficial websites that claim to offer the Hindi dubbed version of Heroes Season 2. These websites are not authorized by the creators or distributors of Heroes and may contain low-quality videos, malware, or pop-up ads. Therefore, we do not recommend using these websites to watch Heroes Season 2 Hindi Dubbed. - - - -The best way to watch Heroes Season 2 Hindi Dubbed is to wait for an official release by a licensed streaming platform or a DVD/Blu-ray release. Alternatively, you can watch Heroes Season 2 in English with subtitles on various online platforms, such as Amazon Prime Video[^1^], Netflix[^2^], or NBC.com[^3^]. You can also buy or rent Heroes Season 2 on iTunes[^4^], Google Play, or YouTube. - - - -Heroes Season 2 is a thrilling and captivating season that explores the themes of destiny, identity, and sacrifice. If you are looking for a sci-fi drama with a diverse cast of characters and a complex plot, you should definitely give Heroes Season 2 a try. - -Here is a possible continuation of the article: - -Some of the highlights of Heroes Season 2 are: - - - -- The introduction of new heroes, such as Maya and Alejandro Herrera, who can kill or heal with their eyes; Monica Dawson, who can mimic any physical skill she sees; and Elle Bishop, who can generate electricity. - -- The revelation of the origins of some of the main characters, such as Peter Petrelli, Hiro Nakamura, and Adam Monroe (the real name of Takezo Kensei). - -- The development of the relationships between the characters, such as the romance between Claire Bennet and West Rosen, the friendship between Matt Parkman and Mohinder Suresh, and the rivalry between Sylar and Noah Bennet. - -- The exploration of the mythology and history of the heroes, such as the legend of Takezo Kensei, the prophecy of Isaac Mendez, and the secrets of the Company. - -- The suspense and drama of the main storyline, which involves a race against time to stop a deadly virus from wiping out most of humanity. 
- - - -Heroes Season 2 Hindi Dubbed is a must-watch for fans of sci-fi and superheroes. It is a season that will keep you on the edge of your seat and make you care about the characters and their fates. If you have not watched Heroes Season 2 yet, you should definitely give it a chance. - - dfd1c89656 - - - - - diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Aliens Drive Me Crazy MOD APK 3.1.9 An Epic Shooter and Driving Adventure.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Aliens Drive Me Crazy MOD APK 3.1.9 An Epic Shooter and Driving Adventure.md deleted file mode 100644 index 55431638cc55a3a07d9beab7bf34cbffd0ee7845..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Aliens Drive Me Crazy MOD APK 3.1.9 An Epic Shooter and Driving Adventure.md +++ /dev/null @@ -1,88 +0,0 @@ -
-

Aliens Drive Me Crazy Mod APK 3.1 1: A Fun and Action-Packed Game for Android

-

If you are looking for a game that combines shooting, driving, and adventure, then you should try Aliens Drive Me Crazy Mod APK 3.1 1. This is a modified version of the original game that gives you unlimited coins, unlocked weapons and vehicles, and no ads. In this article, we will tell you what this game is about, why you should play it, and how to download and install it on your Android device.

-

aliens drive me crazy mod apk 3.1 1


Download Filehttps://urlin.us/2uSXgX



-

Introduction

-

What is Aliens Drive Me Crazy?

-

Aliens Drive Me Crazy is a game developed by Rebel Twins, a studio that specializes in creating fun and addictive games for mobile devices. The game was released in 2014 and has received over 10 million downloads on Google Play Store. The game has a simple premise: aliens have invaded the Earth and you have to stop them. You will play as a hero who can drive various vehicles, shoot different weapons, and explore different locations. You will also face boss battles, rescue hostages, and collect coins and items along the way.

-

Why should you play Aliens Drive Me Crazy Mod APK 3.1 1?

-

Aliens Drive Me Crazy is a game that will keep you entertained for hours with its fast-paced and thrilling gameplay. You will enjoy the following benefits when you play the modded version of the game:

- -

Features of Aliens Drive Me Crazy Mod APK 3.1 1

-

Unlimited coins

-

Coins are the main currency in the game that you can use to buy weapons, vehicles, upgrades, costumes, and more. You can earn coins by completing missions, destroying enemies, rescuing hostages, and collecting items. However, if you want to get everything in the game without spending too much time and effort, you can use the modded version of the game that gives you unlimited coins. You can buy anything you want in the game without worrying about running out of money.

-

Unlocked weapons and vehicles

-

The game offers a variety of weapons and vehicles that you can use to fight against the aliens. You can choose from pistols, shotguns, rifles, rocket launchers, grenades, swords, hammers, and more. You can also drive cars, motorcycles, tanks, helicopters, jetpacks, and more. However, some of these weapons and vehicles are locked and require you to reach certain levels or pay coins to unlock them. If you want to use them right away without any restrictions, you can use the modded version of the game that unlocks all of them for you.

-

No ads

-

The original version of the game contains ads that may pop up randomly or after every level. These ads can be annoying and distracting, especially when you are in the middle of an intense action scene. If you want to enjoy the game without any interruptions or distractions, you can use the modded version of the game that removes all the ads from the game.

-

How to download and install Aliens Drive Me Crazy Mod APK 3.1 1

-

If you are interested in playing Aliens Drive Me Crazy Mod APK 3.1 1, you will need to download and install the APK file on your Android device. Here are the steps you need to follow:

-

aliens drive me crazy hack apk download
-aliens drive me crazy mod apk unlimited money
-aliens drive me crazy mod apk latest version
-aliens drive me crazy mod apk android 1
-aliens drive me crazy mod apk revdl
-aliens drive me crazy mod apk free shopping
-aliens drive me crazy mod apk rexdl
-aliens drive me crazy mod apk happymod
-aliens drive me crazy mod apk no ads
-aliens drive me crazy mod apk offline
-aliens drive me crazy premium apk
-aliens drive me crazy pro apk
-aliens drive me crazy full apk
-aliens drive me crazy unlocked apk
-aliens drive me crazy cheat apk
-aliens drive me crazy cracked apk
-aliens drive me crazy unlimited coins apk
-aliens drive me crazy unlimited gems apk
-aliens drive me crazy unlimited weapons apk
-aliens drive me crazy unlimited health apk
-download aliens drive me crazy mod apk for android
-download aliens drive me crazy mod apk for pc
-download aliens drive me crazy mod apk for ios
-download aliens drive me crazy mod apk for windows 10
-download aliens drive me crazy mod apk for laptop
-how to install aliens drive me crazy mod apk
-how to play aliens drive me crazy mod apk
-how to update aliens drive me crazy mod apk
-how to hack aliens drive me crazy mod apk
-how to get aliens drive me crazy mod apk
-aliens drive me crazy game mod apk
-aliens drive me crazy shooting game mod apk
-aliens drive me crazy adventure game mod apk
-aliens drive me crazy action game mod apk
-aliens drive me crazy arcade game mod apk
-best alien games for android mod apk
-best alien shooting games for android mod apk
-best alien driving games for android mod apk
-best alien invasion games for android mod apk
-best alien adventure games for android mod apk

-

Step 1: Download the APK file from a trusted source

-

The first thing you need to do is to find a reliable website that offers the modded version of the game. You can search for it on Google or use the link we have provided below. Make sure you download the latest version of the game, which is 3.1 1. The file size is about 70 MB, so make sure you have enough space on your device.

-

Download Aliens Drive Me Crazy Mod APK 3.1 1 here

-

Step 2: Enable unknown sources on your device

-

The next thing you need to do is to allow your device to install apps from unknown sources. This is because the modded version of the game is not available on the official Google Play Store, so you need to enable this option to install it. To do this, go to your device settings, then security, then unknown sources, and toggle it on. You may see a warning message, but don't worry, it is safe to install the game.

-

Step 3: Install the APK file and enjoy the game

-

The final thing you need to do is to locate the downloaded APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for a few seconds until the installation is complete. Once it is done, you can launch the game and enjoy all the features of Aliens Drive Me Crazy Mod APK 3.1 1.

-

Conclusion

-

Aliens Drive Me Crazy Mod APK 3.1 1 is a fun and action-packed game that will keep you entertained for hours. You will love the unlimited coins, unlocked weapons and vehicles, and no ads that this modded version of the game offers. You will also enjoy the colorful graphics, smooth controls, and hilarious sound effects that make this game a joy to play. If you are looking for a game that combines shooting, driving, and adventure, then you should download and install Aliens Drive Me Crazy Mod APK 3.1 1 on your Android device today.

-

FAQs

-

Here are some of the frequently asked questions about Aliens Drive Me Crazy Mod APK 3.1 1:

-
    -
  1. Is Aliens Drive Me Crazy Mod APK 3.1 1 safe to download and install?

    Yes, it is safe to download and install Aliens Drive Me Crazy Mod APK 3.1 1 as long as you use a trusted source like the one we have provided above. The modded version of the game does not contain any viruses or malware that can harm your device or compromise your privacy.

    -
  2. Do I need to root my device to play Aliens Drive Me Crazy Mod APK 3.1 1?

    No, you do not need to root your device to play Aliens Drive Me Crazy Mod APK 3.1 1. The modded version of the game works fine on both rooted and non-rooted devices.

    -
  3. Can I play Aliens Drive Me Crazy Mod APK 3.1 1 offline?

    Yes, you can play Aliens Drive Me Crazy Mod APK 3.1 1 offline without any internet connection. However, some features of the game may require an internet connection, such as leaderboards, achievements, and social media integration.

    -
  4. Can I play Aliens Drive Me Crazy Mod APK 3.1 1 with my friends?

    Yes, you can play Aliens Drive Me Crazy Mod APK 3.1 1 with your friends online or locally. The game supports multiplayer mode where you can team up with your friends or compete against them in various missions and challenges.

    -
  5. How can I update Aliens Drive Me Crazy Mod APK 3.1 1?

    To update Aliens Drive Me Crazy Mod APK 3.1 1, you will need to download and install the latest version of the modded version of the game from the same source you used before. You may also need to uninstall the previous version of the game before installing the new one.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Blade Forge 3D MOD APK and Become a Master Blacksmith.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Blade Forge 3D MOD APK and Become a Master Blacksmith.md deleted file mode 100644 index b67d5e1b268c6e0762c3ae173338c0e9e429e4f3..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Blade Forge 3D MOD APK and Become a Master Blacksmith.md +++ /dev/null @@ -1,101 +0,0 @@ - -

Blade Forge 3D Mod APK: A Fun and Creative Game for Blacksmith Lovers

-

Do you love crafting weapons and blades? Do you want to experience the life of a blacksmith in a fun and realistic way? If yes, then you should try Blade Forge 3D, a simulation game that lets you create your own blades from scratch. And if you want to enjoy the game even more, you should download Blade Forge 3D Mod APK, a modified version that gives you unlimited money, no ads, and access to all blades. In this article, we will tell you everything you need to know about Blade Forge 3D Mod APK, including its features, how to download and install it, and some frequently asked questions.

-

blade forge 3d mod apk


Download 🗸🗸🗸 https://urlin.us/2uT2NM



-

Introduction

-

What is Blade Forge 3D?

-

Blade Forge 3D is a simulation game developed by Kwalee Ltd, a UK-based game studio. The game was released in May 2020 and has gained over 10 million downloads on Google Play Store. The game is rated 4.0 out of 5 stars by more than 100 thousand users.

-

In Blade Forge 3D, you play as a blacksmith who can craft different types of blades using various materials and techniques. You can choose from different shapes, sizes, colors, and designs for your blades. You can also test your blades on different objects and enemies to see how they perform. The game is easy to play but hard to master, as you need to balance the quality, durability, and sharpness of your blades.

-

What is Blade Forge 3D Mod APK?

-

Blade Forge 3D Mod APK is a modified version of the original game that gives you some extra benefits that are not available in the official version. These benefits include unlimited money, no ads, and unlock all blades. With these features, you can enjoy the game without any limitations or interruptions. You can craft any blade you want without worrying about the cost or the availability. You can also get rid of the annoying ads that pop up every now and then. You can download Blade Forge 3D Mod APK for free from various websites on the internet.

-

Features of Blade Forge 3D Mod APK

-

Unlimited Money

-

One of the main features of Blade Forge 3D Mod APK is that it gives you unlimited money. Money is used in the game to buy materials, upgrade your tools, and unlock new blades. Normally, you have to earn money by completing tasks and selling your blades. However, with Blade Forge 3D Mod APK, you don't have to worry about that. You can get as much money as you want without doing anything. You can spend your money freely and buy whatever you need or want.

-

blade forge 3d unlimited money apk
-download blade forge 3d mod apk latest version
-blade forge 3d hack apk free download
-blade forge 3d mod apk android 1
-blade forge 3d mod apk no ads
-blade forge 3d simulation game mod apk
-blade forge 3d mod apk revdl
-blade forge 3d mod apk happymod
-blade forge 3d mod apk offline
-blade forge 3d mod apk unlimited gems
-blade forge 3d mod apk rexdl
-blade forge 3d mod apk pure
-blade forge 3d mod apk vip unlocked
-blade forge 3d mod apk for pc
-blade forge 3d mod apk online
-blade forge 3d mod apk unlimited everything
-blade forge 3d mod apk obb
-blade forge 3d mod apk ios
-blade forge 3d mod apk unlimited coins
-blade forge 3d mod apk all unlocked
-blade forge 3d mod apk unlimited levels
-blade forge 3d mod apk full version
-blade forge 3d mod apk pro
-blade forge 3d mod apk premium
-blade forge 3d mod apk mega mod
-blade forge 3d mod apk unlimited energy
-blade forge 3d mod apk god mode
-blade forge 3d mod apk unlimited weapons
-blade forge 3d mod apk no root
-blade forge 3d mod apk cheat
-blade forge 3d mod apk update
-blade forge 3d mod apk new version
-blade forge 3d mod apk old version
-blade forge 3d mod apk original
-blade forge 3d mod apk cracked
-blade forge 3d mod apk unlimited gold
-blade forge 3d mod apk unlocked everything
-blade forge 3d mod apk high damage
-blade forge 3d mod apk easy win
-blade forge 3d mod apk no verification

-

No Ads

-

Another feature of Blade Forge 3D Mod APK is that it removes all the ads from the game. Ads are annoying and distracting, especially when they interrupt your gameplay or cover your screen. They can also consume your data and battery life. With Blade Forge 3D Mod APK, you don't have to deal with any ads at all. You can play the game smoothly and peacefully without any interruptions or distractions.

-

Unlock All Blades

-

The last feature of Blade Forge 3D Mod APK is that it unlocks all the blades in the game. Blades are the main items in the game that you can craft and use. There are many types of blades in the game, such as swords, daggers, axes, spears, scythes, etc. Each blade has its own characteristics and abilities. Some blades are more powerful, rare, or expensive than others. Normally, you have to unlock the blades by completing certain tasks or paying money. However, with Blade Forge 3D Mod APK, you can access all the blades from the start. You can choose any blade you like and craft it with ease.

-

How to Download and Install Blade Forge 3D Mod APK

-

Step 1: Enable Unknown Sources

-

Before you can install Blade Forge 3D Mod APK, you need to enable unknown sources on your device. This is because the APK file is not from the official Google Play Store and your device may block it by default. To enable unknown sources, follow these steps:

- -

Step 2: Download the APK File

-

Next, you need to download the APK file of Blade Forge 3D Mod APK from a reliable website. There are many websites that offer the APK file, but some of them may be fake or malicious. To avoid any risks, you should download the APK file from a trusted source, such as [APKPure] or [APKFab]. These websites are safe and verified by millions of users. To download the APK file, follow these steps:

- -

Step 3: Install the APK File

-

Finally, you need to install the APK file of Blade Forge 3D Mod APK on your device. This is a simple and quick process that only takes a few minutes. To install the APK file, follow these steps:

- -

Conclusion

-

Blade Forge 3D is a fun and creative game that lets you craft your own blades and test them on various objects and enemies. The game is easy to play but hard to master, as you need to balance the quality, durability, and sharpness of your blades. If you want to enhance your gaming experience, you should download Blade Forge 3D Mod APK, a modified version that gives you unlimited money, no ads, and access to all blades. You can download Blade Forge 3D Mod APK for free from various websites on the internet. You just need to follow some simple steps to enable unknown sources, download the APK file, and install it on your device. Then, you can enjoy Blade Forge 3D Mod APK without any limitations or interruptions.

-

FAQs

-

Here are some frequently asked questions about Blade Forge 3D Mod APK:

-

Q: Is Blade Forge 3D Mod APK safe to use?

-

A: Yes, Blade Forge 3D Mod APK is safe to use as long as you download it from a reliable website. The APK file does not contain any viruses or malware that can harm your device or data. However, you should always be careful when downloading any files from unknown sources and scan them with an antivirus before installing them.

-

Q: Do I need to root my device to use Blade Forge 3D Mod APK?

-

A: No, you do not need to root your device to use Blade Forge 3D Mod APK. The modded version works fine on both rooted and non-rooted devices. You just need to enable unknown sources and install the APK file as usual.

-

Q: Will I get banned for using Blade Forge 3D Mod APK?

-

A: No, you will not get banned for using Blade Forge 3D Mod APK. The modded version does not interfere with the game's servers or online features. You can play the game normally without any risk of getting banned.

-

Q: Can I update Blade Forge 3D Mod APK?

-

A: Yes, you can update Blade Forge 3D Mod APK whenever a new version is available. However, you may have to uninstall the previous version and download the new version from the same website. You may also lose your progress and data if you update the modded version, so make sure to back up your files before updating.

-

Q: What are some alternatives to Blade Forge 3D Mod APK?

-

A: If you are looking for some other games that are similar to Blade Forge 3D, you can try these alternatives:

- -

I hope this article has helped you learn more about Blade Forge 3D Mod APK and how to download and install it on your device. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and have fun playing Blade Forge 3D Mod APK!

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Mortal Kombat 11 for Android The Ultimate Fighting Game.md b/spaces/1phancelerku/anime-remove-background/Download Mortal Kombat 11 for Android The Ultimate Fighting Game.md deleted file mode 100644 index 18b15a4c5e1428e3cdfa7d501e611f20c43d4c9e..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Mortal Kombat 11 for Android The Ultimate Fighting Game.md +++ /dev/null @@ -1,142 +0,0 @@ -
-

Download Game Mortal Kombat 11 Android: How to Enjoy the Ultimate Fighting Experience on Your Mobile Device

-

If you are a fan of fighting games, you must have heard of Mortal Kombat, one of the most popular and influential franchises in the genre. Mortal Kombat is known for its brutal and visceral combat, its iconic characters, and its trademark fatalities. And now, you can enjoy the latest installment of this legendary series, Mortal Kombat 11, on your android device. But how do you download game mortal kombat 11 android? And how do you optimize your gaming experience? In this article, we will answer these questions and more. Read on to find out how to unleash your power and become the ultimate fighter on your mobile device.

-

download game mortal kombat 11 android


Download File 🗹 https://jinyurl.com/2uNPgt



-

What is Mortal Kombat 11?

-

Mortal Kombat 11 is the eleventh main entry in the Mortal Kombat series, developed by NetherRealm Studios and published by Warner Bros. Interactive Entertainment. It was released in April 2019 for PlayStation 4, Xbox One, Nintendo Switch, and Windows PC, and later in November 2020 for PlayStation 5 and Xbox Series X/S. It is also available for android devices through two different methods, which we will discuss later.

-

The latest installment of the iconic fighting game franchise

-

Mortal Kombat 11 continues the story of the previous game, Mortal Kombat X, in which Raiden, the god of thunder, has become corrupted by the power of Shinnok's amulet and has decided to protect Earthrealm by any means necessary. This leads him to clash with other characters, both old and new, who have their own agendas and motivations. The game also introduces a new villain, Kronika, the keeper of time, who wants to erase Raiden's interference and create a new timeline.

-

The features and gameplay of Mortal Kombat 11

-

Mortal Kombat 11 is a fighting game that allows you to choose from a roster of over 30 characters, each with their own unique abilities, moves, and fatalities. You can customize your characters with different skins, gear, abilities, intros, outros, taunts, and banners. You can also use a new feature called Fatal Blow, which is a powerful attack that can be activated when your health is below 30%. Another new feature is Krushing Blow, which is a cinematic variation of a special move that triggers when certain conditions are met.

-

The game offers various modes for you to play, such as Story Mode, which lets you follow the narrative of the game; Towers of Time, which are dynamic challenges that change periodically; Klassic Towers, which are traditional arcade ladders; Online Mode, which lets you compete with other players around the world; Training Mode, which lets you practice your skills; and Krypt Mode, which lets you explore Shang Tsung's island and unlock various rewards.

-

The characters and modes of Mortal Kombat 11

-

Mortal Kombat 11 features a diverse cast of characters from different realms and timelines. Some of them are returning favorites from previous games such as Scorpion, Sub-Zero, Liu Kang, Sonya Blade, Raiden, and Shao Kahn; some of them are new additions from Mortal Kombat X, such as Cassie Cage, Jacqui Briggs, Kotal Kahn, and D'Vorah; and some of them are brand new characters, such as Geras, Cetrion, and The Terminator. You can also unlock and play as guest characters from other franchises, such as Joker, Spawn, RoboCop, and Rambo.

-

How to download game mortal kombat 11 android?

-

As we mentioned earlier, there are two ways to download game mortal kombat 11 android: the official way and the unofficial way. Let's take a look at each of them and see how they differ.

-

The official way: Mortal Kombat Mobile app

-

The official way to download game mortal kombat 11 android is to use the Mortal Kombat Mobile app, which is a free-to-play version of the game that is compatible with android devices. The Mortal Kombat Mobile app is not exactly the same as the console or PC version of the game, but it does share some similarities and features.

-

How to download game mortal kombat 11 on android
-Download game mortal kombat 11 apk for android
-Mortal kombat 11 android game free download
-Download game mortal kombat 11 mod apk android
-Mortal kombat 11 mobile game download for android
-Download game mortal kombat 11 offline android
-Mortal kombat 11 android game download play store
-Download game mortal kombat 11 full version android
-Mortal kombat 11 android game size and requirements
-Download game mortal kombat 11 hack android
-Mortal kombat 11 android game release date and news
-Download game mortal kombat 11 latest version android
-Mortal kombat 11 android game features and gameplay
-Download game mortal kombat 11 online android
-Mortal kombat 11 android game review and rating
-Download game mortal kombat 11 unlimited money android
-Mortal kombat 11 android game characters and roster
-Download game mortal kombat 11 data obb android
-Mortal kombat 11 android game tips and tricks
-Download game mortal kombat 11 cheats android
-Mortal kombat 11 android game graphics and performance
-Download game mortal kombat 11 update android
-Mortal kombat 11 android game story and mode
-Download game mortal kombat 11 cracked android
-Mortal kombat 11 android game controller support
-Download game mortal kombat 11 beta android
-Mortal kombat 11 android game fatalities and moves
-Download game mortal kombat 11 highly compressed android
-Mortal kombat 11 android game best settings and optimization
-Download game mortal kombat 11 patch android
-Mortal kombat 11 android game system requirements and compatibility
-Download game mortal kombat 11 original android
-Mortal kombat 11 android game bugs and issues
-Download game mortal kombat 11 premium edition android
-Mortal kombat 11 android game skins and costumes
-Download game mortal kombat 11 ultimate edition android
-Mortal kombat 11 android game multiplayer and faction wars
-Download game mortal kombat 11 steam key android
-Mortal kombat 11 android game achievements and rewards
-Download game mortal kombat 11 dlc pack android

-

How to install and play Mortal Kombat Mobile app

-

To install and play Mortal Kombat Mobile app, you need to follow these steps:

-
    -
  1. Go to the Google Play Store and search for Mortal Kombat Mobile app.
  2. -
  3. Download and install the app on your device. The app requires about 1.1 GB of storage space.
  4. -
  5. Launch the app and accept the terms and conditions.
  6. -
  7. Create or log in to your WB Games account. This will allow you to sync your progress and access online features.
  8. -
  9. Choose your preferred language and region.
  10. -
  11. Enjoy the game!
  12. -
-

The benefits and drawbacks of Mortal Kombat Mobile app

-

The Mortal Kombat Mobile app has some benefits and drawbacks that you should be aware of before you decide to download it. Here are some of them:

- - - - - - - -
| Benefits | Drawbacks |
| --- | --- |
| It is free to download and play. | It has in-app purchases and ads that can affect your gaming experience. |
| It has high-quality graphics and sound effects. | It requires a stable internet connection to play. |
| It has a large roster of characters that you can collect and upgrade. | It has a different gameplay system than the console or PC version, which may not appeal to some fans. |
| It has exclusive content and events that are not available in the console or PC version. | It has limited modes and features compared to the console or PC version. |
| It allows you to link your account with the console or PC version and unlock rewards in both games. | It may not run smoothly on some devices or cause battery drain or overheating issues. |

The unofficial way: Mortal Kombat 11 Mobile website

-

The unofficial way to download game mortal kombat 11 android is to use the Mortal Kombat 11 Mobile website, which is a fan-made version of the game that claims to be compatible with android devices. The Mortal Kombat 11 Mobile website is not endorsed or supported by the official developers or publishers of the game, and it may contain malware or viruses that can harm your device or steal your personal information.

-

How to access and download Mortal Kombat 11 Mobile website

-

To access and download Mortal Kombat 11 Mobile website, you need to follow these steps:

-
    -
  1. Go to your browser and search for Mortal Kombat 11 Mobile website.
  2. -
  3. Find and click on the link that leads you to the website. Be careful not to click on any ads or pop-ups that may appear.
  4. -
  5. On the website, you will see a button that says "Download Now". Click on it and wait for the download to start.
  6. -
  7. Once the download is complete, you will need to install the APK file on your device. You may need to enable unknown sources in your settings to do this.
  8. -
  9. Launch the game and enjoy!
  10. -
-
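Because files from unofficial sites can be tampered with, it is also worth comparing the download against a checksum whenever the site publishes one, in addition to scanning it with an antivirus. The Python sketch below is illustrative only; the filename and the expected hash are placeholders, not values taken from any real website.

```python
# Hypothetical sketch: verifying a downloaded APK against a published SHA-256 checksum.
# Both the filename and EXPECTED_SHA256 are placeholders for illustration.
import hashlib

EXPECTED_SHA256 = "replace-with-the-checksum-published-by-the-site"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so a large APK does not have to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of("mk11_mobile.apk")  # placeholder filename
    print("checksum matches" if actual == EXPECTED_SHA256 else f"MISMATCH: {actual}")
```

If the computed value does not match the published one, delete the file rather than installing it.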

The advantages and disadvantages of Mortal Kombat 11 Mobile website

-

The Mortal Kombat 11 Mobile website has some advantages and disadvantages that you should be aware of before you decide to download it. Here are some of them:

- - - - - - - -
| Advantages | Disadvantages |
| --- | --- |
| It claims to offer the same gameplay and features as the console or PC version of the game. | It is not authorized or verified by the official developers or publishers of the game. |
| It does not require any in-app purchases or ads to play. | It may contain malware or viruses that can damage your device or compromise your security. |
| It does not require an internet connection to play. | It may not work properly on some devices or cause crashes or glitches. |
| It allows you to play as any character without unlocking them. | It may violate the intellectual property rights of the original creators of the game. |
| It updates regularly with new content and fixes. | It may be removed or blocked by the authorities at any time. |

How to optimize your mortal kombat 11 android experience?

-

Now that you know how to download game mortal kombat 11 android, you may wonder how to make the most of your gaming experience. Whether you choose the official or the unofficial way, there are some tips and tricks that can help you improve your performance and enjoyment of the game. Here are some of them:

-

Tips and tricks for playing Mortal Kombat 11 on android

-

Here are some tips and tricks that can help you play Mortal Kombat 11 on android better:

- -

Best devices and settings for running Mortal Kombat 11 on android

-

Here are some recommendations for the best devices and settings for running Mortal Kombat 11 on android:

- -
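Whatever recommendations you follow, the deciding factor is the phone you actually own. One quick way to check the basics is to query the device over adb; the sketch below is a hedged illustration rather than an official tool, and it assumes adb is installed, a single device is connected, and the standard Android build properties shown are available.

```python
# Hypothetical sketch: reading basic device specs over adb before judging whether a
# phone can run the game smoothly. Assumes adb is installed and one device is connected.
import subprocess

def adb_out(*args: str) -> str:
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print("Model:          ", adb_out("shell", "getprop", "ro.product.model"))
    print("Android version:", adb_out("shell", "getprop", "ro.build.version.release"))
    # Free space on the data partition, where the game and its assets are installed.
    print(adb_out("shell", "df", "-h", "/data"))
```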

Common issues and solutions for Mortal Kombat 11 on android

-

Here are some common issues and solutions for Mortal Kombat 11 on android:

- -

Conclusion

-

Mortal Kombat 11 is one of the best fighting games ever made, and you can enjoy it on your android device with either the official or the unofficial way. However, each way has its pros and cons, so you should weigh them carefully before you decide to download game mortal kombat 11 android. Also, you should follow some tips and tricks to optimize your gaming experience and avoid some common issues. We hope this article has helped you learn how to download game mortal kombat 11 android and how to have fun with it. Now go ahead and unleash your power!

-

FAQs

-

Here are some frequently asked questions about downloading game mortal kombat 11 android:

-
    -
  1. Is Mortal Kombat 11 free on android?
  2. -

    Mortal Kombat 11 is not free on android. However, you can download the Mortal Kombat Mobile app for free from the Google Play Store. This is a free-to-play version of the game that has some similarities and features with Mortal Kombat 11. Alternatively, you can access and download the Mortal Kombat 11 Mobile website for free from your browser. This is a fan-made version of the game that claims to offer the same gameplay and features as Mortal Kombat 11. However, this is not an official or authorized way to download game mortal kombat 11 android, and it may pose some risks to your device or security.

    -
  3. Can I play Mortal Kombat 11 on android with a controller?
  4. -

    Yes, you can play Mortal Kombat 11 on android with a controller, as long as your device supports it. You can use either a wired or a wireless controller, such as a PS4, Xbox One, or Switch controller. To connect your controller to your device, you need to follow the instructions of your device and controller manufacturer. Once your controller is connected, you can customize the controls in the game settings.

    -
  5. Can I play Mortal Kombat 11 on android with my friends?
  6. -

    Yes, you can play Mortal Kombat 11 on android with your friends, either online or offline. To play online, you need to have an internet connection and a WB Games account. You can then invite your friends to join your faction, chat with them, and challenge them to matches. To play offline, you need to have two devices with the game installed and connected via Bluetooth or Wi-Fi. You can then select the Versus mode and choose your opponent.

    -
  7. How do I unlock more characters in Mortal Kombat 11 on android?
  8. -

    There are different ways to unlock more characters in Mortal Kombat 11 on android, depending on which way you download the game. If you use the Mortal Kombat Mobile app, you can unlock more characters by opening packs, completing towers, participating in events, or spending in-game currency. If you use the Mortal Kombat 11 Mobile website, you can unlock more characters by downloading updates, entering codes, or using cheats.

    -
  9. How do I perform fatalities in Mortal Kombat 11 on android?
  10. -

    Fatalities are finishing moves that you can perform at the end of a match to brutally kill your opponent. To perform fatalities in Mortal Kombat 11 on android, you need to know the specific input and distance for each character and fatality. You can find this information in the game menu or online. Once you have this information, you need to defeat your opponent until their health bar flashes red and the announcer says "Finish Him/Her". Then, you need to input the correct sequence of buttons or gestures within a few seconds and watch the gruesome result.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/European Truck Simulator APK Mod Customize Your Truck and Explore Amazing Cities.md b/spaces/1phancelerku/anime-remove-background/European Truck Simulator APK Mod Customize Your Truck and Explore Amazing Cities.md deleted file mode 100644 index e0b8a8dbb56010eb696f1b07b209cf704c8ca0c3..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/European Truck Simulator APK Mod Customize Your Truck and Explore Amazing Cities.md +++ /dev/null @@ -1,114 +0,0 @@ - -

    European Truck Simulator Mod APK: Drive Across Europe With Unlimited Money

    -

    Do you love driving trucks and exploring new places? Do you want to experience the thrill of being a real trucker in Europe? If yes, then you should try European Truck Simulator, a realistic and immersive truck simulation game that lets you travel across many countries from Europe, visit incredible places like Berlin, Prague, Madrid, Rome, Paris and more. You can play the career mode of this truck simulator, make money, purchase new trucks and upgrades, and challenge your friends with the online multiplayer mode.

    -

    european truck simulator apk mod


DOWNLOAD https://jinyurl.com/2uNNCD



    -

    But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited money to buy any truck or upgrade you want? Well, there is a way to do that. You can use European Truck Simulator Mod APK, a modified version of the game that gives you access to unlimited money and other features. In this article, we will tell you everything you need to know about European Truck Simulator Mod APK, including what it is, why you should use it, how to download and install it, and some tips and tricks for playing the game. Let's get started!

    -

    What is European Truck Simulator?

    -

    European Truck Simulator is a truck simulation game developed by Ovidiu Pop, a popular developer of simulation games. The game was released in 2015 and has since gained millions of downloads and positive reviews from players. The game features 12 European truck brands with 4x2 and 6x4 axles, more than 20 realistic cities, country roads, highways and offroads, easy controls (tilt, buttons or touch steering wheel), realistic weather conditions and day/night cycle, visual damage on trucks, detailed interiors for each truck brand, amazing engine sounds, improved AI traffic system, online multiplayer with servers or convoy mode, achievements and leaderboards, controller support, and more.

    -

The game is available for free on Google Play Store, but it also offers in-app purchases that range from $0.99 to $49.99 per item. These purchases allow you to buy more money, remove ads, unlock all trucks, and get premium features. However, if you don't want to spend real money on the game, you can use European Truck Simulator Mod APK instead.

    -

    How to download and install European Truck Simulator Mod APK?

    -

    European Truck Simulator Mod APK is a modified version of the game that gives you unlimited money and other features that are not available in the original version. You can use this mod APK to buy any truck or upgrade you want, without worrying about running out of money. You can also enjoy the game without any ads or interruptions.

    -

    european truck simulator mod apk unlimited money
    -euro truck driver simulator apk mod
    -european truck simulator 2 mod apk
    -euro truck evolution simulator mod apk
    -european truck simulator 4.2 mod apk
    -euro truck driver 2018 mod apk
    -european truck simulator multiplayer mod apk
    -euro truck simulator pro 2 mod apk
    -european truck simulator hack mod apk
    -euro truck driver 3d mod apk
    -european truck simulator android mod apk
    -euro truck simulator offroad cargo transport mod apk
    -european truck simulator latest mod apk
    -euro truck driver 2019 mod apk
    -european truck simulator 3.1 mod apk
    -euro truck simulator bus mod apk
    -european truck simulator free download mod apk
    -euro truck driver 2.6.0 mod apk
    -european truck simulator full version mod apk
    -euro truck simulator cargo delivery mod apk
    -european truck simulator offline mod apk
    -euro truck driver 2.5.0 mod apk
    -european truck simulator premium mod apk
    -euro truck driver 2.3.0 mod apk
    -european truck simulator unlocked mod apk
    -euro truck driver 2.2.0 mod apk
    -european truck simulator all trucks mod apk
    -euro truck driver online multiplayer mod apk
    -european truck simulator realistic mod apk
    -euro truck driver ovilex mod apk
    -european truck simulator graphics mod apk
    -euro truck driver hack version download apk
    -european truck simulator cheats mod apk
    -euro truck driver unlimited coins and gems apk
    -european truck simulator best mods apk
    -euro truck driver game download for android hack version
    -european truck simulator new update mod apk
    -euro truck driver old version hack download for android
    -european truck simulator no ads mod apk
    -euro truck driver game download for android apkpure
    -european truck simulator controller support mod apk
    -euro truck driver game download for android uptodown
    -european truck simulator customizations mod apk
    -euro truck driver game download for android mob.org
    -european truck simulator weather conditions mod apk
    -euro truck driver game download for android rexdl
    -european truck simulator damage mod apk
    -euro truck driver game download for android revdl
    -european truck simulator interior mods apk

    -

    To download and install European Truck Simulator Mod APK, you need to follow these steps:

    -
      -
    1. Go to , a reliable website that offers mod APKs for various games and apps.
    2. -
    3. Search for European Truck Simulator Mod APK in the search bar or browse through the categories.
    4. -
    5. Click on the download button and wait for the file to be downloaded on your device.
    6. -
    7. Once the file is downloaded, go to your device settings and enable unknown sources. This will allow you to install apps from sources other than Google Play Store.
    8. -
    9. Locate the downloaded file in your file manager and tap on it to start the installation process.
    10. -
    11. Follow the instructions on the screen and wait for the installation to be completed.
    12. -
    13. Launch the game and enjoy!
    14. -
    -
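If you sideloaded the file from a computer, a quick way to confirm the package actually installed before launching it is to list the device's packages over adb. The sketch below is a hedged illustration only, assuming adb is available; the search term is a placeholder because the article does not state the game's real package id.

```python
# Hypothetical sketch: checking that a sideloaded package is present on the device.
# The search term is a placeholder; adjust it to the game's actual package id.
import subprocess

def is_installed(name_fragment: str) -> bool:
    out = subprocess.run(["adb", "shell", "pm", "list", "packages"],
                         capture_output=True, text=True, check=True)
    return any(name_fragment in line for line in out.stdout.splitlines())

if __name__ == "__main__":
    print("installed" if is_installed("trucksimulator") else "not found")
```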

    Why use European Truck Simulator Mod APK?

    -

    You might be wondering why you should use European Truck Simulator Mod APK instead of the original version of the game. Well, there are many reasons why using this mod APK can enhance your gaming experience. Here are some of them:

    -

Benefits of the mod APK

    - -

    Risks of the mod APK

    -

    While using European Truck Simulator Mod APK can have many benefits, it also comes with some risks that you should be aware of. Here are some of them:

    - -

    Tips and tricks for playing European Truck Simulator

    -

    Now that you know how to download and install European Truck Simulator Mod APK, you might be wondering how to play the game and have fun. Well, we have some tips and tricks for you that can help you improve your skills and enjoy the game more. Here are some of them:

    -

    Customize your truck

    -

    One of the best things about European Truck Simulator is that you can customize your truck according to your taste and preference. You can change the color, paint job, accessories, wheels, lights, horns, exhausts, and more. You can also upgrade your engine, transmission, brakes, suspension, fuel tank, and more. Customizing your truck can make it look more unique and attractive, as well as improve its performance and efficiency.

    -

    Follow the traffic rules

    -

    European Truck Simulator is a realistic simulation game that follows the traffic rules and regulations of Europe. You have to obey the speed limits, traffic lights, signs, signals, lane markings, and more. You also have to respect other vehicles on the road, such as cars, buses, motorcycles, bicycles, pedestrians, etc. If you break any traffic rule or cause any accident, you will be fined or penalized by the police. Following the traffic rules can make your driving experience more safe and smooth.

    -

    Explore different routes and cities

    -

    European Truck Simulator offers you a vast map of Europe with more than 20 realistic cities to visit. You can explore different routes and roads that connect these cities, such as country roads, highways, offroads, etc. You can also enjoy the scenic views of nature, landmarks, monuments, buildings, etc. that you encounter along the way. Exploring different routes and cities can make your gameplay more diverse and interesting.

    -

    Join online multiplayer mode

    -

    If you want to challenge yourself and compete with other players from around the world, you can join the online multiplayer mode of European Truck Simulator. You can either join a server or create a convoy with your friends. You can chat with other players using voice or text messages. You can also compare your scores and achievements with other players on the leaderboards. Joining online multiplayer mode can make your gameplay more social and fun.

    -

    Conclusion

    -


    European Truck Simulator is a great game for anyone who loves driving trucks and exploring new places. It offers a realistic and immersive truck simulation experience that can keep you entertained for hours. However, if you want to enjoy the game without any limitations or restrictions, you can use European Truck Simulator Mod APK, a modified version of the game that gives you unlimited money and other features. You can download and install this mod APK from a reliable website and follow the instructions given in this article. You can also use some tips and tricks to improve your skills and have fun playing the game.

    -

    We hope this article was helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy trucking!

    -

    FAQs

    -

    Here are some frequently asked questions about European Truck Simulator Mod APK:

    -
      -
    1. Is European Truck Simulator Mod APK safe to use?
    2. -

      European Truck Simulator Mod APK is generally safe to use, as long as you download it from a trusted source and scan it with an antivirus before installing it. However, you should always use it at your own risk and discretion, as it may violate the terms and conditions of the game or Google Play Store.

      -
    3. What are the requirements for using European Truck Simulator Mod APK?
    4. -

      To use European Truck Simulator Mod APK, you need to have an Android device with Android 4.1 or higher, at least 1 GB of RAM, and at least 200 MB of free storage space. You also need to enable unknown sources in your device settings to install the mod APK.

      -
    5. Can I play European Truck Simulator Mod APK offline?
    6. -

      Yes, you can play European Truck Simulator Mod APK offline, as it does not require an internet connection to run. However, you will not be able to access the online multiplayer mode or update the game without an internet connection.

      -
    7. Can I use European Truck Simulator Mod APK with other mods or cheats?
    8. -

      No, you cannot use European Truck Simulator Mod APK with other mods or cheats, as it may cause compatibility issues or errors that can affect your gameplay or damage your device. You should only use one mod or cheat at a time.

      -
    9. Can I update European Truck Simulator Mod APK?
    10. -

      Yes, you can update European Truck Simulator Mod APK, as long as the mod APK is compatible with the latest version of the game. You can check for updates on the website where you downloaded the mod APK or on the game itself. However, you may lose some of the mod features or data after updating, so you should always backup your data before updating.

      -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/attentions.py b/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - 
""" - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." 
- block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - """ - pass - - def compute_f0_uv(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - """ - pass diff --git a/spaces/AI-Naga/Vehicle_Damage_Detection/README.md b/spaces/AI-Naga/Vehicle_Damage_Detection/README.md deleted file mode 100644 index d59d14b435814cbc1a49623e2aeb04b8103a4415..0000000000000000000000000000000000000000 --- a/spaces/AI-Naga/Vehicle_Damage_Detection/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Vehicle Damage Detection -emoji: 🏃 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/motion_process.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/motion_process.py deleted file mode 100644 index 7819c8b3cc61b6e48c65d1a456342119060696ea..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/motion_process.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch -from utils.quaternion import quaternion_to_cont6d, qrot, qinv - -def recover_root_rot_pos(data): - rot_vel = data[..., 0] - r_rot_ang = torch.zeros_like(rot_vel).to(data.device) - '''Get Y-axis rotation 
from rotation velocity''' - r_rot_ang[..., 1:] = rot_vel[..., :-1] - r_rot_ang = torch.cumsum(r_rot_ang, dim=-1) - - r_rot_quat = torch.zeros(data.shape[:-1] + (4,)).to(data.device) - r_rot_quat[..., 0] = torch.cos(r_rot_ang) - r_rot_quat[..., 2] = torch.sin(r_rot_ang) - - r_pos = torch.zeros(data.shape[:-1] + (3,)).to(data.device) - r_pos[..., 1:, [0, 2]] = data[..., :-1, 1:3] - '''Add Y-axis rotation to root position''' - r_pos = qrot(qinv(r_rot_quat), r_pos) - - r_pos = torch.cumsum(r_pos, dim=-2) - - r_pos[..., 1] = data[..., 3] - return r_rot_quat, r_pos - - -def recover_from_rot(data, joints_num, skeleton): - r_rot_quat, r_pos = recover_root_rot_pos(data) - - r_rot_cont6d = quaternion_to_cont6d(r_rot_quat) - - start_indx = 1 + 2 + 1 + (joints_num - 1) * 3 - end_indx = start_indx + (joints_num - 1) * 6 - cont6d_params = data[..., start_indx:end_indx] - # print(r_rot_cont6d.shape, cont6d_params.shape, r_pos.shape) - cont6d_params = torch.cat([r_rot_cont6d, cont6d_params], dim=-1) - cont6d_params = cont6d_params.view(-1, joints_num, 6) - - positions = skeleton.forward_kinematics_cont6d(cont6d_params, r_pos) - - return positions - - -def recover_from_ric(data, joints_num): - r_rot_quat, r_pos = recover_root_rot_pos(data) - positions = data[..., 4:(joints_num - 1) * 3 + 4] - positions = positions.view(positions.shape[:-1] + (-1, 3)) - - '''Add Y-axis rotation to local joints''' - positions = qrot(qinv(r_rot_quat[..., None, :]).expand(positions.shape[:-1] + (4,)), positions) - - '''Add root XZ to joints''' - positions[..., 0] += r_pos[..., 0:1] - positions[..., 2] += r_pos[..., 2:3] - - '''Concate root and joints''' - positions = torch.cat([r_pos.unsqueeze(-2), positions], dim=-2) - - return positions - \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/egs/datasets/audio/emotion/pre_align.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/egs/datasets/audio/emotion/pre_align.py deleted file mode 100644 index 3b625295a118845c01a3677004070714d11c162b..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/egs/datasets/audio/emotion/pre_align.py +++ /dev/null @@ -1,25 +0,0 @@ -import os - -from data_gen.tts.base_preprocess import BasePreprocessor -import glob -import re - -class EmoPreAlign(BasePreprocessor): - - def meta_data(self): - spks = ['0012', '0011', '0013', '0014', '0015', '0016', '0017', '0018', '0019', '0020'] - pattern = re.compile('[\t\n ]+') - for spk in spks: - for line in open(f"{self.raw_data_dir}/{spk}/{spk}.txt", 'r'): # 打开文件 - line = re.sub(pattern, ' ', line) - if line == ' ': continue - split_ = line.split(' ') - txt = ' '.join(split_[1: -2]) - item_name = split_[0] - emotion = split_[-2] - wav_fn = f'{self.raw_data_dir}/{spk}/{emotion}/{item_name}.wav' - yield item_name, wav_fn, txt, spk, emotion - - -if __name__ == "__main__": - EmoPreAlign().process() diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/dpm_solver/sampler.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/dpm_solver/sampler.py deleted file mode 100644 index 7d137b8cf36718c1c58faa09f9dd919e5fb2977b..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/dpm_solver/sampler.py +++ /dev/null @@ -1,87 +0,0 @@ -"""SAMPLING ONLY.""" -import torch - -from .dpm_solver import NoiseScheduleVP, model_wrapper, DPM_Solver - - -MODEL_TYPES = { - "eps": "noise", - "v": "v" -} - - -class DPMSolverSampler(object): - def __init__(self, 
model, **kwargs): - super().__init__() - self.model = model - to_torch = lambda x: x.clone().detach().to(torch.float32).to(model.device) - self.register_buffer('alphas_cumprod', to_torch(model.alphas_cumprod)) - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - - print(f'Data shape for DPM-Solver sampling is {size}, sampling steps {S}') - - device = self.model.betas.device - if x_T is None: - img = torch.randn(size, device=device) - else: - img = x_T - - ns = NoiseScheduleVP('discrete', alphas_cumprod=self.alphas_cumprod) - - model_fn = model_wrapper( - lambda x, t, c: self.model.apply_model(x, t, c), - ns, - model_type=MODEL_TYPES[self.model.parameterization], - guidance_type="classifier-free", - condition=conditioning, - unconditional_condition=unconditional_conditioning, - guidance_scale=unconditional_guidance_scale, - ) - - dpm_solver = DPM_Solver(model_fn, ns, predict_x0=True, thresholding=False) - x = dpm_solver.sample(img, steps=S, skip_type="time_uniform", method="multistep", order=2, lower_order_final=True) - - return x.to(device), None \ No newline at end of file diff --git a/spaces/AISuperheroes/01ST-CSV-Dataset-Analyzer/download.py b/spaces/AISuperheroes/01ST-CSV-Dataset-Analyzer/download.py deleted file mode 100644 index a9aa79830aa22d28dedf09d5994d6bb4494faa19..0000000000000000000000000000000000000000 --- a/spaces/AISuperheroes/01ST-CSV-Dataset-Analyzer/download.py +++ /dev/null @@ -1,139 +0,0 @@ -import streamlit as st -import pickle -import pandas as pd -import json -import base64 -import uuid -import re - -import importlib.util - - -def import_from_file(module_name: str, filepath: str): - """ - Imports a module from file. - Args: - module_name (str): Assigned to the module's __name__ parameter (does not - influence how the module is named outside of this function) - filepath (str): Path to the .py file - Returns: - The module - """ - spec = importlib.util.spec_from_file_location(module_name, filepath) - module = importlib.util.module_from_spec(spec) - spec.loader.exec_module(module) - return module - - -def notebook_header(text): - """ - Insert section header into a jinja file, formatted as notebook cell. - Leave 2 blank lines before the header. - """ - return f"""# # {text} -""" - - -def code_header(text): - """ - Insert section header into a jinja file, formatted as Python comment. - Leave 2 blank lines before the header. 
- """ - seperator_len = (75 - len(text)) / 2 - seperator_len_left = math.floor(seperator_len) - seperator_len_right = math.ceil(seperator_len) - return f"# {'-' * seperator_len_left} {text} {'-' * seperator_len_right}" - - -def to_notebook(code): - """Converts Python code to Jupyter notebook format.""" - notebook = jupytext.reads(code, fmt="py") - return jupytext.writes(notebook, fmt="ipynb") - - -def open_link(url, new_tab=True): - """Dirty hack to open a new web page with a streamlit button.""" - # From: https://discuss.streamlit.io/t/how-to-link-a-button-to-a-webpage/1661/3 - if new_tab: - js = f"window.open('{url}')" # New tab or window - else: - js = f"window.location.href = '{url}'" # Current tab - html = ''.format(js) - div = Div(text=html) - st.bokeh_chart(div) - - -def download_button(object_to_download, download_filename, button_text): - """ - Generates a link to download the given object_to_download. - From: https://discuss.streamlit.io/t/a-download-button-with-custom-css/4220 - Params: - ------ - object_to_download: The object to be downloaded. - download_filename (str): filename and extension of file. e.g. mydata.csv, - some_txt_output.txt download_link_text (str): Text to display for download - link. - button_text (str): Text to display on download button (e.g. 'click here to download file') - pickle_it (bool): If True, pickle file. - Returns: - ------- - (str): the anchor tag to download object_to_download - Examples: - -------- - download_link(your_df, 'YOUR_DF.csv', 'Click to download data!') - download_link(your_str, 'YOUR_STRING.txt', 'Click to download text!') - """ - - # if: - if isinstance(object_to_download, bytes): - pass - - elif isinstance(object_to_download, pd.DataFrame): - object_to_download = object_to_download.to_csv(index=False) - # Try JSON encode for everything else - else: - object_to_download = json.dumps(object_to_download) - - try: - # some strings <-> bytes conversions necessary here - b64 = base64.b64encode(object_to_download.encode()).decode() - except AttributeError as e: - b64 = base64.b64encode(object_to_download).decode() - - button_uuid = str(uuid.uuid4()).replace("-", "") - button_id = re.sub("\d+", "", button_uuid) - - custom_css = f""" - """ - - dl_link = ( - custom_css - + f'{button_text}

    ' - ) - - st.markdown(dl_link, unsafe_allow_html=True) diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_long_sleeved_shirt_256x192/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_long_sleeved_shirt_256x192/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/lm.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/lm.py deleted file mode 100644 index c8aad8f06797eef3293605056e1de14d07c56c2a..0000000000000000000000000000000000000000 --- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/lm.py +++ /dev/null @@ -1,527 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass -from functools import partial -import logging -import math -import typing as tp - -import torch -from torch import nn - -from ..utils import utils -from ..modules.streaming import StreamingModule, State -from ..modules.transformer import StreamingTransformer, create_norm_fn -from ..modules.conditioners import ( - ConditionFuser, - ClassifierFreeGuidanceDropout, - AttributeDropout, - ConditioningProvider, - ConditioningAttributes, - ConditionType, -) -from ..modules.codebooks_patterns import CodebooksPatternProvider -from ..modules.activations import get_activation_fn - - -logger = logging.getLogger(__name__) -ConditionTensors = tp.Dict[str, ConditionType] -CFGConditions = tp.Union[ConditionTensors, tp.Tuple[ConditionTensors, ConditionTensors]] - - -def get_init_fn(method: str, input_dim: int, init_depth: tp.Optional[int] = None): - """LM layer initialization. - Inspired from xlformers: https://github.com/fairinternal/xlformers - - Args: - method (str): Method name for init function. Valid options are: - 'gaussian', 'uniform'. - input_dim (int): Input dimension of the initialized module. - init_depth (Optional[int]): Optional init depth value used to rescale - the standard deviation if defined. - """ - # Compute std - std = 1 / math.sqrt(input_dim) - # Rescale with depth - if init_depth is not None: - std = std / math.sqrt(2 * init_depth) - - if method == 'gaussian': - return partial( - torch.nn.init.trunc_normal_, mean=0.0, std=std, a=-3 * std, b=3 * std - ) - elif method == 'uniform': - bound = math.sqrt(3) * std # ensure the standard deviation is `std` - return partial(torch.nn.init.uniform_, a=-bound, b=bound) - else: - raise ValueError("Unsupported layer initialization method") - - -def init_layer(m: nn.Module, - method: str, - init_depth: tp.Optional[int] = None, - zero_bias_init: bool = False): - """Wrapper around ``get_init_fn`` for proper initialization of LM modules. - - Args: - m (nn.Module): Module to initialize. - method (str): Method name for the init function. - init_depth (Optional[int]): Optional init depth value used to rescale - the standard deviation if defined. - zero_bias_init (bool): Whether to initialize the bias to 0 or not. 
- """ - if isinstance(m, nn.Linear): - init_fn = get_init_fn(method, m.in_features, init_depth=init_depth) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - if zero_bias_init and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Embedding): - init_fn = get_init_fn(method, m.embedding_dim, init_depth=None) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - - -class ScaledEmbedding(nn.Embedding): - """Boost learning rate for embeddings (with `scale`). - """ - def __init__(self, *args, lr=None, **kwargs): - super().__init__(*args, **kwargs) - self.lr = lr - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - return group - - -@dataclass -class LMOutput: - # The logits are already re-aligned with the input codes - # hence no extra shift is required, e.g. when computing CE - logits: torch.Tensor # [B, K, T, card] - mask: torch.Tensor # [B, K, T] - - -class LMModel(StreamingModule): - """Transformer-based language model on multiple streams of codes. - - Args: - pattern_provider (CodebooksPatternProvider): Pattern provider for codebook interleaving. - condition_provider (MusicConditioningProvider): Conditioning provider from metadata. - fuser (ConditionFuser): Fuser handling the fusing of conditions with language model input. - n_q (int): Number of parallel streams to model. - card (int): Cardinality, vocabulary size. - dim (int): Dimension of the transformer encoder. - num_heads (int): Number of heads for the transformer encoder. - hidden_scale (int): Scale for hidden feed forward dimension of the transformer encoder. - norm (str): Normalization method. - norm_first (bool): Use pre-norm instead of post-norm. - emb_lr (Optional[float]): Embedding-specific learning rate. - bias_proj (bool): Use bias for output projections. - weight_init (Optional[str]): Method for weight initialization. - depthwise_init (Optional[str]): Method for depthwise weight initialization. - zero_bias_init (bool): If true and bias in Linears, initialize bias to zeros. - cfg_dropout (float): Classifier-free guidance dropout. - cfg_coef (float): Classifier-free guidance coefficient. - attribute_dropout (dict): Attribute dropout probabilities. - two_step_cfg (bool): Whether to run classifier free-guidance with 2 distinct steps. - **kwargs: Additional parameters for the transformer encoder. 
- """ - def __init__(self, pattern_provider: CodebooksPatternProvider, condition_provider: ConditioningProvider, - fuser: ConditionFuser, n_q: int = 8, card: int = 1024, dim: int = 128, num_heads: int = 8, - hidden_scale: int = 4, norm: str = 'layer_norm', norm_first: bool = False, - emb_lr: tp.Optional[float] = None, bias_proj: bool = True, - weight_init: tp.Optional[str] = None, depthwise_init: tp.Optional[str] = None, - zero_bias_init: bool = False, cfg_dropout: float = 0, cfg_coef: float = 1.0, - attribute_dropout: tp.Dict[str, tp.Dict[str, float]] = {}, two_step_cfg: bool = False, - **kwargs): - super().__init__() - self.cfg_coef = cfg_coef - self.cfg_dropout = ClassifierFreeGuidanceDropout(p=cfg_dropout) - self.att_dropout = AttributeDropout(p=attribute_dropout) - self.condition_provider = condition_provider - self.fuser = fuser - self.card = card - embed_dim = self.card + 1 - self.n_q = n_q - self.dim = dim - self.pattern_provider = pattern_provider - self.two_step_cfg = two_step_cfg - self.emb = nn.ModuleList([ScaledEmbedding(embed_dim, dim, lr=emb_lr) for _ in range(n_q)]) - if 'activation' in kwargs: - kwargs['activation'] = get_activation_fn(kwargs['activation']) - self.transformer = StreamingTransformer( - d_model=dim, num_heads=num_heads, dim_feedforward=int(hidden_scale * dim), - norm=norm, norm_first=norm_first, **kwargs) - self.out_norm: tp.Optional[nn.Module] = None - if norm_first: - self.out_norm = create_norm_fn(norm, dim) - self.linears = nn.ModuleList([nn.Linear(dim, self.card, bias=bias_proj) for _ in range(n_q)]) - self._init_weights(weight_init, depthwise_init, zero_bias_init) - self._fsdp: tp.Optional[nn.Module] - self.__dict__['_fsdp'] = None - - def _init_weights(self, weight_init: tp.Optional[str], depthwise_init: tp.Optional[str], zero_bias_init: bool): - """Initialization of the transformer module weights. - - Args: - weight_init (Optional[str]): Weight initialization strategy. See ``get_init_fn`` for valid options. - depthwise_init (Optional[str]): Depwthwise initialization strategy. The following options are valid: - 'current' where the depth corresponds to the current layer index or 'global' where the total number - of layer is used as depth. If not set, no depthwise initialization strategy is used. - zero_bias_init (bool): Whether to initalize bias to zero or not. - """ - assert depthwise_init is None or depthwise_init in ['current', 'global'] - assert depthwise_init is None or weight_init is not None, \ - "If 'depthwise_init' is defined, a 'weight_init' method should be provided." 
- assert not zero_bias_init or weight_init is not None, \ - "If 'zero_bias_init', a 'weight_init' method should be provided" - - if weight_init is None: - return - - for emb_layer in self.emb: - init_layer(emb_layer, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - for layer_idx, tr_layer in enumerate(self.transformer.layers): - depth = None - if depthwise_init == 'current': - depth = layer_idx + 1 - elif depthwise_init == 'global': - depth = len(self.transformer.layers) - init_fn = partial(init_layer, method=weight_init, init_depth=depth, zero_bias_init=zero_bias_init) - tr_layer.apply(init_fn) - - for linear in self.linears: - init_layer(linear, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - @property - def special_token_id(self) -> int: - return self.card - - @property - def num_codebooks(self) -> int: - return self.n_q - - def forward(self, sequence: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> torch.Tensor: - """Apply language model on sequence and conditions. - Given a tensor of sequence of shape [B, K, S] with K the number of codebooks and - S the sequence steps, return the logits with shape [B, card, K, S]. - - Args: - indices (torch.Tensor): indices of the codes to model. - conditions (list[ConditioningAttributes]): conditionings to use when modeling - the given codes. Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning - tensors, see `conditions`. - Returns: - torch.Tensor: Logits. - """ - B, K, S = sequence.shape - assert K == self.num_codebooks, 'Sequence shape must match the specified number of codebooks' - input_ = sum([self.emb[k](sequence[:, k]) for k in range(K)]) - if condition_tensors is None: - assert not self._is_streaming, "Conditions tensors should be precomputed when streaming." - # apply dropout modules - conditions = self.cfg_dropout(conditions) - conditions = self.att_dropout(conditions) - tokenized = self.condition_provider.tokenize(conditions) - # encode conditions and fuse, both have a streaming cache to not recompute when generating. - condition_tensors = self.condition_provider(tokenized) - else: - assert not conditions, "Shouldn't pass both conditions and condition_tensors." - - input_, cross_attention_input = self.fuser(input_, condition_tensors) - - out = self.transformer(input_, cross_attention_src=cross_attention_input) - if self.out_norm: - out = self.out_norm(out) - logits = torch.stack([self.linears[k](out) for k in range(K)], dim=1) # [B, K, S, card] - - # remove the prefix from the model outputs - if len(self.fuser.fuse2cond['prepend']) > 0: - logits = logits[:, :, -S:] - - return logits # [B, K, S, card] - - def compute_predictions( - self, codes: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> LMOutput: - """Given an input tensor of codes [B, K, T] and list of conditions, runs the model - forward using the specified codes interleaving pattern. - - Args: - codes (torch.Tensor): Input codes of shape [B, K, T] with B the batch size, - K the number of codebooks and T the number of timesteps. - conditions (list[ConditioningAttributes]): conditionings to use when modeling - the given codes. 
Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning - tensors, see `conditions`. - Returns: - LMOutput: Language model outputs - logits (torch.Tensor) of shape [B, K, T, card] corresponding to the provided codes, - i.e. the first item corresponds to logits to predict the first code, meaning that - no additional shifting of codes and logits is required. - mask (torch.Tensor) of shape [B, K, T], mask over valid and invalid positions. - Given the specified interleaving strategies, parts of the logits and codes should - not be considered as valid predictions because of invalid context. - """ - B, K, T = codes.shape - codes = codes.contiguous() - # map codes [B, K, T] into pattern sequence [B, K, S] using special_token_id for masked tokens - pattern = self.pattern_provider.get_pattern(T) - sequence_codes, sequence_indexes, sequence_mask = pattern.build_pattern_sequence( - codes, self.special_token_id, keep_only_valid_steps=True - ) - # apply model on pattern sequence - model = self if self._fsdp is None else self._fsdp - logits = model(sequence_codes, conditions, condition_tensors) # [B, K, S, card] - # map back the logits on pattern sequence to logits on original codes: [B, K, S, card] -> [B, K, T, card] - # and provide the corresponding mask over invalid positions of tokens - logits = logits.permute(0, 3, 1, 2) # [B, card, K, S] - # note: we use nans as special token to make it obvious if we feed unexpected logits - logits, logits_indexes, logits_mask = pattern.revert_pattern_logits( - logits, float('nan'), keep_only_valid_steps=True - ) - logits = logits.permute(0, 2, 3, 1) # [B, K, T, card] - logits_mask = logits_mask[None, :, :].expand(B, -1, -1) # [K, T] -> [B, K, T] - return LMOutput(logits, logits_mask) - - def _sample_next_token(self, - sequence: torch.Tensor, - cfg_conditions: CFGConditions, - unconditional_state: State, - use_sampling: bool = False, - temp: float = 1.0, - top_k: int = 0, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None) -> torch.Tensor: - """Sample next token from the model given a sequence and a set of conditions. The model supports - multiple sampling strategies (greedy sampling, softmax, top-k, top-p...). - - Args: - sequence (torch.Tensor): Current sequence of shape [B, K, S] - with K corresponding to the number of codebooks and S the number of sequence steps. - S = 1 in streaming mode, except for the first step that contains a bigger prompt. - condition_tensors (Dict[str, ConditionType): Set of conditions. If CFG is used, - should be twice the batch size, being the concatenation of the conditions + null conditions. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - cfg_coef (float): classifier free guidance coefficient - Returns: - next_token (torch.Tensor): Next token tensor of shape [B, K, 1]. 
-        """
-        B = sequence.shape[0]
-        cfg_coef = self.cfg_coef if cfg_coef is None else cfg_coef
-        model = self if self._fsdp is None else self._fsdp
-        if self.two_step_cfg and cfg_conditions != {}:
-            assert isinstance(cfg_conditions, tuple)
-            condition_tensors, null_condition_tensors = cfg_conditions
-            cond_logits = model(sequence, conditions=[], condition_tensors=condition_tensors)
-            state = self.get_streaming_state()
-            self.set_streaming_state(unconditional_state)
-            uncond_logits = model(sequence, conditions=[], condition_tensors=null_condition_tensors)
-            unconditional_state.update(self.get_streaming_state())
-            self.set_streaming_state(state)
-            logits = uncond_logits + (cond_logits - uncond_logits) * self.cfg_coef
-        else:
-            assert isinstance(cfg_conditions, dict)
-            condition_tensors = cfg_conditions
-            if condition_tensors:
-                # Preparing for CFG, predicting both conditional and unconditional logits.
-                sequence = torch.cat([sequence, sequence], dim=0)
-            all_logits = model(
-                sequence,
-                conditions=[], condition_tensors=condition_tensors)
-            if condition_tensors:
-                cond_logits, uncond_logits = all_logits.split(B, dim=0)  # [B, K, T, card]
-                logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef
-            else:
-                logits = all_logits
-
-        logits = logits.permute(0, 1, 3, 2)  # [B, K, card, T]
-        logits = logits[..., -1]  # [B x K x card]
-
-        # Apply softmax for sampling if temp > 0. Else, do greedy sampling to avoid zero division error.
-        if use_sampling and temp > 0.0:
-            probs = torch.softmax(logits / temp, dim=-1)
-            if top_p > 0.0:
-                next_token = utils.sample_top_p(probs, p=top_p)
-            elif top_k > 0:
-                next_token = utils.sample_top_k(probs, k=top_k)
-            else:
-                next_token = utils.multinomial(probs, num_samples=1)
-        else:
-            next_token = torch.argmax(logits, dim=-1, keepdim=True)
-
-        return next_token
-
-    @torch.no_grad()
-    def generate(self,
-                 prompt: tp.Optional[torch.Tensor] = None,
-                 conditions: tp.List[ConditioningAttributes] = [],
-                 num_samples: tp.Optional[int] = None,
-                 max_gen_len: int = 256,
-                 use_sampling: bool = True,
-                 temp: float = 1.0,
-                 top_k: int = 250,
-                 top_p: float = 0.0,
-                 cfg_coef: tp.Optional[float] = None,
-                 two_step_cfg: bool = False,
-                 remove_prompts: bool = False,
-                 check: bool = False,
-                 callback: tp.Optional[tp.Callable[[int, int], None]] = None) -> torch.Tensor:
-        """Generate tokens sampling from the model given a prompt or unconditionally. Generation can
-        be performed in a greedy fashion or using sampling with top K and top P strategies.
-
-        Args:
-            prompt (Optional[torch.Tensor]): Prompt tokens of shape [B, K, T].
-            conditions (list[ConditioningAttributes]): List of conditions to use for generation, or an empty list.
-            num_samples (int or None): Number of samples to generate when no prompt and no conditions are given.
-            max_gen_len (int): Maximum generation length.
-            use_sampling (bool): Whether to use a sampling strategy or not.
-            temp (float): Sampling temperature.
-            top_k (int): K for "top-k" sampling.
-            top_p (float): P for "top-p" sampling.
-            remove_prompts (bool): Whether to remove prompts from generation or not.
-        Returns:
-            torch.Tensor: Generated tokens.
-        """
-        assert not self.training, "generation shouldn't be used in training mode."
-        first_param = next(iter(self.parameters()))
-        device = first_param.device
-
-        # Checking all input shapes are consistent.
-        possible_num_samples = []
-        if num_samples is not None:
-            possible_num_samples.append(num_samples)
-        elif prompt is not None:
-            possible_num_samples.append(prompt.shape[0])
-        elif conditions:
-            possible_num_samples.append(len(conditions))
-        else:
-            possible_num_samples.append(1)
-        assert all(x == possible_num_samples[0] for x in possible_num_samples), "Inconsistent input shapes"
-        num_samples = possible_num_samples[0]
-
-        # below we create set of conditions: one conditional and one unconditional
-        # to do that we merge the regular condition together with the null condition
-        # we then do 1 forward pass instead of 2.
-        # the reason for that is two-fold:
-        # 1. it is about x2 faster than doing 2 forward passes
-        # 2. avoid the streaming API treating the 2 passes as part of different time steps
-        # We also support doing two different passes, in particular to ensure that
-        # the padding structure is exactly the same between train and test.
-        # With a batch size of 1, this can be slower though.
-        cfg_conditions: CFGConditions
-        two_step_cfg = self.two_step_cfg if two_step_cfg is None else two_step_cfg
-        if conditions:
-            null_conditions = ClassifierFreeGuidanceDropout(p=1.0)(conditions)
-            if two_step_cfg:
-                cfg_conditions = (
-                    self.condition_provider(self.condition_provider.tokenize(conditions)),
-                    self.condition_provider(self.condition_provider.tokenize(null_conditions)),
-                )
-            else:
-                conditions = conditions + null_conditions
-                tokenized = self.condition_provider.tokenize(conditions)
-                cfg_conditions = self.condition_provider(tokenized)
-        else:
-            cfg_conditions = {}
-
-        if prompt is None:
-            assert num_samples > 0
-            prompt = torch.zeros((num_samples, self.num_codebooks, 0), dtype=torch.long, device=device)
-
-        B, K, T = prompt.shape
-        start_offset = T
-        assert start_offset < max_gen_len
-
-        pattern = self.pattern_provider.get_pattern(max_gen_len)
-        # this token is used as default value for codes that are not generated yet
-        unknown_token = -1
-
-        # we generate codes up to the max_gen_len that will be mapped to the pattern sequence
-        gen_codes = torch.full((B, K, max_gen_len), unknown_token, dtype=torch.long, device=device)
-        # filling the gen_codes with the prompt if needed
-        gen_codes[..., :start_offset] = prompt
-        # create the gen_sequence with proper interleaving from the pattern: [B, K, S]
-        gen_sequence, indexes, mask = pattern.build_pattern_sequence(gen_codes, self.special_token_id)
-        # retrieve the start_offset in the sequence:
-        # it is the first sequence step that contains the `start_offset` timestep
-        start_offset_sequence = pattern.get_first_step_with_timesteps(start_offset)
-        assert start_offset_sequence is not None
-
-        with self.streaming():
-            unconditional_state = self.get_streaming_state()
-            prev_offset = 0
-            gen_sequence_len = gen_sequence.shape[-1]  # gen_sequence shape is [B, K, S]
-            for offset in range(start_offset_sequence, gen_sequence_len):
-                # get current sequence (note that the streaming API is providing the caching over previous offsets)
-                curr_sequence = gen_sequence[..., prev_offset:offset]
-                curr_mask = mask[None, ..., prev_offset:offset].expand(B, -1, -1)
-                if check:
-                    # check coherence between mask and sequence
-                    assert (curr_sequence == torch.where(curr_mask, curr_sequence, self.special_token_id)).all()
-                    # should never happen as gen_sequence is filled progressively
-                    assert not (curr_sequence == unknown_token).any()
-                # sample next token from the model, next token shape is [B, K, 1]
-                next_token = self._sample_next_token(
-                    curr_sequence, cfg_conditions,
unconditional_state, use_sampling, temp, top_k, top_p, - cfg_coef=cfg_coef) - # ensure the tokens that should be masked are properly set to special_token_id - # as the model never output special_token_id - valid_mask = mask[..., offset:offset+1].expand(B, -1, -1) - next_token[~valid_mask] = self.special_token_id - # ensure we don't overwrite prompt tokens, we only write over unknown tokens - # (then mask tokens should be left as is as well, which is correct) - gen_sequence[..., offset:offset+1] = torch.where( - gen_sequence[..., offset:offset+1] == unknown_token, - next_token, gen_sequence[..., offset:offset+1] - ) - prev_offset = offset - if callback is not None: - callback(1 + offset - start_offset_sequence, gen_sequence_len - start_offset_sequence) - unconditional_state.clear() - - # ensure sequence has been entirely filled - assert not (gen_sequence == unknown_token).any() - # ensure gen_sequence pattern and mask are matching - # which means the gen_sequence is valid according to the pattern - assert ( - gen_sequence == torch.where(mask[None, ...].expand(B, -1, -1), gen_sequence, self.special_token_id) - ).all() - # get back the codes, trimming the prompt if needed and cutting potentially incomplete timesteps - out_codes, out_indexes, out_mask = pattern.revert_pattern_sequence(gen_sequence, special_token=unknown_token) - - # sanity checks over the returned codes and corresponding masks - assert (out_codes[..., :max_gen_len] != unknown_token).all() - assert (out_mask[..., :max_gen_len] == 1).all() - - out_start_offset = start_offset if remove_prompts else 0 - out_codes = out_codes[..., out_start_offset:max_gen_len] - - # ensure the returned codes are all valid - assert (out_codes >= 0).all() and (out_codes <= self.card).all() - return out_codes diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/chart/GetChartDataset.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/chart/GetChartDataset.js deleted file mode 100644 index 1f8d3d5326788383d1edd115395c2a24f71d8d44..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/chart/GetChartDataset.js +++ /dev/null @@ -1,21 +0,0 @@ -var GetChartDataset = function (datasetIndex) { - if (this.chart === undefined) { - return undefined; - } - - if (typeof (datasetIndex) === 'string') { - var datasets = this.chart.data.datasets, dataset; - for (var i = 0, cnt = datasets.length; i < cnt; i++) { - dataset = datasets[i]; - if (dataset.label === datasetIndex) { - return dataset; - } - } - } else { - return this.chart.data.datasets[datasetIndex]; - } - - return undefined; -} - -export default GetChartDataset; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/ResetGrid.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/ResetGrid.js deleted file mode 100644 index 37c5edb830311e5f0f4c07ca1c0584d7b738a32e..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/ResetGrid.js +++ /dev/null @@ -1,77 +0,0 @@ -import ArrayFill from '../../../plugins/utils/array/Fill.js'; - -const GetValue = Phaser.Utils.Objects.GetValue; - -var ResetGrid = function (columnCount, rowCount, columnProportions, rowProportions, space) { - if (columnProportions === undefined) { - columnProportions = 0; - } - if (rowProportions === undefined) { - rowProportions = 0; - } - - this.columnCount = columnCount; - 
this.rowCount = rowCount; - this.gridCount = columnCount * rowCount; - - // children - if (this.sizerChildren === undefined) { - this.sizerChildren = []; - } else { - this.removeAll(); - } - this.sizerChildren.length = columnCount * rowCount; - ArrayFill(this.sizerChildren, null); - - // proportions - this.columnProportions = []; - this.columnProportions.length = columnCount; - if (typeof (columnProportions) === 'number') { - ArrayFill(this.columnProportions, columnProportions); - } else { - for (var i = 0; i < columnCount; i++) { - this.columnProportions[i] = columnProportions[i] || 0; - } - } - this.rowProportions = []; - this.rowProportions.length = rowCount; - if (typeof (rowProportions) === 'number') { - ArrayFill(this.rowProportions, rowProportions); - } else { - for (var i = 0; i < rowCount; i++) { - this.rowProportions[i] = rowProportions[i] || 0; - } - } - - // width & height - this.columnWidth = []; - this.columnWidth.length = columnCount; - this.rowHeight = []; - this.rowHeight.length = rowCount; - - // space - this.space.column = []; - this.space.column.length = columnCount - 1; - var columnSpace = GetValue(space, 'column', 0); - if (typeof (columnSpace) === 'number') { - ArrayFill(this.space.column, columnSpace); - } else { - for (var i = 0, cnt = this.space.column.length; i < cnt; i++) { - this.space.column[i] = columnSpace[i] || 0; - } - } - this.space.row = []; - this.space.row.length = rowCount - 1; - var rowSpace = GetValue(space, 'row', 0); - if (typeof (rowSpace) === 'number') { - ArrayFill(this.space.row, rowSpace); - } else { - for (var i = 0, cnt = this.space.row.length; i < cnt; i++) { - this.space.row[i] = rowSpace[i] || 0; - } - } - - return this; -} - -export default ResetGrid; \ No newline at end of file diff --git a/spaces/Aloento/9Nine-PITS/commons.py b/spaces/Aloento/9Nine-PITS/commons.py deleted file mode 100644 index cb0518d7c9940093f378cc45c71a1dadfa74a7a5..0000000000000000000000000000000000000000 --- a/spaces/Aloento/9Nine-PITS/commons.py +++ /dev/null @@ -1,192 +0,0 @@ -# from https://github.com/jaywalnut310/vits -import math - -import torch -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def intersperse_with_language_id(text, lang, item): - n = len(text) - _text = [item] * (2 * n + 1) - _lang = [None] * (2 * n + 1) - _text[1::2] = text - _lang[1::2] = lang - _lang[::2] = lang + [lang[-1]] - - return _text, _lang - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) - * ids_str_max).to(dtype=torch.long) - ids_str = torch.max(torch.zeros(ids_str.size()).to(ids_str.device), ids_str).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def rand_slice_segments_for_cat(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = torch.rand([b // 2]).to(device=x.device) - ids_str = (torch.cat([ids_str, ids_str], dim=0) - * ids_str_max).to(dtype=torch.long) - ids_str = torch.max(torch.zeros(ids_str.size()).to(ids_str.device), ids_str).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / (num_timescales - 1) - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d( - length, channels, min_timescale, max_timescale - ) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d( - length, channels, min_timescale, max_timescale - ) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < 
length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/Ame42/rwms/power_BI_module.py b/spaces/Ame42/rwms/power_BI_module.py deleted file mode 100644 index ed0645911849d27e262c1c6aca6b800b0b1d3aab..0000000000000000000000000000000000000000 --- a/spaces/Ame42/rwms/power_BI_module.py +++ /dev/null @@ -1,19 +0,0 @@ -# 'dataset' holds the input data for this script -import pandas -from datastore import get_22_data, split_join - -date_time_col = "Date Time (GMT+01:00)" -time_col = "Time (GMT+01:00)" -dur_col = "Daylight duration (SEC)" -id_col = "index" - - -data = get_22_data() -data.drop(axis=1, columns=["THP BLIND (PSI)"], inplace=True) -data.dropna(axis=0, inplace=True, how="any") -data.reset_index(inplace=True) -data.drop(axis=1, columns="level_0", inplace=True) -dummies = pandas.get_dummies(data["Well index"]) -data = pandas.concat([data, dummies], axis=1).reindex(data.index) -data.drop(columns=["Well index", "index"], axis=1, inplace=True) -# data.to_csv("output/data.csv", index_label="id") diff --git a/spaces/Andy1621/uniformer_image_detection/configs/ghm/retinanet_ghm_x101_32x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/ghm/retinanet_ghm_x101_32x4d_fpn_1x_coco.py deleted file mode 100644 index a89fc1389ce0f1f9712b4b5d684e632aaee25ce8..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/ghm/retinanet_ghm_x101_32x4d_fpn_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './retinanet_ghm_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/resnest/mask_rcnn_s50_fpn_syncbn-backbone+head_mstrain_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/resnest/mask_rcnn_s50_fpn_syncbn-backbone+head_mstrain_1x_coco.py deleted file mode 100644 index 29f21fd040614425e8b36415b660823ad6bd38e1..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/resnest/mask_rcnn_s50_fpn_syncbn-backbone+head_mstrain_1x_coco.py +++ /dev/null @@ -1,64 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py' -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - pretrained='open-mmlab://resnest50', - backbone=dict( - type='ResNeSt', - 
stem_channels=64, - depth=50, - radix=2, - reduction_factor=4, - avg_down_stride=True, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch'), - roi_head=dict( - bbox_head=dict( - type='Shared4Conv1FCBBoxHead', - conv_out_channels=256, - norm_cfg=norm_cfg), - mask_head=dict(norm_cfg=norm_cfg))) -# # use ResNeSt img_norm -img_norm_cfg = dict( - mean=[123.68, 116.779, 103.939], std=[58.393, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='LoadAnnotations', - with_bbox=True, - with_mask=True, - poly2mask=False), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_160k_ade20k.py deleted file mode 100644 index 0f22d0fb6362252ac02f3f152a42997c68b90343..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './danet_r50-d8_512x512_160k_ade20k.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Anew1007/extras/Dockerfile b/spaces/Anew1007/extras/Dockerfile deleted file mode 100644 index f45cdfda0fab5fe7680df646ea7caf47d45e4352..0000000000000000000000000000000000000000 --- a/spaces/Anew1007/extras/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM python:3.11 - -WORKDIR /app - -COPY requirements-complete.txt . -RUN pip install -r requirements-complete.txt - -RUN mkdir /.cache && chmod -R 777 /.cache -RUN mkdir .chroma && chmod -R 777 .chroma - -COPY . . 
- - -RUN chmod -R 777 /app - -RUN --mount=type=secret,id=password,mode=0444,required=true \ - cat /run/secrets/password > /test - -EXPOSE 7860 - -CMD ["python", "server.py", "--cpu", "--enable-modules=caption,summarize,classify,silero-tts,edge-tts,chromadb"] diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/js/save_files.js b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/js/save_files.js deleted file mode 100644 index bdb0e3342146c374a63046df41d988841c98e3ec..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/js/save_files.js +++ /dev/null @@ -1,40 +0,0 @@ -// Functions for downloading JSON files -function getCurrentTimestamp() { - const now = new Date(); - const timezoneOffset = now.getTimezoneOffset() * 60000; // Convert to milliseconds - const localTime = new Date(now.getTime() - timezoneOffset); - const formattedTimestamp = localTime.toISOString().replace(/[-:]/g, "").slice(0, 15); - return formattedTimestamp; -} - -function saveFile(contents, filename) { - const element = document.createElement("a"); - element.setAttribute("href", "data:text/plain;charset=utf-8," + encodeURIComponent(contents)); - element.setAttribute("download", filename); - element.style.display = "none"; - document.body.appendChild(element); - element.click(); - document.body.removeChild(element); -} - -function saveHistory(history, character, mode) { - let path = null; - - if (["chat", "chat-instruct"].includes(mode) && character && character.trim() !== "") { - path = `history_${character}_${getCurrentTimestamp()}.json`; - } else { - try { - path = `history_${mode}_${getCurrentTimestamp()}.json`; - } catch (error) { - path = `history_${getCurrentTimestamp()}.json`; - } - } - saveFile(history, path); -} - -function saveSession(session) { - let path = null; - - path = `session_${getCurrentTimestamp()}.json`; - saveFile(session, path); -} diff --git a/spaces/Apex-X/Tm/app.py b/spaces/Apex-X/Tm/app.py deleted file mode 100644 index ac81150d2a4acd6d7124fd9c15115ab12892b61a..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/Tm/app.py +++ /dev/null @@ -1,69 +0,0 @@ -# -* coding:UTF-8 -* -# !/usr/bin/env python -import numpy as np -import gradio as gr -import roop.globals -from roop.core import ( - start, - decode_execution_providers, - suggest_max_memory, - suggest_execution_threads, -) -from roop.processors.frame.core import get_frame_processors_modules -from roop.utilities import normalize_output_path -import os -from PIL import Image - - -def swap_face(source_file, target_file): - - source_path = "input.jpg" - target_path = "target.jpg" - - source_image = Image.fromarray(source_file) - source_image.save(source_path) - target_image = Image.fromarray(target_file) - target_image.save(target_path) - - print("source_path: ", source_path) - print("target_path: ", target_path) - - roop.globals.source_path = source_path - roop.globals.target_path = target_path - output_path = "output.jpg" - roop.globals.output_path = normalize_output_path( - roop.globals.source_path, roop.globals.target_path, output_path - ) - roop.globals.frame_processors = ["face_swapper"] - roop.globals.headless = True - roop.globals.keep_fps = True - roop.globals.keep_audio = True - roop.globals.keep_frames = False - roop.globals.many_faces = False - roop.globals.video_encoder = "libx264" - roop.globals.video_quality = 18 - roop.globals.max_memory = suggest_max_memory() - roop.globals.execution_providers = decode_execution_providers(["cpu"]) - 
roop.globals.execution_threads = suggest_execution_threads() - - print( - "start process", - roop.globals.source_path, - roop.globals.target_path, - roop.globals.output_path, - ) - - for frame_processor in get_frame_processors_modules( - roop.globals.frame_processors - ): - if not frame_processor.pre_check(): - return - - start() - return output_path - - -app = gr.Interface( - fn=swap_face, inputs=[gr.Image(), gr.Image()], outputs="image" -) -app.launch() diff --git a/spaces/Arnx/MusicGenXvAKN/tests/modules/test_lstm.py b/spaces/Arnx/MusicGenXvAKN/tests/modules/test_lstm.py deleted file mode 100644 index 1248964c8191e19f27661f0974bef9cc967eb015..0000000000000000000000000000000000000000 --- a/spaces/Arnx/MusicGenXvAKN/tests/modules/test_lstm.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import random -import torch - -from audiocraft.modules.lstm import StreamableLSTM - - -class TestStreamableLSTM: - - def test_lstm(self): - B, C, T = 4, 2, random.randint(1, 100) - - lstm = StreamableLSTM(C, 3, skip=False) - x = torch.randn(B, C, T) - y = lstm(x) - - print(y.shape) - assert y.shape == torch.Size([B, C, T]) - - def test_lstm_skip(self): - B, C, T = 4, 2, random.randint(1, 100) - - lstm = StreamableLSTM(C, 3, skip=True) - x = torch.randn(B, C, T) - y = lstm(x) - - assert y.shape == torch.Size([B, C, T]) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_palettes.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_palettes.py deleted file mode 100644 index 3c748d33e45bfcdc690ceee490cbb50b516cd2b3..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_palettes.py +++ /dev/null @@ -1,309 +0,0 @@ -from .palette import Palette - - -# Taken from https://en.wikipedia.org/wiki/ANSI_escape_code (Windows 10 column) -WINDOWS_PALETTE = Palette( - [ - (12, 12, 12), - (197, 15, 31), - (19, 161, 14), - (193, 156, 0), - (0, 55, 218), - (136, 23, 152), - (58, 150, 221), - (204, 204, 204), - (118, 118, 118), - (231, 72, 86), - (22, 198, 12), - (249, 241, 165), - (59, 120, 255), - (180, 0, 158), - (97, 214, 214), - (242, 242, 242), - ] -) - -# # The standard ansi colors (including bright variants) -STANDARD_PALETTE = Palette( - [ - (0, 0, 0), - (170, 0, 0), - (0, 170, 0), - (170, 85, 0), - (0, 0, 170), - (170, 0, 170), - (0, 170, 170), - (170, 170, 170), - (85, 85, 85), - (255, 85, 85), - (85, 255, 85), - (255, 255, 85), - (85, 85, 255), - (255, 85, 255), - (85, 255, 255), - (255, 255, 255), - ] -) - - -# The 256 color palette -EIGHT_BIT_PALETTE = Palette( - [ - (0, 0, 0), - (128, 0, 0), - (0, 128, 0), - (128, 128, 0), - (0, 0, 128), - (128, 0, 128), - (0, 128, 128), - (192, 192, 192), - (128, 128, 128), - (255, 0, 0), - (0, 255, 0), - (255, 255, 0), - (0, 0, 255), - (255, 0, 255), - (0, 255, 255), - (255, 255, 255), - (0, 0, 0), - (0, 0, 95), - (0, 0, 135), - (0, 0, 175), - (0, 0, 215), - (0, 0, 255), - (0, 95, 0), - (0, 95, 95), - (0, 95, 135), - (0, 95, 175), - (0, 95, 215), - (0, 95, 255), - (0, 135, 0), - (0, 135, 95), - (0, 135, 135), - (0, 135, 175), - (0, 135, 215), - (0, 135, 255), - (0, 175, 0), - (0, 175, 95), - (0, 175, 135), - (0, 175, 175), - (0, 175, 215), - (0, 175, 255), - (0, 215, 0), - (0, 215, 95), - (0, 215, 135), - (0, 
215, 175), - (0, 215, 215), - (0, 215, 255), - (0, 255, 0), - (0, 255, 95), - (0, 255, 135), - (0, 255, 175), - (0, 255, 215), - (0, 255, 255), - (95, 0, 0), - (95, 0, 95), - (95, 0, 135), - (95, 0, 175), - (95, 0, 215), - (95, 0, 255), - (95, 95, 0), - (95, 95, 95), - (95, 95, 135), - (95, 95, 175), - (95, 95, 215), - (95, 95, 255), - (95, 135, 0), - (95, 135, 95), - (95, 135, 135), - (95, 135, 175), - (95, 135, 215), - (95, 135, 255), - (95, 175, 0), - (95, 175, 95), - (95, 175, 135), - (95, 175, 175), - (95, 175, 215), - (95, 175, 255), - (95, 215, 0), - (95, 215, 95), - (95, 215, 135), - (95, 215, 175), - (95, 215, 215), - (95, 215, 255), - (95, 255, 0), - (95, 255, 95), - (95, 255, 135), - (95, 255, 175), - (95, 255, 215), - (95, 255, 255), - (135, 0, 0), - (135, 0, 95), - (135, 0, 135), - (135, 0, 175), - (135, 0, 215), - (135, 0, 255), - (135, 95, 0), - (135, 95, 95), - (135, 95, 135), - (135, 95, 175), - (135, 95, 215), - (135, 95, 255), - (135, 135, 0), - (135, 135, 95), - (135, 135, 135), - (135, 135, 175), - (135, 135, 215), - (135, 135, 255), - (135, 175, 0), - (135, 175, 95), - (135, 175, 135), - (135, 175, 175), - (135, 175, 215), - (135, 175, 255), - (135, 215, 0), - (135, 215, 95), - (135, 215, 135), - (135, 215, 175), - (135, 215, 215), - (135, 215, 255), - (135, 255, 0), - (135, 255, 95), - (135, 255, 135), - (135, 255, 175), - (135, 255, 215), - (135, 255, 255), - (175, 0, 0), - (175, 0, 95), - (175, 0, 135), - (175, 0, 175), - (175, 0, 215), - (175, 0, 255), - (175, 95, 0), - (175, 95, 95), - (175, 95, 135), - (175, 95, 175), - (175, 95, 215), - (175, 95, 255), - (175, 135, 0), - (175, 135, 95), - (175, 135, 135), - (175, 135, 175), - (175, 135, 215), - (175, 135, 255), - (175, 175, 0), - (175, 175, 95), - (175, 175, 135), - (175, 175, 175), - (175, 175, 215), - (175, 175, 255), - (175, 215, 0), - (175, 215, 95), - (175, 215, 135), - (175, 215, 175), - (175, 215, 215), - (175, 215, 255), - (175, 255, 0), - (175, 255, 95), - (175, 255, 135), - (175, 255, 175), - (175, 255, 215), - (175, 255, 255), - (215, 0, 0), - (215, 0, 95), - (215, 0, 135), - (215, 0, 175), - (215, 0, 215), - (215, 0, 255), - (215, 95, 0), - (215, 95, 95), - (215, 95, 135), - (215, 95, 175), - (215, 95, 215), - (215, 95, 255), - (215, 135, 0), - (215, 135, 95), - (215, 135, 135), - (215, 135, 175), - (215, 135, 215), - (215, 135, 255), - (215, 175, 0), - (215, 175, 95), - (215, 175, 135), - (215, 175, 175), - (215, 175, 215), - (215, 175, 255), - (215, 215, 0), - (215, 215, 95), - (215, 215, 135), - (215, 215, 175), - (215, 215, 215), - (215, 215, 255), - (215, 255, 0), - (215, 255, 95), - (215, 255, 135), - (215, 255, 175), - (215, 255, 215), - (215, 255, 255), - (255, 0, 0), - (255, 0, 95), - (255, 0, 135), - (255, 0, 175), - (255, 0, 215), - (255, 0, 255), - (255, 95, 0), - (255, 95, 95), - (255, 95, 135), - (255, 95, 175), - (255, 95, 215), - (255, 95, 255), - (255, 135, 0), - (255, 135, 95), - (255, 135, 135), - (255, 135, 175), - (255, 135, 215), - (255, 135, 255), - (255, 175, 0), - (255, 175, 95), - (255, 175, 135), - (255, 175, 175), - (255, 175, 215), - (255, 175, 255), - (255, 215, 0), - (255, 215, 95), - (255, 215, 135), - (255, 215, 175), - (255, 215, 215), - (255, 215, 255), - (255, 255, 0), - (255, 255, 95), - (255, 255, 135), - (255, 255, 175), - (255, 255, 215), - (255, 255, 255), - (8, 8, 8), - (18, 18, 18), - (28, 28, 28), - (38, 38, 38), - (48, 48, 48), - (58, 58, 58), - (68, 68, 68), - (78, 78, 78), - (88, 88, 88), - (98, 98, 98), - (108, 108, 108), - (118, 118, 118), - (128, 
128, 128), - (138, 138, 138), - (148, 148, 148), - (158, 158, 158), - (168, 168, 168), - (178, 178, 178), - (188, 188, 188), - (198, 198, 198), - (208, 208, 208), - (218, 218, 218), - (228, 228, 228), - (238, 238, 238), - ] -) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/color.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/color.py deleted file mode 100644 index dfe455937c86b5b7cc83f5506ae0f7010bece1b1..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/color.py +++ /dev/null @@ -1,622 +0,0 @@ -import platform -import re -from colorsys import rgb_to_hls -from enum import IntEnum -from functools import lru_cache -from typing import TYPE_CHECKING, NamedTuple, Optional, Tuple - -from ._palettes import EIGHT_BIT_PALETTE, STANDARD_PALETTE, WINDOWS_PALETTE -from .color_triplet import ColorTriplet -from .repr import Result, rich_repr -from .terminal_theme import DEFAULT_TERMINAL_THEME - -if TYPE_CHECKING: # pragma: no cover - from .terminal_theme import TerminalTheme - from .text import Text - - -WINDOWS = platform.system() == "Windows" - - -class ColorSystem(IntEnum): - """One of the 3 color system supported by terminals.""" - - STANDARD = 1 - EIGHT_BIT = 2 - TRUECOLOR = 3 - WINDOWS = 4 - - def __repr__(self) -> str: - return f"ColorSystem.{self.name}" - - def __str__(self) -> str: - return repr(self) - - -class ColorType(IntEnum): - """Type of color stored in Color class.""" - - DEFAULT = 0 - STANDARD = 1 - EIGHT_BIT = 2 - TRUECOLOR = 3 - WINDOWS = 4 - - def __repr__(self) -> str: - return f"ColorType.{self.name}" - - -ANSI_COLOR_NAMES = { - "black": 0, - "red": 1, - "green": 2, - "yellow": 3, - "blue": 4, - "magenta": 5, - "cyan": 6, - "white": 7, - "bright_black": 8, - "bright_red": 9, - "bright_green": 10, - "bright_yellow": 11, - "bright_blue": 12, - "bright_magenta": 13, - "bright_cyan": 14, - "bright_white": 15, - "grey0": 16, - "gray0": 16, - "navy_blue": 17, - "dark_blue": 18, - "blue3": 20, - "blue1": 21, - "dark_green": 22, - "deep_sky_blue4": 25, - "dodger_blue3": 26, - "dodger_blue2": 27, - "green4": 28, - "spring_green4": 29, - "turquoise4": 30, - "deep_sky_blue3": 32, - "dodger_blue1": 33, - "green3": 40, - "spring_green3": 41, - "dark_cyan": 36, - "light_sea_green": 37, - "deep_sky_blue2": 38, - "deep_sky_blue1": 39, - "spring_green2": 47, - "cyan3": 43, - "dark_turquoise": 44, - "turquoise2": 45, - "green1": 46, - "spring_green1": 48, - "medium_spring_green": 49, - "cyan2": 50, - "cyan1": 51, - "dark_red": 88, - "deep_pink4": 125, - "purple4": 55, - "purple3": 56, - "blue_violet": 57, - "orange4": 94, - "grey37": 59, - "gray37": 59, - "medium_purple4": 60, - "slate_blue3": 62, - "royal_blue1": 63, - "chartreuse4": 64, - "dark_sea_green4": 71, - "pale_turquoise4": 66, - "steel_blue": 67, - "steel_blue3": 68, - "cornflower_blue": 69, - "chartreuse3": 76, - "cadet_blue": 73, - "sky_blue3": 74, - "steel_blue1": 81, - "pale_green3": 114, - "sea_green3": 78, - "aquamarine3": 79, - "medium_turquoise": 80, - "chartreuse2": 112, - "sea_green2": 83, - "sea_green1": 85, - "aquamarine1": 122, - "dark_slate_gray2": 87, - "dark_magenta": 91, - "dark_violet": 128, - "purple": 129, - "light_pink4": 95, - "plum4": 96, - "medium_purple3": 98, - "slate_blue1": 99, - "yellow4": 106, - "wheat4": 101, - "grey53": 102, - "gray53": 102, - "light_slate_grey": 103, - "light_slate_gray": 103, - "medium_purple": 104, 
- "light_slate_blue": 105, - "dark_olive_green3": 149, - "dark_sea_green": 108, - "light_sky_blue3": 110, - "sky_blue2": 111, - "dark_sea_green3": 150, - "dark_slate_gray3": 116, - "sky_blue1": 117, - "chartreuse1": 118, - "light_green": 120, - "pale_green1": 156, - "dark_slate_gray1": 123, - "red3": 160, - "medium_violet_red": 126, - "magenta3": 164, - "dark_orange3": 166, - "indian_red": 167, - "hot_pink3": 168, - "medium_orchid3": 133, - "medium_orchid": 134, - "medium_purple2": 140, - "dark_goldenrod": 136, - "light_salmon3": 173, - "rosy_brown": 138, - "grey63": 139, - "gray63": 139, - "medium_purple1": 141, - "gold3": 178, - "dark_khaki": 143, - "navajo_white3": 144, - "grey69": 145, - "gray69": 145, - "light_steel_blue3": 146, - "light_steel_blue": 147, - "yellow3": 184, - "dark_sea_green2": 157, - "light_cyan3": 152, - "light_sky_blue1": 153, - "green_yellow": 154, - "dark_olive_green2": 155, - "dark_sea_green1": 193, - "pale_turquoise1": 159, - "deep_pink3": 162, - "magenta2": 200, - "hot_pink2": 169, - "orchid": 170, - "medium_orchid1": 207, - "orange3": 172, - "light_pink3": 174, - "pink3": 175, - "plum3": 176, - "violet": 177, - "light_goldenrod3": 179, - "tan": 180, - "misty_rose3": 181, - "thistle3": 182, - "plum2": 183, - "khaki3": 185, - "light_goldenrod2": 222, - "light_yellow3": 187, - "grey84": 188, - "gray84": 188, - "light_steel_blue1": 189, - "yellow2": 190, - "dark_olive_green1": 192, - "honeydew2": 194, - "light_cyan1": 195, - "red1": 196, - "deep_pink2": 197, - "deep_pink1": 199, - "magenta1": 201, - "orange_red1": 202, - "indian_red1": 204, - "hot_pink": 206, - "dark_orange": 208, - "salmon1": 209, - "light_coral": 210, - "pale_violet_red1": 211, - "orchid2": 212, - "orchid1": 213, - "orange1": 214, - "sandy_brown": 215, - "light_salmon1": 216, - "light_pink1": 217, - "pink1": 218, - "plum1": 219, - "gold1": 220, - "navajo_white1": 223, - "misty_rose1": 224, - "thistle1": 225, - "yellow1": 226, - "light_goldenrod1": 227, - "khaki1": 228, - "wheat1": 229, - "cornsilk1": 230, - "grey100": 231, - "gray100": 231, - "grey3": 232, - "gray3": 232, - "grey7": 233, - "gray7": 233, - "grey11": 234, - "gray11": 234, - "grey15": 235, - "gray15": 235, - "grey19": 236, - "gray19": 236, - "grey23": 237, - "gray23": 237, - "grey27": 238, - "gray27": 238, - "grey30": 239, - "gray30": 239, - "grey35": 240, - "gray35": 240, - "grey39": 241, - "gray39": 241, - "grey42": 242, - "gray42": 242, - "grey46": 243, - "gray46": 243, - "grey50": 244, - "gray50": 244, - "grey54": 245, - "gray54": 245, - "grey58": 246, - "gray58": 246, - "grey62": 247, - "gray62": 247, - "grey66": 248, - "gray66": 248, - "grey70": 249, - "gray70": 249, - "grey74": 250, - "gray74": 250, - "grey78": 251, - "gray78": 251, - "grey82": 252, - "gray82": 252, - "grey85": 253, - "gray85": 253, - "grey89": 254, - "gray89": 254, - "grey93": 255, - "gray93": 255, -} - - -class ColorParseError(Exception): - """The color could not be parsed.""" - - -RE_COLOR = re.compile( - r"""^ -\#([0-9a-f]{6})$| -color\(([0-9]{1,3})\)$| -rgb\(([\d\s,]+)\)$ -""", - re.VERBOSE, -) - - -@rich_repr -class Color(NamedTuple): - """Terminal color definition.""" - - name: str - """The name of the color (typically the input to Color.parse).""" - type: ColorType - """The type of the color.""" - number: Optional[int] = None - """The color number, if a standard color, or None.""" - triplet: Optional[ColorTriplet] = None - """A triplet of color components, if an RGB color.""" - - def __rich__(self) -> "Text": - """Displays the actual color if Rich 
printed.""" - from .style import Style - from .text import Text - - return Text.assemble( - f"", - ) - - def __rich_repr__(self) -> Result: - yield self.name - yield self.type - yield "number", self.number, None - yield "triplet", self.triplet, None - - @property - def system(self) -> ColorSystem: - """Get the native color system for this color.""" - if self.type == ColorType.DEFAULT: - return ColorSystem.STANDARD - return ColorSystem(int(self.type)) - - @property - def is_system_defined(self) -> bool: - """Check if the color is ultimately defined by the system.""" - return self.system not in (ColorSystem.EIGHT_BIT, ColorSystem.TRUECOLOR) - - @property - def is_default(self) -> bool: - """Check if the color is a default color.""" - return self.type == ColorType.DEFAULT - - def get_truecolor( - self, theme: Optional["TerminalTheme"] = None, foreground: bool = True - ) -> ColorTriplet: - """Get an equivalent color triplet for this color. - - Args: - theme (TerminalTheme, optional): Optional terminal theme, or None to use default. Defaults to None. - foreground (bool, optional): True for a foreground color, or False for background. Defaults to True. - - Returns: - ColorTriplet: A color triplet containing RGB components. - """ - - if theme is None: - theme = DEFAULT_TERMINAL_THEME - if self.type == ColorType.TRUECOLOR: - assert self.triplet is not None - return self.triplet - elif self.type == ColorType.EIGHT_BIT: - assert self.number is not None - return EIGHT_BIT_PALETTE[self.number] - elif self.type == ColorType.STANDARD: - assert self.number is not None - return theme.ansi_colors[self.number] - elif self.type == ColorType.WINDOWS: - assert self.number is not None - return WINDOWS_PALETTE[self.number] - else: # self.type == ColorType.DEFAULT: - assert self.number is None - return theme.foreground_color if foreground else theme.background_color - - @classmethod - def from_ansi(cls, number: int) -> "Color": - """Create a Color number from it's 8-bit ansi number. - - Args: - number (int): A number between 0-255 inclusive. - - Returns: - Color: A new Color instance. - """ - return cls( - name=f"color({number})", - type=(ColorType.STANDARD if number < 16 else ColorType.EIGHT_BIT), - number=number, - ) - - @classmethod - def from_triplet(cls, triplet: "ColorTriplet") -> "Color": - """Create a truecolor RGB color from a triplet of values. - - Args: - triplet (ColorTriplet): A color triplet containing red, green and blue components. - - Returns: - Color: A new color object. - """ - return cls(name=triplet.hex, type=ColorType.TRUECOLOR, triplet=triplet) - - @classmethod - def from_rgb(cls, red: float, green: float, blue: float) -> "Color": - """Create a truecolor from three color components in the range(0->255). - - Args: - red (float): Red component in range 0-255. - green (float): Green component in range 0-255. - blue (float): Blue component in range 0-255. - - Returns: - Color: A new color object. - """ - return cls.from_triplet(ColorTriplet(int(red), int(green), int(blue))) - - @classmethod - def default(cls) -> "Color": - """Get a Color instance representing the default color. - - Returns: - Color: Default color. 
- """ - return cls(name="default", type=ColorType.DEFAULT) - - @classmethod - @lru_cache(maxsize=1024) - def parse(cls, color: str) -> "Color": - """Parse a color definition.""" - original_color = color - color = color.lower().strip() - - if color == "default": - return cls(color, type=ColorType.DEFAULT) - - color_number = ANSI_COLOR_NAMES.get(color) - if color_number is not None: - return cls( - color, - type=(ColorType.STANDARD if color_number < 16 else ColorType.EIGHT_BIT), - number=color_number, - ) - - color_match = RE_COLOR.match(color) - if color_match is None: - raise ColorParseError(f"{original_color!r} is not a valid color") - - color_24, color_8, color_rgb = color_match.groups() - if color_24: - triplet = ColorTriplet( - int(color_24[0:2], 16), int(color_24[2:4], 16), int(color_24[4:6], 16) - ) - return cls(color, ColorType.TRUECOLOR, triplet=triplet) - - elif color_8: - number = int(color_8) - if number > 255: - raise ColorParseError(f"color number must be <= 255 in {color!r}") - return cls( - color, - type=(ColorType.STANDARD if number < 16 else ColorType.EIGHT_BIT), - number=number, - ) - - else: # color_rgb: - components = color_rgb.split(",") - if len(components) != 3: - raise ColorParseError( - f"expected three components in {original_color!r}" - ) - red, green, blue = components - triplet = ColorTriplet(int(red), int(green), int(blue)) - if not all(component <= 255 for component in triplet): - raise ColorParseError( - f"color components must be <= 255 in {original_color!r}" - ) - return cls(color, ColorType.TRUECOLOR, triplet=triplet) - - @lru_cache(maxsize=1024) - def get_ansi_codes(self, foreground: bool = True) -> Tuple[str, ...]: - """Get the ANSI escape codes for this color.""" - _type = self.type - if _type == ColorType.DEFAULT: - return ("39" if foreground else "49",) - - elif _type == ColorType.WINDOWS: - number = self.number - assert number is not None - fore, back = (30, 40) if number < 8 else (82, 92) - return (str(fore + number if foreground else back + number),) - - elif _type == ColorType.STANDARD: - number = self.number - assert number is not None - fore, back = (30, 40) if number < 8 else (82, 92) - return (str(fore + number if foreground else back + number),) - - elif _type == ColorType.EIGHT_BIT: - assert self.number is not None - return ("38" if foreground else "48", "5", str(self.number)) - - else: # self.standard == ColorStandard.TRUECOLOR: - assert self.triplet is not None - red, green, blue = self.triplet - return ("38" if foreground else "48", "2", str(red), str(green), str(blue)) - - @lru_cache(maxsize=1024) - def downgrade(self, system: ColorSystem) -> "Color": - """Downgrade a color system to a system with fewer colors.""" - - if self.type in (ColorType.DEFAULT, system): - return self - # Convert to 8-bit color from truecolor color - if system == ColorSystem.EIGHT_BIT and self.system == ColorSystem.TRUECOLOR: - assert self.triplet is not None - _h, l, s = rgb_to_hls(*self.triplet.normalized) - # If saturation is under 15% assume it is grayscale - if s < 0.15: - gray = round(l * 25.0) - if gray == 0: - color_number = 16 - elif gray == 25: - color_number = 231 - else: - color_number = 231 + gray - return Color(self.name, ColorType.EIGHT_BIT, number=color_number) - - red, green, blue = self.triplet - six_red = red / 95 if red < 95 else 1 + (red - 95) / 40 - six_green = green / 95 if green < 95 else 1 + (green - 95) / 40 - six_blue = blue / 95 if blue < 95 else 1 + (blue - 95) / 40 - - color_number = ( - 16 + 36 * round(six_red) + 6 * 
round(six_green) + round(six_blue) - ) - return Color(self.name, ColorType.EIGHT_BIT, number=color_number) - - # Convert to standard from truecolor or 8-bit - elif system == ColorSystem.STANDARD: - if self.system == ColorSystem.TRUECOLOR: - assert self.triplet is not None - triplet = self.triplet - else: # self.system == ColorSystem.EIGHT_BIT - assert self.number is not None - triplet = ColorTriplet(*EIGHT_BIT_PALETTE[self.number]) - - color_number = STANDARD_PALETTE.match(triplet) - return Color(self.name, ColorType.STANDARD, number=color_number) - - elif system == ColorSystem.WINDOWS: - if self.system == ColorSystem.TRUECOLOR: - assert self.triplet is not None - triplet = self.triplet - else: # self.system == ColorSystem.EIGHT_BIT - assert self.number is not None - if self.number < 16: - return Color(self.name, ColorType.WINDOWS, number=self.number) - triplet = ColorTriplet(*EIGHT_BIT_PALETTE[self.number]) - - color_number = WINDOWS_PALETTE.match(triplet) - return Color(self.name, ColorType.WINDOWS, number=color_number) - - return self - - -def parse_rgb_hex(hex_color: str) -> ColorTriplet: - """Parse six hex characters in to RGB triplet.""" - assert len(hex_color) == 6, "must be 6 characters" - color = ColorTriplet( - int(hex_color[0:2], 16), int(hex_color[2:4], 16), int(hex_color[4:6], 16) - ) - return color - - -def blend_rgb( - color1: ColorTriplet, color2: ColorTriplet, cross_fade: float = 0.5 -) -> ColorTriplet: - """Blend one RGB color in to another.""" - r1, g1, b1 = color1 - r2, g2, b2 = color2 - new_color = ColorTriplet( - int(r1 + (r2 - r1) * cross_fade), - int(g1 + (g2 - g1) * cross_fade), - int(b1 + (b2 - b1) * cross_fade), - ) - return new_color - - -if __name__ == "__main__": # pragma: no cover - - from .console import Console - from .table import Table - from .text import Text - - console = Console() - - table = Table(show_footer=False, show_edge=True) - table.add_column("Color", width=10, overflow="ellipsis") - table.add_column("Number", justify="right", style="yellow") - table.add_column("Name", style="green") - table.add_column("Hex", style="blue") - table.add_column("RGB", style="magenta") - - colors = sorted((v, k) for k, v in ANSI_COLOR_NAMES.items()) - for color_number, name in colors: - if "grey" in name: - continue - color_cell = Text(" " * 10, style=f"on {name}") - if color_number < 16: - table.add_row(color_cell, f"{color_number}", Text(f'"{name}"')) - else: - color = EIGHT_BIT_PALETTE[color_number] # type: ignore[has-type] - table.add_row( - color_cell, str(color_number), Text(f'"{name}"'), color.hex, color.rgb - ) - - console.print(table) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/check.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/check.py deleted file mode 100644 index 539481c946043c53aa61bd62cfd4b4146934697d..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/check.py +++ /dev/null @@ -1,151 +0,0 @@ -"""distutils.command.check - -Implements the Distutils 'check' command. 
-""" -import contextlib - -from distutils.core import Command -from distutils.errors import DistutilsSetupError - -with contextlib.suppress(ImportError): - import docutils.utils - import docutils.parsers.rst - import docutils.frontend - import docutils.nodes - - class SilentReporter(docutils.utils.Reporter): - def __init__( - self, - source, - report_level, - halt_level, - stream=None, - debug=0, - encoding='ascii', - error_handler='replace', - ): - self.messages = [] - super().__init__( - source, report_level, halt_level, stream, debug, encoding, error_handler - ) - - def system_message(self, level, message, *children, **kwargs): - self.messages.append((level, message, children, kwargs)) - return docutils.nodes.system_message( - message, level=level, type=self.levels[level], *children, **kwargs - ) - - -class check(Command): - """This command checks the meta-data of the package.""" - - description = "perform some checks on the package" - user_options = [ - ('metadata', 'm', 'Verify meta-data'), - ( - 'restructuredtext', - 'r', - ( - 'Checks if long string meta-data syntax ' - 'are reStructuredText-compliant' - ), - ), - ('strict', 's', 'Will exit with an error if a check fails'), - ] - - boolean_options = ['metadata', 'restructuredtext', 'strict'] - - def initialize_options(self): - """Sets default values for options.""" - self.restructuredtext = 0 - self.metadata = 1 - self.strict = 0 - self._warnings = 0 - - def finalize_options(self): - pass - - def warn(self, msg): - """Counts the number of warnings that occurs.""" - self._warnings += 1 - return Command.warn(self, msg) - - def run(self): - """Runs the command.""" - # perform the various tests - if self.metadata: - self.check_metadata() - if self.restructuredtext: - if 'docutils' in globals(): - try: - self.check_restructuredtext() - except TypeError as exc: - raise DistutilsSetupError(str(exc)) - elif self.strict: - raise DistutilsSetupError('The docutils package is needed.') - - # let's raise an error in strict mode, if we have at least - # one warning - if self.strict and self._warnings > 0: - raise DistutilsSetupError('Please correct your package.') - - def check_metadata(self): - """Ensures that all required elements of meta-data are supplied. - - Required fields: - name, version - - Warns if any are missing. 
- """ - metadata = self.distribution.metadata - - missing = [] - for attr in 'name', 'version': - if not getattr(metadata, attr, None): - missing.append(attr) - - if missing: - self.warn("missing required meta-data: %s" % ', '.join(missing)) - - def check_restructuredtext(self): - """Checks if the long string fields are reST-compliant.""" - data = self.distribution.get_long_description() - for warning in self._check_rst_data(data): - line = warning[-1].get('line') - if line is None: - warning = warning[1] - else: - warning = '{} (line {})'.format(warning[1], line) - self.warn(warning) - - def _check_rst_data(self, data): - """Returns warnings when the provided data doesn't compile.""" - # the include and csv_table directives need this to be a path - source_path = self.distribution.script_name or 'setup.py' - parser = docutils.parsers.rst.Parser() - settings = docutils.frontend.OptionParser( - components=(docutils.parsers.rst.Parser,) - ).get_default_values() - settings.tab_width = 4 - settings.pep_references = None - settings.rfc_references = None - reporter = SilentReporter( - source_path, - settings.report_level, - settings.halt_level, - stream=settings.warning_stream, - debug=settings.debug, - encoding=settings.error_encoding, - error_handler=settings.error_encoding_error_handler, - ) - - document = docutils.nodes.document(settings, reporter, source=source_path) - document.note_source(source_path, -1) - try: - parser.parse(data, document) - except AttributeError as e: - reporter.messages.append( - (-1, 'Could not finish the parsing: %s.' % e, '', {}) - ) - - return reporter.messages diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/wrappers.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/wrappers.py deleted file mode 100644 index 29d0ef9102b2db0ffbf723c168aa32d2451b9419..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/wrappers.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -""" -Wrappers around on some nn functions, mainly to support empty tensors. - -Ideally, add support directly in PyTorch to empty tensors in those functions. - -These can be removed once https://github.com/pytorch/pytorch/issues/12013 -is implemented -""" - -from typing import List, Optional -import torch -from torch.nn import functional as F - - -def shapes_to_tensor(x: List[int], device: Optional[torch.device] = None) -> torch.Tensor: - """ - Turn a list of integer scalars or integer Tensor scalars into a vector, - in a way that's both traceable and scriptable. - - In tracing, `x` should be a list of scalar Tensor, so the output can trace to the inputs. - In scripting or eager, `x` should be a list of int. - """ - if torch.jit.is_scripting(): - return torch.as_tensor(x, device=device) - if torch.jit.is_tracing(): - assert all( - [isinstance(t, torch.Tensor) for t in x] - ), "Shape should be tensor during tracing!" 
- # as_tensor should not be used in tracing because it records a constant - ret = torch.stack(x) - if ret.device != device: # avoid recording a hard-coded device if not necessary - ret = ret.to(device=device) - return ret - return torch.as_tensor(x, device=device) - - -def cat(tensors: List[torch.Tensor], dim: int = 0): - """ - Efficient version of torch.cat that avoids a copy if there is only a single element in a list - """ - assert isinstance(tensors, (list, tuple)) - if len(tensors) == 1: - return tensors[0] - return torch.cat(tensors, dim) - - -def cross_entropy(input, target, *, reduction="mean", **kwargs): - """ - Same as `torch.nn.functional.cross_entropy`, but returns 0 (instead of nan) - for empty inputs. - """ - if target.numel() == 0 and reduction == "mean": - return input.sum() * 0.0 # connect the gradient - return F.cross_entropy(input, target, reduction=reduction, **kwargs) - - -class _NewEmptyTensorOp(torch.autograd.Function): - @staticmethod - def forward(ctx, x, new_shape): - ctx.shape = x.shape - return x.new_empty(new_shape) - - @staticmethod - def backward(ctx, grad): - shape = ctx.shape - return _NewEmptyTensorOp.apply(grad, shape), None - - -class Conv2d(torch.nn.Conv2d): - """ - A wrapper around :class:`torch.nn.Conv2d` to support empty inputs and more features. - """ - - def __init__(self, *args, **kwargs): - """ - Extra keyword arguments supported in addition to those in `torch.nn.Conv2d`: - - Args: - norm (nn.Module, optional): a normalization layer - activation (callable(Tensor) -> Tensor): a callable activation function - - It assumes that norm layer is used before activation. - """ - norm = kwargs.pop("norm", None) - activation = kwargs.pop("activation", None) - super().__init__(*args, **kwargs) - - self.norm = norm - self.activation = activation - - def forward(self, x): - # torchscript does not support SyncBatchNorm yet - # https://github.com/pytorch/pytorch/issues/40507 - # and we skip these codes in torchscript since: - # 1. currently we only support torchscript in evaluation mode - # 2. features needed by exporting module to torchscript are added in PyTorch 1.6 or - # later version, `Conv2d` in these PyTorch versions has already supported empty inputs. - if not torch.jit.is_scripting(): - if x.numel() == 0 and self.training: - # https://github.com/pytorch/pytorch/issues/12013 - assert not isinstance( - self.norm, torch.nn.SyncBatchNorm - ), "SyncBatchNorm does not support empty inputs!" - - x = F.conv2d( - x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups - ) - if self.norm is not None: - x = self.norm(x) - if self.activation is not None: - x = self.activation(x) - return x - - -ConvTranspose2d = torch.nn.ConvTranspose2d -BatchNorm2d = torch.nn.BatchNorm2d -interpolate = F.interpolate -Linear = torch.nn.Linear - - -def nonzero_tuple(x): - """ - A 'as_tuple=True' version of torch.nonzero to support torchscript. 
- because of https://github.com/pytorch/pytorch/issues/38718 - """ - if torch.jit.is_scripting(): - if x.dim() == 0: - return x.unsqueeze(0).nonzero().unbind(1) - return x.nonzero().unbind(1) - else: - return x.nonzero(as_tuple=True) diff --git a/spaces/Benson/text-generation/Examples/Bus Simulator Ultimate Mod Apk Revdl.md b/spaces/Benson/text-generation/Examples/Bus Simulator Ultimate Mod Apk Revdl.md deleted file mode 100644 index 460a8f79132064c1ea0c1faac7b141a05c8b382f..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Bus Simulator Ultimate Mod Apk Revdl.md +++ /dev/null @@ -1,94 +0,0 @@ - -

Pokemon Go APK Original: How to Download and Play the Global Gaming Sensation

-

Pokemon Go is a free smartphone game that lets you catch Pokémon in an augmented version of the real world. Using your smartphone's GPS and the map built into the game, you can walk the streets and catch Pokémon as they pop up. Pokemon Go is the global gaming sensation that has been downloaded more than 1 billion times and was named "Best Mobile Game" by the Game Developers Choice Awards and "Best App of the Year" by TechCrunch. In this article, we will show you how to download and play Pokemon Go APK Original, which is the original version of the game that is not available on the Google Play Store.

    -

What is Pokemon Go APK Original?

-

Pokemon Go APK Original is the Android application package (APK) file of the original version of Pokemon Go that was released in July 2016. An APK file is a compressed archive that contains all the files and data needed to run an Android application. Unlike apps downloaded from the Google Play Store, which Google installs and updates automatically, APK files have to be downloaded and installed manually by the user.
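As a brief aside that is not part of the original article: an APK is itself a ZIP archive, so a few lines of Python are enough to peek inside one and see the manifest, compiled code, and resources it bundles. The file name below is a hypothetical placeholder for whatever APK you actually downloaded.

```python
import zipfile

# "pokemon-go.apk" is a hypothetical path used purely for illustration.
with zipfile.ZipFile("pokemon-go.apk") as apk:
    for name in apk.namelist()[:10]:   # show only the first few entries
        print(name)                    # typically AndroidManifest.xml, classes.dex, res/..., assets/...
```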

    -

    bus simulator ultimate mod apk revdl


    Download ✯✯✯ https://bltlly.com/2v6MLM



    -

The difference between APK and XAPK files

-

Some websites may offer a Pokemon Go XAPK download instead of an APK. An XAPK file is an extended version of an APK file that bundles additional files such as OBB data files or split APKs. OBB data files are used to store large amounts of game data, such as graphics, sounds, and videos. Split APKs are used to support different device configurations, such as screen sizes, languages, and architectures. XAPK files need a special app or tool to extract and install them on your device.
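For illustration only, and as an assumption about the format rather than advice from the article: an XAPK is usually a plain ZIP container, so extracting it simply unpacks the base APK together with its OBB data files. A minimal sketch, with hypothetical file and folder names:

```python
import zipfile

# "game.xapk" and "extracted_xapk" are hypothetical names used purely for illustration.
with zipfile.ZipFile("game.xapk") as bundle:
    bundle.extractall("extracted_xapk")   # yields the base .apk plus any .obb data files
    print(bundle.namelist())
```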

    -

The benefits of downloading Pokemon Go APK Original

-

There are several benefits to downloading Pokemon Go APK Original instead of getting it from the Google Play Store. Some of them are:

    - -

How to download and install Pokemon Go APK Original on your Android device

-

Downloading and installing Pokemon Go APK Original on your Android device is easy and straightforward. You just have to follow these steps:

    -

Step 1: Enable unknown sources

-

Before you can install any APK file on your device, you need to enable unknown sources in your device settings. This allows you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may see a warning that installing apps from unknown sources can harm your device. Tap OK to continue.

    -

Step 2: Download the APK file from a trusted source

-

Next, you need to download the Pokemon Go Original APK file from a trusted source. There are many websites that offer APK files for free, but some of them may contain malware or viruses that can damage your device or steal your personal information. To avoid this, only download APK files from trusted and verified sources. One of the best sources for Pokemon Go APK Original is [APKPure], a popular and reliable website that provides safe and pure APK files for Android users. To download the APK file from APKPure, follow these steps:

    -
      -
1. Go to the [APKPure] website in your device's browser.
2. Search for Pokemon Go in the search bar and tap on the result.
3. Tap the Download APK button and choose a download location on your device.
4. Wait for the download to finish.
    -

Step 3: Install the APK file

    - -
      -
1. Locate the APK file on your device using a file manager app or your device's Downloads folder.
2. Tap the APK file and tap Install when prompted.
3. Wait for the installation to complete.
    -

Step 4: Launch the game and enjoy

-

After installing the APK file, you can launch the game and enjoy playing Pokemon Go Original on your device. To do this, follow these steps:

    -

    -
      -
1. Go to your device's app drawer and tap the Pokemon Go icon.
2. Allow the game to access your device's location, camera, and storage when asked.
3. Sign in with your Google account or create a new Pokemon Trainer Club account.
4. Choose your avatar and customize it with different outfits and accessories.
5. Select your starter Pokémon from Bulbasaur, Charmander, or Squirtle.
6. Start exploring the world of Pokémon and catch them all!
    -

How to update Pokemon Go APK Original to the latest version

-

Pokemon Go is constantly updated with new features, events, and Pokémon to keep the game fresh and exciting. To enjoy the latest version of Pokemon Go Original, you need to update the APK file regularly. There are two ways to do this:

    -

Option 1: Use the in-game update feature

-

The easiest way to update Pokemon Go Original is to use the in-game update feature. It notifies you when a new version of the game is available and lets you download and install it directly from the game. To use this feature, follow these steps:

    -
      -
1. Launch the game and tap the Pokeball icon at the bottom of the screen.
2. Tap Settings in the top-right corner of the screen.
3. Scroll down and tap Check for Updates.
4. If a new version is available, tap Update Now and wait for the download and installation to finish.
    -

Option 2: Download and install the latest APK file manually

    - -
      -
1. Go to the [APKPure] website in your device's browser.
2. Search for Pokemon Go in the search bar and tap on the result.
3. Tap the Update button and choose a download location on your device.
4. Wait for the download to finish.
5. Locate the APK file on your device using a file manager app or your device's Downloads folder.
6. Tap the APK file and tap Install when prompted.
7. Wait for the installation to complete.
    -

How to play Pokemon Go APK Original and have fun

-

Pokemon Go Original is more than a game. It is an adventure that lets you explore the real world with a virtual twist. You can discover new places, meet new people, and catch amazing Pokémon along the way. Here are some tips on how to play Pokemon Go Original and have fun:

    -

Explore and discover Pokémon wherever you are

-

Pokemon Go Original uses your device's GPS and camera to show you Pokémon in the real world. You can find Pokémon in different environments such as parks, forests, lakes, mountains, cities, and more. You can also use items such as Incense and Lure Modules to attract more Pokémon to your location. To catch a Pokémon, tap it and then flick your finger across the screen to throw a Pokeball at it. You can also use items such as Razz Berries and Nanab Berries to make Pokémon easier to catch. Some Pokémon are rare and hard to find, so you may have to travel to different places or wait for special events to encounter them.

    -

Catch more Pokémon to complete your Pokedex

    - -

Travel alongside your Buddy Pokémon to help make your Pokémon stronger and earn rewards

-

You can choose one of your Pokémon as your Buddy Pokémon and have it walk with you on your adventures. Your Buddy Pokémon appears next to your avatar on the map and on your profile screen. You can also interact with your Buddy Pokémon by feeding it berries, playing with it, or taking snapshots of it. By walking with your Buddy Pokémon, you earn Candy for that specific Pokémon, which you can use to power up or evolve it. You can also raise your Buddy Level by earning Affection Hearts, which unlocks perks such as bonus Candy, a CP boost, or finding Souvenirs.

    -

Compete in epic Gym battles and team up with other Trainers to catch powerful Pokémon during Raid Battles

-

Pokemon Go Original is not just a solo game but also a social game that lets you interact with other players around the world. You can join one of three teams: Team Instinct (yellow), Team Mystic (blue), or Team Valor (red). You can then compete with the other teams for control of Gyms, which are landmarks that appear on the map. To challenge a Gym, tap it and then select a team of six Pokémon to battle the defending Pokémon. You can also cooperate with players from any team to defeat the powerful Pokémon that appear in Raid Battles, which are timed events held at certain Gyms. By taking part in Gym battles and Raid Battles, you can earn items such as Pokeballs, Potions, Revives, Rare Candies, Golden Razz Berries, Technical Machines, and more.

    -

Conclusion

    - -

Frequently asked questions

    -
      -
1. Q: Is Pokemon Go APK Original safe to download and install?
   A: Yes, as long as you download the APK file from a trusted source such as [APKPure], which provides safe and pure APK files for Android users.
2. Q: Do I need an internet connection to play Pokemon Go APK Original?
   A: Yes, you need an internet connection (Wi-Fi or mobile data) to play Pokemon Go APK Original, since the game relies on GPS and map data to show Pokémon in the real world.
3. Q: How do I save my progress in Pokemon Go APK Original?
   A: Your progress in Pokemon Go APK Original is saved automatically on the game's server when you sign in with your Google account or Pokemon Trainer Club account. You can also sync your game data across multiple devices by using the same account.
4. Q: How do I transfer my data from the Google Play Store version of Pokemon Go to the APK version?
   A: You do not need to transfer your data from the Google Play Store version of Pokemon Go to the APK version, because the two are compatible with each other. You can use the same account to play the game in either version without losing your progress.
5. Q: How do I contact the developer of Pokemon Go APK Original if I have any questions or problems?
   A: You can contact the developer of Pokemon Go APK Original by visiting the official [Pokemon Go] website or by sending an email to [pokemon-go-support@nianticlabs.com]. You can also check their [Help Center] for frequently asked questions and troubleshooting tips.

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Fnf Msica Batalla Original Mod.md b/spaces/Benson/text-generation/Examples/Descargar Fnf Msica Batalla Original Mod.md deleted file mode 100644 index 28aa086bbff32fa4e1db8ca496e6dc9ce158ce40..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Fnf Msica Batalla Original Mod.md +++ /dev/null @@ -1,70 +0,0 @@ - -

Introduction

-

If you are a fan of rhythm games and music battles, you may have heard of Friday Night Funkin', a popular web game released in 2020. In this game, you play as Boyfriend, a blue-haired rapper who wants to impress his Girlfriend by winning freestyle music battles against various opponents, such as her parents, her exes, and some creepy characters.

    -

download fnf music battle original mod


    DOWNLOAD >>>>> https://bltlly.com/2v6IFA



    -

Friday Night Funkin' is an open-source game that has inspired many fan-made mods which add new characters, songs, modes, and features to the original game. One of these mods is FNF Music Battle Original Mod, a rhythm music game available on Android devices. This mod is developed by Onesoft Global PTE.LTD and has more than 10 million downloads on the Google Play Store.

-

In this article, we will look at what FNF Music Battle Original Mod is about: its features, gameplay, benefits, drawbacks, how it compares with other mods, and how to install it. By the end of this article, you will have a better idea of this mod and whether you should give it a try.

    -

Features

-

FNF Music Battle Original Mod has several features that set it apart from the original Friday Night Funkin' game. Here are some of them:

    - -

Gameplay

-

The gameplay of FNF Music Battle Original Mod is similar to the original Friday Night Funkin' game. You have to match the rhythm of the music by pressing the arrow keys on your keyboard or tapping the arrow buttons on your screen. You have to follow the direction of the arrows that appear on the screen and press or tap at the right moment. If you do it correctly, you fill the progress bar and win the music battle. If you miss too many notes or press the wrong buttons, you lose the music battle and have to start over.

-

The game has a scoring system that rewards accuracy and timing. You can get different ratings depending on how well you do, such as Sick, Good, Bad, or Miss. You can also earn combos by hitting several notes in a row without missing any. The higher your score and combo, the better your chances of winning.
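To make the mechanic above concrete, here is a minimal sketch for illustration only, not code taken from the mod: a timing judgement that maps the gap between the player's key press and the note's scheduled time to one of the ratings. The threshold values are hypothetical.

```python
# Hypothetical judgement windows in milliseconds; the mod's real values are not documented here.
def judge(offset_ms: float) -> str:
    """Map the press/note timing gap to a rating like the ones described above."""
    offset = abs(offset_ms)
    if offset <= 45:
        return "Sick"
    if offset <= 90:
        return "Good"
    if offset <= 135:
        return "Bad"
    return "Miss"

# A combo counter would simply grow on any rating other than "Miss" and reset to zero otherwise.
```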

    -

    -

The game also has a health bar that shows how much life you have left. If you miss too many notes or press the wrong buttons, your health bar shrinks and turns red. If your health bar reaches zero, you lose the music battle and have to start over. You can restore your health by hitting more notes correctly and filling the progress bar.
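Continuing the sketch above, and again as an assumption rather than the mod's actual implementation, the health bar described here can be modelled as a small state object whose value is drained by misses and topped up by accurate hits:

```python
class HealthBar:
    """Toy model of the win/lose health bar described above; all numbers are made up."""

    def __init__(self, maximum: float = 100.0) -> None:
        self.maximum = maximum
        self.value = maximum / 2  # start in the middle, as rhythm games commonly do

    def on_hit(self, gain: float = 2.0) -> None:
        self.value = min(self.maximum, self.value + gain)

    def on_miss(self, loss: float = 5.0) -> None:
        self.value = max(0.0, self.value - loss)

    @property
    def battle_lost(self) -> bool:
        return self.value <= 0.0
```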

    -

Benefits

-

Playing FNF Music Battle Original Mod can have many benefits for you, such as:

    - -

Drawbacks

-

Playing FNF Music Battle Original Mod can also have some drawbacks for you, such as:

    - -

Comparison

| Mod | Similarities | Differences |
| --- | --- | --- |
| Whitty | Has guest characters from other FNF mods or games.<br>Has catchy songs and challenging gameplay.<br>Has interesting visuals and animations.<br>Has Story mode and Freeplay mode.<br>Has Easy, Normal, and Hard difficulty levels.<br>Has online leaderboards. | Whitty is a PC mod that requires downloading files.<br>Whitty has only one guest character: Whitty.<br>Whitty has only four songs: Lo-Fight, Overhead, Ballistic, and Remix.<br>Whitty has a darker, edgier theme.<br>Whitty has more dialogue and cutscenes.<br>Whitty has more bugs and glitches.<br>Whitty has no ads. |
| Hex | Has guest characters from other FNF mods or games.<br>Has catchy songs and challenging gameplay.<br>Has Story mode and Freeplay mode.<br>Has Easy, Normal, and Hard difficulty levels.<br>Has online leaderboards. | Hex is a PC mod that requires downloading files.<br>Hex has only one guest character: Hex.<br>Hex has six songs: Dunk, Ram, Hello World, Glitcher, Corruption, and an encore.<br>Hex has a futuristic, cyberpunk theme.<br>Hex has more dialogue and cutscenes.<br>Hex has more bugs and glitches.<br>Hex has no ads. |
| Kapi | Has guest characters from other FNF mods or games.<br>Has catchy songs and challenging gameplay.<br>Has interesting visuals and animations.<br>Has Story mode and Freeplay mode.<br>Has Easy, Normal, and Hard difficulty levels.<br>Has online leaderboards. | Kapi is a PC mod that requires downloading files.<br>Kapi has only one guest character: Kapi.<br>Kapi has four songs: Wocky, Beathoven, Hairball, and Nyaw.<br>Kapi has a cute, colorful theme.<br>Kapi has more dialogue and cutscenes.<br>Kapi has more bugs and glitches.<br>Kapi has no ads. |
| Neo | Has guest characters from other FNF mods or games.<br>Has catchy songs and challenging gameplay.<br>Has interesting visuals and animations. | |

If you want to play FNF Music Battle Original Mod on your Android device, you can follow these steps to install it:

    -
      -
1. Go to the Google Play Store and search for FNF Music Battle Original Mod, or click this link: .
2. Tap the Install button and wait for the download to finish.
3. Open the app and grant the necessary permissions.
4. Enjoy the game!
    -

If you want to play FNF Music Battle Original Mod on your PC, you can follow these steps to install it:

    -
      -
1. Go to this website: and download the FNF Music Battle Original Mod APK file.
2. Download an Android emulator of your choice, such as BlueStacks or NoxPlayer.
3. Install the emulator on your PC and run it.
4. Drag and drop the FNF Music Battle Original Mod APK file into the emulator, or use its built-in browser to locate the file.
5. Install the app and open it.
6. Enjoy the game!
    -

Conclusion

-

FNF Music Battle Original Mod is a rhythm music game based on the popular game Friday Night Funkin'. It has many features, such as characters, songs, modes, visuals, a scoring system, a health bar, and online leaderboards. It also has some benefits, such as its fun factor, challenge, variety, and compatibility. It has some drawbacks as well, such as bugs, difficulty, ads, and updates. It also has some similarities to and differences from other FNF mods, such as Whitty, Hex, Kapi, and Neo. It is easy to install on Android devices or on a PC with an emulator.

    -

If you are looking for a fun and challenging rhythm game with plenty of content and variety, you may want to give FNF Music Battle Original Mod a try. It is a great way to enjoy the music and characters of Friday Night Funkin' and its mods on your phone or computer. However, if you are looking for a more polished and up-to-date game with fewer bugs and ads, you may want to stick with the original game or other PC mods.

    - -

Frequently asked questions

-

Here are some frequently asked questions about FNF Music Battle Original Mod, along with their answers:

    -

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/example.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/example.py deleted file mode 100644 index 9f831bcde11b527e6fd01f55089091692c4dafb2..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/example.py +++ /dev/null @@ -1,236 +0,0 @@ -# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# http://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. -from botocore.docs.shape import ShapeDocumenter -from botocore.docs.utils import py_default - - -class BaseExampleDocumenter(ShapeDocumenter): - def document_example( - self, section, shape, prefix=None, include=None, exclude=None - ): - """Generates an example based on a shape - - :param section: The section to write the documentation to. - - :param shape: The shape of the operation. - - :param prefix: Anything to be included before the example - - :type include: Dictionary where keys are parameter names and - values are the shapes of the parameter names. - :param include: The parameter shapes to include in the documentation. - - :type exclude: List of the names of the parameters to exclude. - :param exclude: The names of the parameters to exclude from - documentation. - """ - history = [] - section.style.new_line() - section.style.start_codeblock() - if prefix is not None: - section.write(prefix) - self.traverse_and_document_shape( - section=section, - shape=shape, - history=history, - include=include, - exclude=exclude, - ) - final_blank_line_section = section.add_new_section('final-blank-line') - final_blank_line_section.style.new_line() - - def document_recursive_shape(self, section, shape, **kwargs): - section.write('{\'... 
recursive ...\'}') - - def document_shape_default( - self, section, shape, history, include=None, exclude=None, **kwargs - ): - py_type = self._get_special_py_default(shape) - if py_type is None: - py_type = py_default(shape.type_name) - - if self._context.get('streaming_shape') == shape: - py_type = 'StreamingBody()' - section.write(py_type) - - def document_shape_type_string( - self, section, shape, history, include=None, exclude=None, **kwargs - ): - if 'enum' in shape.metadata: - for i, enum in enumerate(shape.metadata['enum']): - section.write('\'%s\'' % enum) - if i < len(shape.metadata['enum']) - 1: - section.write('|') - else: - self.document_shape_default(section, shape, history) - - def document_shape_type_list( - self, section, shape, history, include=None, exclude=None, **kwargs - ): - param_shape = shape.member - list_section = section.add_new_section('list-value') - self._start_nested_param(list_section, '[') - param_section = list_section.add_new_section( - 'member', context={'shape': param_shape.name} - ) - self.traverse_and_document_shape( - section=param_section, shape=param_shape, history=history - ) - ending_comma_section = list_section.add_new_section('ending-comma') - ending_comma_section.write(',') - ending_bracket_section = list_section.add_new_section('ending-bracket') - self._end_nested_param(ending_bracket_section, ']') - - def document_shape_type_structure( - self, section, shape, history, include=None, exclude=None, **kwargs - ): - if not shape.members: - section.write('{}') - return - - section = section.add_new_section('structure-value') - self._start_nested_param(section, '{') - - input_members = self._add_members_to_shape(shape.members, include) - - for i, param in enumerate(input_members): - if exclude and param in exclude: - continue - param_section = section.add_new_section(param) - param_section.write('\'%s\': ' % param) - param_shape = input_members[param] - param_value_section = param_section.add_new_section( - 'member-value', context={'shape': param_shape.name} - ) - self.traverse_and_document_shape( - section=param_value_section, - shape=param_shape, - history=history, - name=param, - ) - if i < len(input_members) - 1: - ending_comma_section = param_section.add_new_section( - 'ending-comma' - ) - ending_comma_section.write(',') - ending_comma_section.style.new_line() - self._end_structure(section, '{', '}') - - def document_shape_type_map( - self, section, shape, history, include=None, exclude=None, **kwargs - ): - map_section = section.add_new_section('map-value') - self._start_nested_param(map_section, '{') - value_shape = shape.value - key_section = map_section.add_new_section( - 'key', context={'shape': shape.key.name} - ) - key_section.write('\'string\': ') - value_section = map_section.add_new_section( - 'value', context={'shape': value_shape.name} - ) - self.traverse_and_document_shape( - section=value_section, shape=value_shape, history=history - ) - end_bracket_section = map_section.add_new_section('ending-bracket') - self._end_nested_param(end_bracket_section, '}') - - def _add_members_to_shape(self, members, include): - if include: - members = members.copy() - for param in include: - members[param.name] = param - return members - - def _start_nested_param(self, section, start=None): - if start is not None: - section.write(start) - section.style.indent() - section.style.indent() - section.style.new_line() - - def _end_nested_param(self, section, end=None): - section.style.dedent() - section.style.dedent() - section.style.new_line() - if 
end is not None: - section.write(end) - - def _end_structure(self, section, start, end): - # If there are no members in the strucuture, then make sure the - # start and the end bracket are on the same line, by removing all - # previous text and writing the start and end. - if not section.available_sections: - section.clear_text() - section.write(start + end) - self._end_nested_param(section) - else: - end_bracket_section = section.add_new_section('ending-bracket') - self._end_nested_param(end_bracket_section, end) - - -class ResponseExampleDocumenter(BaseExampleDocumenter): - EVENT_NAME = 'response-example' - - def document_shape_type_event_stream( - self, section, shape, history, **kwargs - ): - section.write('EventStream(') - self.document_shape_type_structure(section, shape, history, **kwargs) - end_section = section.add_new_section('event-stream-end') - end_section.write(')') - - -class RequestExampleDocumenter(BaseExampleDocumenter): - EVENT_NAME = 'request-example' - - def document_shape_type_structure( - self, section, shape, history, include=None, exclude=None, **kwargs - ): - param_format = '\'%s\'' - operator = ': ' - start = '{' - end = '}' - - if len(history) <= 1: - operator = '=' - start = '(' - end = ')' - param_format = '%s' - section = section.add_new_section('structure-value') - self._start_nested_param(section, start) - input_members = self._add_members_to_shape(shape.members, include) - - for i, param in enumerate(input_members): - if exclude and param in exclude: - continue - param_section = section.add_new_section(param) - param_section.write(param_format % param) - param_section.write(operator) - param_shape = input_members[param] - param_value_section = param_section.add_new_section( - 'member-value', context={'shape': param_shape.name} - ) - self.traverse_and_document_shape( - section=param_value_section, - shape=param_shape, - history=history, - name=param, - ) - if i < len(input_members) - 1: - ending_comma_section = param_section.add_new_section( - 'ending-comma' - ) - ending_comma_section.write(',') - ending_comma_section.style.new_line() - self._end_structure(section, start, end) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/retries/quota.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/retries/quota.py deleted file mode 100644 index c3e91ae367298b636089880169bd312ca98babf9..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/retries/quota.py +++ /dev/null @@ -1,56 +0,0 @@ -"""Retry quota implementation. - - -""" -import threading - - -class RetryQuota: - INITIAL_CAPACITY = 500 - - def __init__(self, initial_capacity=INITIAL_CAPACITY, lock=None): - self._max_capacity = initial_capacity - self._available_capacity = initial_capacity - if lock is None: - lock = threading.Lock() - self._lock = lock - - def acquire(self, capacity_amount): - """Attempt to aquire a certain amount of capacity. - - If there's not sufficient amount of capacity available, ``False`` - is returned. Otherwise, ``True`` is returned, which indicates that - capacity was successfully allocated. - - """ - # The acquire() is only called when we encounter a retryable - # response so we aren't worried about locking the entire method. - with self._lock: - if capacity_amount > self._available_capacity: - return False - self._available_capacity -= capacity_amount - return True - - def release(self, capacity_amount): - """Release capacity back to the retry quota. 
- - The capacity being released will be truncated if necessary - to ensure the max capacity is never exceeded. - - """ - # Implementation note: The release() method is called as part - # of the "after-call" event, which means it gets invoked for - # every API call. In the common case where the request is - # successful and we're at full capacity, we can avoid locking. - # We can't exceed max capacity so there's no work we have to do. - if self._max_capacity == self._available_capacity: - return - with self._lock: - amount = min( - self._max_capacity - self._available_capacity, capacity_amount - ) - self._available_capacity += amount - - @property - def available_capacity(self): - return self._available_capacity diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/token.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/token.py deleted file mode 100644 index e3e565ad591485563a93db89609213c00ca16ca3..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/token.py +++ /dev/null @@ -1,213 +0,0 @@ -""" - pygments.token - ~~~~~~~~~~~~~~ - - Basic token types and the standard tokens. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - - -class _TokenType(tuple): - parent = None - - def split(self): - buf = [] - node = self - while node is not None: - buf.append(node) - node = node.parent - buf.reverse() - return buf - - def __init__(self, *args): - # no need to call super.__init__ - self.subtypes = set() - - def __contains__(self, val): - return self is val or ( - type(val) is self.__class__ and - val[:len(self)] == self - ) - - def __getattr__(self, val): - if not val or not val[0].isupper(): - return tuple.__getattribute__(self, val) - new = _TokenType(self + (val,)) - setattr(self, val, new) - self.subtypes.add(new) - new.parent = self - return new - - def __repr__(self): - return 'Token' + (self and '.' or '') + '.'.join(self) - - def __copy__(self): - # These instances are supposed to be singletons - return self - - def __deepcopy__(self, memo): - # These instances are supposed to be singletons - return self - - -Token = _TokenType() - -# Special token types -Text = Token.Text -Whitespace = Text.Whitespace -Escape = Token.Escape -Error = Token.Error -# Text that doesn't belong to this lexer (e.g. HTML in PHP) -Other = Token.Other - -# Common token types for source code -Keyword = Token.Keyword -Name = Token.Name -Literal = Token.Literal -String = Literal.String -Number = Literal.Number -Punctuation = Token.Punctuation -Operator = Token.Operator -Comment = Token.Comment - -# Generic types for non-source code -Generic = Token.Generic - -# String and some others are not direct children of Token. -# alias them: -Token.Token = Token -Token.String = String -Token.Number = Number - - -def is_token_subtype(ttype, other): - """ - Return True if ``ttype`` is a subtype of ``other``. - - exists for backwards compatibility. use ``ttype in other`` now. 
- """ - return ttype in other - - -def string_to_tokentype(s): - """ - Convert a string into a token type:: - - >>> string_to_token('String.Double') - Token.Literal.String.Double - >>> string_to_token('Token.Literal.Number') - Token.Literal.Number - >>> string_to_token('') - Token - - Tokens that are already tokens are returned unchanged: - - >>> string_to_token(String) - Token.Literal.String - """ - if isinstance(s, _TokenType): - return s - if not s: - return Token - node = Token - for item in s.split('.'): - node = getattr(node, item) - return node - - -# Map standard token types to short names, used in CSS class naming. -# If you add a new item, please be sure to run this file to perform -# a consistency check for duplicate values. -STANDARD_TYPES = { - Token: '', - - Text: '', - Whitespace: 'w', - Escape: 'esc', - Error: 'err', - Other: 'x', - - Keyword: 'k', - Keyword.Constant: 'kc', - Keyword.Declaration: 'kd', - Keyword.Namespace: 'kn', - Keyword.Pseudo: 'kp', - Keyword.Reserved: 'kr', - Keyword.Type: 'kt', - - Name: 'n', - Name.Attribute: 'na', - Name.Builtin: 'nb', - Name.Builtin.Pseudo: 'bp', - Name.Class: 'nc', - Name.Constant: 'no', - Name.Decorator: 'nd', - Name.Entity: 'ni', - Name.Exception: 'ne', - Name.Function: 'nf', - Name.Function.Magic: 'fm', - Name.Property: 'py', - Name.Label: 'nl', - Name.Namespace: 'nn', - Name.Other: 'nx', - Name.Tag: 'nt', - Name.Variable: 'nv', - Name.Variable.Class: 'vc', - Name.Variable.Global: 'vg', - Name.Variable.Instance: 'vi', - Name.Variable.Magic: 'vm', - - Literal: 'l', - Literal.Date: 'ld', - - String: 's', - String.Affix: 'sa', - String.Backtick: 'sb', - String.Char: 'sc', - String.Delimiter: 'dl', - String.Doc: 'sd', - String.Double: 's2', - String.Escape: 'se', - String.Heredoc: 'sh', - String.Interpol: 'si', - String.Other: 'sx', - String.Regex: 'sr', - String.Single: 's1', - String.Symbol: 'ss', - - Number: 'm', - Number.Bin: 'mb', - Number.Float: 'mf', - Number.Hex: 'mh', - Number.Integer: 'mi', - Number.Integer.Long: 'il', - Number.Oct: 'mo', - - Operator: 'o', - Operator.Word: 'ow', - - Punctuation: 'p', - Punctuation.Marker: 'pm', - - Comment: 'c', - Comment.Hashbang: 'ch', - Comment.Multiline: 'cm', - Comment.Preproc: 'cp', - Comment.PreprocFile: 'cpf', - Comment.Single: 'c1', - Comment.Special: 'cs', - - Generic: 'g', - Generic.Deleted: 'gd', - Generic.Emph: 'ge', - Generic.Error: 'gr', - Generic.Heading: 'gh', - Generic.Inserted: 'gi', - Generic.Output: 'go', - Generic.Prompt: 'gp', - Generic.Strong: 'gs', - Generic.Subheading: 'gu', - Generic.Traceback: 'gt', -} diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/sandbox.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/sandbox.py deleted file mode 100644 index 034fc80d20ea4a59d77af6f808dbcfc3b87612c3..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/sandbox.py +++ /dev/null @@ -1,530 +0,0 @@ -import os -import sys -import tempfile -import operator -import functools -import itertools -import re -import contextlib -import pickle -import textwrap -import builtins - -import pkg_resources -from distutils.errors import DistutilsError -from pkg_resources import working_set - -if sys.platform.startswith('java'): - import org.python.modules.posix.PosixModule as _os -else: - _os = sys.modules[os.name] -try: - _file = file -except NameError: - _file = None -_open = open - - -__all__ = [ - "AbstractSandbox", - "DirectorySandbox", - "SandboxViolation", - "run_setup", -] - - -def _execfile(filename, globals, 
locals=None): - """ - Python 3 implementation of execfile. - """ - mode = 'rb' - with open(filename, mode) as stream: - script = stream.read() - if locals is None: - locals = globals - code = compile(script, filename, 'exec') - exec(code, globals, locals) - - -@contextlib.contextmanager -def save_argv(repl=None): - saved = sys.argv[:] - if repl is not None: - sys.argv[:] = repl - try: - yield saved - finally: - sys.argv[:] = saved - - -@contextlib.contextmanager -def save_path(): - saved = sys.path[:] - try: - yield saved - finally: - sys.path[:] = saved - - -@contextlib.contextmanager -def override_temp(replacement): - """ - Monkey-patch tempfile.tempdir with replacement, ensuring it exists - """ - os.makedirs(replacement, exist_ok=True) - - saved = tempfile.tempdir - - tempfile.tempdir = replacement - - try: - yield - finally: - tempfile.tempdir = saved - - -@contextlib.contextmanager -def pushd(target): - saved = os.getcwd() - os.chdir(target) - try: - yield saved - finally: - os.chdir(saved) - - -class UnpickleableException(Exception): - """ - An exception representing another Exception that could not be pickled. - """ - - @staticmethod - def dump(type, exc): - """ - Always return a dumped (pickled) type and exc. If exc can't be pickled, - wrap it in UnpickleableException first. - """ - try: - return pickle.dumps(type), pickle.dumps(exc) - except Exception: - # get UnpickleableException inside the sandbox - from setuptools.sandbox import UnpickleableException as cls - - return cls.dump(cls, cls(repr(exc))) - - -class ExceptionSaver: - """ - A Context Manager that will save an exception, serialized, and restore it - later. - """ - - def __enter__(self): - return self - - def __exit__(self, type, exc, tb): - if not exc: - return - - # dump the exception - self._saved = UnpickleableException.dump(type, exc) - self._tb = tb - - # suppress the exception - return True - - def resume(self): - "restore and re-raise any exception" - - if '_saved' not in vars(self): - return - - type, exc = map(pickle.loads, self._saved) - raise exc.with_traceback(self._tb) - - -@contextlib.contextmanager -def save_modules(): - """ - Context in which imported modules are saved. - - Translates exceptions internal to the context into the equivalent exception - outside the context. - """ - saved = sys.modules.copy() - with ExceptionSaver() as saved_exc: - yield saved - - sys.modules.update(saved) - # remove any modules imported since - del_modules = ( - mod_name - for mod_name in sys.modules - if mod_name not in saved - # exclude any encodings modules. 
See #285 - and not mod_name.startswith('encodings.') - ) - _clear_modules(del_modules) - - saved_exc.resume() - - -def _clear_modules(module_names): - for mod_name in list(module_names): - del sys.modules[mod_name] - - -@contextlib.contextmanager -def save_pkg_resources_state(): - saved = pkg_resources.__getstate__() - try: - yield saved - finally: - pkg_resources.__setstate__(saved) - - -@contextlib.contextmanager -def setup_context(setup_dir): - temp_dir = os.path.join(setup_dir, 'temp') - with save_pkg_resources_state(): - with save_modules(): - with save_path(): - hide_setuptools() - with save_argv(): - with override_temp(temp_dir): - with pushd(setup_dir): - # ensure setuptools commands are available - __import__('setuptools') - yield - - -_MODULES_TO_HIDE = { - 'setuptools', - 'distutils', - 'pkg_resources', - 'Cython', - '_distutils_hack', -} - - -def _needs_hiding(mod_name): - """ - >>> _needs_hiding('setuptools') - True - >>> _needs_hiding('pkg_resources') - True - >>> _needs_hiding('setuptools_plugin') - False - >>> _needs_hiding('setuptools.__init__') - True - >>> _needs_hiding('distutils') - True - >>> _needs_hiding('os') - False - >>> _needs_hiding('Cython') - True - """ - base_module = mod_name.split('.', 1)[0] - return base_module in _MODULES_TO_HIDE - - -def hide_setuptools(): - """ - Remove references to setuptools' modules from sys.modules to allow the - invocation to import the most appropriate setuptools. This technique is - necessary to avoid issues such as #315 where setuptools upgrading itself - would fail to find a function declared in the metadata. - """ - _distutils_hack = sys.modules.get('_distutils_hack', None) - if _distutils_hack is not None: - _distutils_hack.remove_shim() - - modules = filter(_needs_hiding, sys.modules) - _clear_modules(modules) - - -def run_setup(setup_script, args): - """Run a distutils setup script, sandboxed in its directory""" - setup_dir = os.path.abspath(os.path.dirname(setup_script)) - with setup_context(setup_dir): - try: - sys.argv[:] = [setup_script] + list(args) - sys.path.insert(0, setup_dir) - # reset to include setup dir, w/clean callback list - working_set.__init__() - working_set.callbacks.append(lambda dist: dist.activate()) - - with DirectorySandbox(setup_dir): - ns = dict(__file__=setup_script, __name__='__main__') - _execfile(setup_script, ns) - except SystemExit as v: - if v.args and v.args[0]: - raise - # Normal exit, just return - - -class AbstractSandbox: - """Wrap 'os' module and 'open()' builtin for virtualizing setup scripts""" - - _active = False - - def __init__(self): - self._attrs = [ - name - for name in dir(_os) - if not name.startswith('_') and hasattr(self, name) - ] - - def _copy(self, source): - for name in self._attrs: - setattr(os, name, getattr(source, name)) - - def __enter__(self): - self._copy(self) - if _file: - builtins.file = self._file - builtins.open = self._open - self._active = True - - def __exit__(self, exc_type, exc_value, traceback): - self._active = False - if _file: - builtins.file = _file - builtins.open = _open - self._copy(_os) - - def run(self, func): - """Run 'func' under os sandboxing""" - with self: - return func() - - def _mk_dual_path_wrapper(name): - original = getattr(_os, name) - - def wrap(self, src, dst, *args, **kw): - if self._active: - src, dst = self._remap_pair(name, src, dst, *args, **kw) - return original(src, dst, *args, **kw) - - return wrap - - for name in ["rename", "link", "symlink"]: - if hasattr(_os, name): - locals()[name] = _mk_dual_path_wrapper(name) - - 
def _mk_single_path_wrapper(name, original=None): - original = original or getattr(_os, name) - - def wrap(self, path, *args, **kw): - if self._active: - path = self._remap_input(name, path, *args, **kw) - return original(path, *args, **kw) - - return wrap - - if _file: - _file = _mk_single_path_wrapper('file', _file) - _open = _mk_single_path_wrapper('open', _open) - for name in [ - "stat", - "listdir", - "chdir", - "open", - "chmod", - "chown", - "mkdir", - "remove", - "unlink", - "rmdir", - "utime", - "lchown", - "chroot", - "lstat", - "startfile", - "mkfifo", - "mknod", - "pathconf", - "access", - ]: - if hasattr(_os, name): - locals()[name] = _mk_single_path_wrapper(name) - - def _mk_single_with_return(name): - original = getattr(_os, name) - - def wrap(self, path, *args, **kw): - if self._active: - path = self._remap_input(name, path, *args, **kw) - return self._remap_output(name, original(path, *args, **kw)) - return original(path, *args, **kw) - - return wrap - - for name in ['readlink', 'tempnam']: - if hasattr(_os, name): - locals()[name] = _mk_single_with_return(name) - - def _mk_query(name): - original = getattr(_os, name) - - def wrap(self, *args, **kw): - retval = original(*args, **kw) - if self._active: - return self._remap_output(name, retval) - return retval - - return wrap - - for name in ['getcwd', 'tmpnam']: - if hasattr(_os, name): - locals()[name] = _mk_query(name) - - def _validate_path(self, path): - """Called to remap or validate any path, whether input or output""" - return path - - def _remap_input(self, operation, path, *args, **kw): - """Called for path inputs""" - return self._validate_path(path) - - def _remap_output(self, operation, path): - """Called for path outputs""" - return self._validate_path(path) - - def _remap_pair(self, operation, src, dst, *args, **kw): - """Called for path pairs like rename, link, and symlink operations""" - return ( - self._remap_input(operation + '-from', src, *args, **kw), - self._remap_input(operation + '-to', dst, *args, **kw), - ) - - -if hasattr(os, 'devnull'): - _EXCEPTIONS = [os.devnull] -else: - _EXCEPTIONS = [] - - -class DirectorySandbox(AbstractSandbox): - """Restrict operations to a single subdirectory - pseudo-chroot""" - - write_ops = dict.fromkeys( - [ - "open", - "chmod", - "chown", - "mkdir", - "remove", - "unlink", - "rmdir", - "utime", - "lchown", - "chroot", - "mkfifo", - "mknod", - "tempnam", - ] - ) - - _exception_patterns = [] - "exempt writing to paths that match the pattern" - - def __init__(self, sandbox, exceptions=_EXCEPTIONS): - self._sandbox = os.path.normcase(os.path.realpath(sandbox)) - self._prefix = os.path.join(self._sandbox, '') - self._exceptions = [ - os.path.normcase(os.path.realpath(path)) for path in exceptions - ] - AbstractSandbox.__init__(self) - - def _violation(self, operation, *args, **kw): - from setuptools.sandbox import SandboxViolation - - raise SandboxViolation(operation, args, kw) - - if _file: - - def _file(self, path, mode='r', *args, **kw): - if mode not in ('r', 'rt', 'rb', 'rU', 'U') and not self._ok(path): - self._violation("file", path, mode, *args, **kw) - return _file(path, mode, *args, **kw) - - def _open(self, path, mode='r', *args, **kw): - if mode not in ('r', 'rt', 'rb', 'rU', 'U') and not self._ok(path): - self._violation("open", path, mode, *args, **kw) - return _open(path, mode, *args, **kw) - - def tmpnam(self): - self._violation("tmpnam") - - def _ok(self, path): - active = self._active - try: - self._active = False - realpath = 
os.path.normcase(os.path.realpath(path)) - return ( - self._exempted(realpath) - or realpath == self._sandbox - or realpath.startswith(self._prefix) - ) - finally: - self._active = active - - def _exempted(self, filepath): - start_matches = ( - filepath.startswith(exception) for exception in self._exceptions - ) - pattern_matches = ( - re.match(pattern, filepath) for pattern in self._exception_patterns - ) - candidates = itertools.chain(start_matches, pattern_matches) - return any(candidates) - - def _remap_input(self, operation, path, *args, **kw): - """Called for path inputs""" - if operation in self.write_ops and not self._ok(path): - self._violation(operation, os.path.realpath(path), *args, **kw) - return path - - def _remap_pair(self, operation, src, dst, *args, **kw): - """Called for path pairs like rename, link, and symlink operations""" - if not self._ok(src) or not self._ok(dst): - self._violation(operation, src, dst, *args, **kw) - return (src, dst) - - def open(self, file, flags, mode=0o777, *args, **kw): - """Called for low-level os.open()""" - if flags & WRITE_FLAGS and not self._ok(file): - self._violation("os.open", file, flags, mode, *args, **kw) - return _os.open(file, flags, mode, *args, **kw) - - -WRITE_FLAGS = functools.reduce( - operator.or_, - [ - getattr(_os, a, 0) - for a in "O_WRONLY O_RDWR O_APPEND O_CREAT O_TRUNC O_TEMPORARY".split() - ], -) - - -class SandboxViolation(DistutilsError): - """A setup script attempted to modify the filesystem outside the sandbox""" - - tmpl = textwrap.dedent( - """ - SandboxViolation: {cmd}{args!r} {kwargs} - - The package setup script has attempted to modify files on your system - that are not within the EasyInstall build area, and has been aborted. - - This package cannot be safely installed by EasyInstall, and may not - support alternate installation locations even if you run its setup - script by hand. Please inform the package's author and the EasyInstall - maintainers to find out if a fix or workaround is available. - """ - ).lstrip() - - def __str__(self): - cmd, args, kwargs = self.args - return self.tmpl.format(**locals()) diff --git a/spaces/BridgeEight/internlm-20B-chat-w4-turbomind/README.md b/spaces/BridgeEight/internlm-20B-chat-w4-turbomind/README.md deleted file mode 100644 index dd223e74e13eae3153396dddcaffdbadfc7d38b5..0000000000000000000000000000000000000000 --- a/spaces/BridgeEight/internlm-20B-chat-w4-turbomind/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: internlm-20b-chat-w4-turbomind -emoji: 🌍 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.44.3 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/MODEL_ZOO.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/MODEL_ZOO.md deleted file mode 100644 index 0510c976f6213f0fb9f106bb7f620c45e5be5670..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/MODEL_ZOO.md +++ /dev/null @@ -1,882 +0,0 @@ -# Detectron2 Model Zoo and Baselines - -## Introduction - -This file documents a large collection of baselines trained -with detectron2 in Sep-Oct, 2019. -All numbers were obtained on [Big Basin](https://engineering.fb.com/data-center-engineering/introducing-big-basin-our-next-generation-ai-hardware/) -servers with 8 NVIDIA V100 GPUs & NVLink. The software in use were PyTorch 1.3, CUDA 9.2, cuDNN 7.4.2 or 7.6.3. 
-You can access these models from code using [detectron2.model_zoo](https://detectron2.readthedocs.io/modules/model_zoo.html) APIs. - -In addition to these official baseline models, you can find more models in [projects/](projects/). - -#### How to Read the Tables -* The "Name" column contains a link to the config file. Running `tools/train_net.py` with this config file - and 8 GPUs will reproduce the model. -* Training speed is averaged across the entire training. - We keep updating the speed with latest version of detectron2/pytorch/etc., - so they might be different from the `metrics` file. -* Inference speed is measured by `tools/train_net.py --eval-only`, or [inference_on_dataset()](https://detectron2.readthedocs.io/modules/evaluation.html#detectron2.evaluation.inference_on_dataset), - with batch size 1 in detectron2 directly. - Measuring it with your own code will likely introduce other overhead. - Actual deployment in production should in general be faster than the given inference - speed due to more optimizations. -* The *model id* column is provided for ease of reference. - To check downloaded file integrity, any model on this page contains its md5 prefix in its file name. -* Training curves and other statistics can be found in `metrics` for each model. - -#### Common Settings for COCO Models -* All COCO models were trained on `train2017` and evaluated on `val2017`. -* The default settings are __not directly comparable__ with Detectron's standard settings. - For example, our default training data augmentation uses scale jittering in addition to horizontal flipping. - - To make fair comparisons with Detectron's settings, see - [Detectron1-Comparisons](configs/Detectron1-Comparisons/) for accuracy comparison, - and [benchmarks](https://detectron2.readthedocs.io/notes/benchmarks.html) - for speed comparison. -* For Faster/Mask R-CNN, we provide baselines based on __3 different backbone combinations__: - * __FPN__: Use a ResNet+FPN backbone with standard conv and FC heads for mask and box prediction, - respectively. It obtains the best - speed/accuracy tradeoff, but the other two are still useful for research. - * __C4__: Use a ResNet conv4 backbone with conv5 head. The original baseline in the Faster R-CNN paper. - * __DC5__ (Dilated-C5): Use a ResNet conv5 backbone with dilations in conv5, and standard conv and FC heads - for mask and box prediction, respectively. - This is used by the Deformable ConvNet paper. -* Most models are trained with the 3x schedule (~37 COCO epochs). - Although 1x models are heavily under-trained, we provide some ResNet-50 models with the 1x (~12 COCO epochs) - training schedule for comparison when doing quick research iteration. - -#### ImageNet Pretrained Models - -We provide backbone models pretrained on ImageNet-1k dataset. -These models have __different__ format from those provided in Detectron: we do not fuse BatchNorm into an affine layer. -* [R-50.pkl](https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/MSRA/R-50.pkl): converted copy of [MSRA's original ResNet-50](https://github.com/KaimingHe/deep-residual-networks) model. -* [R-101.pkl](https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/MSRA/R-101.pkl): converted copy of [MSRA's original ResNet-101](https://github.com/KaimingHe/deep-residual-networks) model. -* [X-101-32x8d.pkl](https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/FAIR/X-101-32x8d.pkl): ResNeXt-101-32x8d model trained with Caffe2 at FB. - -Pretrained models in Detectron's format can still be used. 
For example:
-* [X-152-32x8d-IN5k.pkl](https://dl.fbaipublicfiles.com/detectron/ImageNetPretrained/25093814/X-152-32x8d-IN5k.pkl):
-  ResNeXt-152-32x8d model trained on ImageNet-5k with Caffe2 at FB (see ResNeXt paper for details on ImageNet-5k).
-* [R-50-GN.pkl](https://dl.fbaipublicfiles.com/detectron/ImageNetPretrained/47261647/R-50-GN.pkl):
-  ResNet-50 with Group Normalization.
-* [R-101-GN.pkl](https://dl.fbaipublicfiles.com/detectron/ImageNetPretrained/47592356/R-101-GN.pkl):
-  ResNet-101 with Group Normalization.
-
-Torchvision's ResNet models can be used after converted by [this script](tools/convert-torchvision-to-d2.py).
-
-#### License
-
-All models available for download through this document are licensed under the
-[Creative Commons Attribution-ShareAlike 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/).
-
-### COCO Object Detection Baselines
-
-#### Faster R-CNN:
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | model id | download |
-|---|---|---|---|---|---|---|---|
-| R50-C4 | 1x | 0.551 | 0.102 | 4.8 | 35.7 | 137257644 | model \| metrics |
-| R50-DC5 | 1x | 0.380 | 0.068 | 5.0 | 37.3 | 137847829 | model \| metrics |
-| R50-FPN | 1x | 0.210 | 0.038 | 3.0 | 37.9 | 137257794 | model \| metrics |
-| R50-C4 | 3x | 0.543 | 0.104 | 4.8 | 38.4 | 137849393 | model \| metrics |
-| R50-DC5 | 3x | 0.378 | 0.070 | 5.0 | 39.0 | 137849425 | model \| metrics |
-| R50-FPN | 3x | 0.209 | 0.038 | 3.0 | 40.2 | 137849458 | model \| metrics |
-| R101-C4 | 3x | 0.619 | 0.139 | 5.9 | 41.1 | 138204752 | model \| metrics |
-| R101-DC5 | 3x | 0.452 | 0.086 | 6.1 | 40.6 | 138204841 | model \| metrics |
-| R101-FPN | 3x | 0.286 | 0.051 | 4.1 | 42.0 | 137851257 | model \| metrics |
-| X101-FPN | 3x | 0.638 | 0.098 | 6.7 | 43.0 | 139173657 | model \| metrics |
-
-#### RetinaNet:
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | model id | download |
-|---|---|---|---|---|---|---|---|
-| R50 | 1x | 0.200 | 0.055 | 3.9 | 36.5 | 137593951 | model \| metrics |
-| R50 | 3x | 0.201 | 0.055 | 3.9 | 37.9 | 137849486 | model \| metrics |
-| R101 | 3x | 0.280 | 0.068 | 5.1 | 39.9 | 138363263 | model \| metrics |
-
-#### RPN & Fast R-CNN:
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | prop. AR | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| RPN R50-C4 | 1x | 0.130 | 0.034 | 1.5 | | 51.6 | 137258005 | model \| metrics |
-| RPN R50-FPN | 1x | 0.186 | 0.032 | 2.7 | | 58.0 | 137258492 | model \| metrics |
-| Fast R-CNN R50-FPN | 1x | 0.140 | 0.029 | 2.6 | 37.8 | | 137635226 | model \| metrics |
-
-### COCO Instance Segmentation Baselines with Mask R-CNN
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | mask AP | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| R50-C4 | 1x | 0.584 | 0.110 | 5.2 | 36.8 | 32.2 | 137259246 | model \| metrics |
-| R50-DC5 | 1x | 0.471 | 0.076 | 6.5 | 38.3 | 34.2 | 137260150 | model \| metrics |
-| R50-FPN | 1x | 0.261 | 0.043 | 3.4 | 38.6 | 35.2 | 137260431 | model \| metrics |
-| R50-C4 | 3x | 0.575 | 0.111 | 5.2 | 39.8 | 34.4 | 137849525 | model \| metrics |
-| R50-DC5 | 3x | 0.470 | 0.076 | 6.5 | 40.0 | 35.9 | 137849551 | model \| metrics |
-| R50-FPN | 3x | 0.261 | 0.043 | 3.4 | 41.0 | 37.2 | 137849600 | model \| metrics |
-| R101-C4 | 3x | 0.652 | 0.145 | 6.3 | 42.6 | 36.7 | 138363239 | model \| metrics |
-| R101-DC5 | 3x | 0.545 | 0.092 | 7.6 | 41.9 | 37.3 | 138363294 | model \| metrics |
-| R101-FPN | 3x | 0.340 | 0.056 | 4.6 | 42.9 | 38.6 | 138205316 | model \| metrics |
-| X101-FPN | 3x | 0.690 | 0.103 | 7.2 | 44.3 | 39.5 | 139653917 | model \| metrics |
-
-### COCO Person Keypoint Detection Baselines with Keypoint R-CNN
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | kp. AP | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| R50-FPN | 1x | 0.315 | 0.072 | 5.0 | 53.6 | 64.0 | 137261548 | model \| metrics |
-| R50-FPN | 3x | 0.316 | 0.066 | 5.0 | 55.4 | 65.5 | 137849621 | model \| metrics |
-| R101-FPN | 3x | 0.390 | 0.076 | 6.1 | 56.4 | 66.1 | 138363331 | model \| metrics |
-| X101-FPN | 3x | 0.738 | 0.121 | 8.7 | 57.3 | 66.0 | 139686956 | model \| metrics |
-
-### COCO Panoptic Segmentation Baselines with Panoptic FPN
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | mask AP | PQ | model id | download |
-|---|---|---|---|---|---|---|---|---|---|
-| R50-FPN | 1x | 0.304 | 0.053 | 4.8 | 37.6 | 34.7 | 39.4 | 139514544 | model \| metrics |
-| R50-FPN | 3x | 0.302 | 0.053 | 4.8 | 40.0 | 36.5 | 41.5 | 139514569 | model \| metrics |
-| R101-FPN | 3x | 0.392 | 0.066 | 6.0 | 42.4 | 38.5 | 43.0 | 139514519 | model \| metrics |
-
-### LVIS Instance Segmentation Baselines with Mask R-CNN
-
-Mask R-CNN baselines on the [LVIS dataset](https://lvisdataset.org), v0.5.
-These baselines are described in Table 3(c) of the [LVIS paper](https://arxiv.org/abs/1908.03195).
-
-NOTE: the 1x schedule here has the same amount of __iterations__ as the COCO 1x baselines.
-They are roughly 24 epochs of LVISv0.5 data.
-The final results of these configs have large variance across different runs.
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | mask AP | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| R50-FPN | 1x | 0.292 | 0.107 | 7.1 | 23.6 | 24.4 | 144219072 | model \| metrics |
-| R101-FPN | 1x | 0.371 | 0.114 | 7.8 | 25.6 | 25.9 | 144219035 | model \| metrics |
-| X101-FPN | 1x | 0.712 | 0.151 | 10.2 | 26.7 | 27.1 | 144219108 | model \| metrics |
-
-### Cityscapes & Pascal VOC Baselines
-
-Simple baselines for
-* Mask R-CNN on Cityscapes instance segmentation (initialized from COCO pre-training, then trained on Cityscapes fine annotations only)
-* Faster R-CNN on PASCAL VOC object detection (trained on VOC 2007 train+val + VOC 2012 train+val, tested on VOC 2007 using 11-point interpolated AP)
-
-| Name | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | box AP50 | mask AP | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| R50-FPN, Cityscapes | 0.240 | 0.078 | 4.4 | | | 36.5 | 142423278 | model \| metrics |
-| R50-C4, VOC | 0.537 | 0.081 | 4.8 | 51.9 | 80.3 | | 142202221 | model \| metrics |
-
-### Other Settings
-
-Ablations for Deformable Conv and Cascade R-CNN:
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | mask AP | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| Baseline R50-FPN | 1x | 0.261 | 0.043 | 3.4 | 38.6 | 35.2 | 137260431 | model \| metrics |
-| Deformable Conv | 1x | 0.342 | 0.048 | 3.5 | 41.5 | 37.5 | 138602867 | model \| metrics |
-| Cascade R-CNN | 1x | 0.317 | 0.052 | 4.0 | 42.1 | 36.4 | 138602847 | model \| metrics |
-| Baseline R50-FPN | 3x | 0.261 | 0.043 | 3.4 | 41.0 | 37.2 | 137849600 | model \| metrics |
-| Deformable Conv | 3x | 0.349 | 0.047 | 3.5 | 42.7 | 38.5 | 144998336 | model \| metrics |
-| Cascade R-CNN | 3x | 0.328 | 0.053 | 4.0 | 44.3 | 38.5 | 144998488 | model \| metrics |
-
-Ablations for normalization methods:
-(Note: The baseline uses `2fc` head while the others use `4conv1fc` head. According to the
-[GroupNorm paper](https://arxiv.org/abs/1803.08494), the change in head does not improve the baseline by much)
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | mask AP | model id | download |
-|---|---|---|---|---|---|---|---|---|
-| Baseline R50-FPN | 3x | 0.261 | 0.043 | 3.4 | 41.0 | 37.2 | 137849600 | model \| metrics |
-| SyncBN | 3x | 0.412 | 0.053 | 5.5 | 41.9 | 37.8 | 169527823 | model \| metrics |
-| GN | 3x | 0.356 | 0.069 | 7.3 | 42.6 | 38.6 | 138602888 | model \| metrics |
-| GN (scratch) | 3x | 0.400 | 0.069 | 9.8 | 39.9 | 36.6 | 138602908 | model \| metrics |
-
-A few very large models trained for a long time, for demo purposes:
-
-| Name | inference time (s/im) | train mem (GB) | box AP | mask AP | PQ | model id | download |
-|---|---|---|---|---|---|---|---|
-| Panoptic FPN R101 | 0.107 | 11.4 | 47.4 | 41.3 | 46.1 | 139797668 | model \| metrics |
-| Mask R-CNN X152 | 0.242 | 15.1 | 50.2 | 44.0 | | 18131413 | model \| metrics |
-| above + test-time aug. | | | 51.9 | 45.9 | | | |
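The deleted MODEL_ZOO.md above points readers at the `detectron2.model_zoo` API as the way to consume these baselines from code. As a minimal sketch (an editor's illustration, not part of the original file), loading the R50-FPN 3x Faster R-CNN row and running it on an image might look like the following; the `input.jpg` path and the 0.5 score threshold are placeholder choices, and the exact API surface can vary slightly across detectron2 versions.

```python
# Illustrative sketch only; assumes detectron2 and OpenCV are installed.
# The config name is one of the baselines tabulated above; "input.jpg" is a placeholder.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

CONFIG = "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"  # R50-FPN, 3x schedule row above

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(CONFIG))   # config file shipped with detectron2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(CONFIG) # URL of the matching pretrained weights
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5              # report boxes above this confidence

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input.jpg"))             # BGR image, as DefaultPredictor expects
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)
```

In recent detectron2 releases, `model_zoo.get(CONFIG, trained=True)` is a shorter route to the same pretrained model object when no predictor wrapper is needed.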
    diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/grid_feats/__init__.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/grid_feats/__init__.py deleted file mode 100644 index 985aa67457644d3f8ea1cf071499c5a17a387797..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/grid_feats/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from .config import add_attribute_config -from .build_loader import ( - build_detection_train_loader_with_attributes, - build_detection_test_loader_with_attributes, -) -from .roi_heads import AttributeRes5ROIHeads, AttributeStandardROIHeads -from . import visual_genome \ No newline at end of file diff --git a/spaces/CVPR/LIVE/pybind11/tests/pybind11_cross_module_tests.cpp b/spaces/CVPR/LIVE/pybind11/tests/pybind11_cross_module_tests.cpp deleted file mode 100644 index f705e310611619dff319f9b5d53b71e6fd54aec5..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/pybind11_cross_module_tests.cpp +++ /dev/null @@ -1,123 +0,0 @@ -/* - tests/pybind11_cross_module_tests.cpp -- contains tests that require multiple modules - - Copyright (c) 2017 Jason Rhinelander - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#include "pybind11_tests.h" -#include "local_bindings.h" -#include -#include - -PYBIND11_MODULE(pybind11_cross_module_tests, m) { - m.doc() = "pybind11 cross-module test module"; - - // test_local_bindings.py tests: - // - // Definitions here are tested by importing both this module and the - // relevant pybind11_tests submodule from a test_whatever.py - - // test_load_external - bind_local(m, "ExternalType1", py::module_local()); - bind_local(m, "ExternalType2", py::module_local()); - - // test_exceptions.py - m.def("raise_runtime_error", []() { PyErr_SetString(PyExc_RuntimeError, "My runtime error"); throw py::error_already_set(); }); - m.def("raise_value_error", []() { PyErr_SetString(PyExc_ValueError, "My value error"); throw py::error_already_set(); }); - m.def("throw_pybind_value_error", []() { throw py::value_error("pybind11 value error"); }); - m.def("throw_pybind_type_error", []() { throw py::type_error("pybind11 type error"); }); - m.def("throw_stop_iteration", []() { throw py::stop_iteration(); }); - - // test_local_bindings.py - // Local to both: - bind_local(m, "LocalType", py::module_local()) - .def("get2", [](LocalType &t) { return t.i + 2; }) - ; - - // Can only be called with our python type: - m.def("local_value", [](LocalType &l) { return l.i; }); - - // test_nonlocal_failure - // This registration will fail (global registration when LocalFail is already registered - // globally in the main test module): - m.def("register_nonlocal", [m]() { - bind_local(m, "NonLocalType"); - }); - - // test_stl_bind_local - // stl_bind.h binders defaults to py::module_local if the types are local or converting: - py::bind_vector(m, "LocalVec"); - py::bind_map(m, "LocalMap"); - - // test_stl_bind_global - // and global if the type (or one of the types, for the map) is global (so these will fail, - // assuming pybind11_tests is already loaded): - m.def("register_nonlocal_vec", [m]() { - py::bind_vector(m, "NonLocalVec"); - }); - m.def("register_nonlocal_map", [m]() { - py::bind_map(m, "NonLocalMap"); - }); - // The default can, however, be overridden to global using `py::module_local()` or - // 
`py::module_local(false)`. - // Explicitly made local: - py::bind_vector(m, "NonLocalVec2", py::module_local()); - // Explicitly made global (and so will fail to bind): - m.def("register_nonlocal_map2", [m]() { - py::bind_map(m, "NonLocalMap2", py::module_local(false)); - }); - - // test_mixed_local_global - // We try this both with the global type registered first and vice versa (the order shouldn't - // matter). - m.def("register_mixed_global_local", [m]() { - bind_local(m, "MixedGlobalLocal", py::module_local()); - }); - m.def("register_mixed_local_global", [m]() { - bind_local(m, "MixedLocalGlobal", py::module_local(false)); - }); - m.def("get_mixed_gl", [](int i) { return MixedGlobalLocal(i); }); - m.def("get_mixed_lg", [](int i) { return MixedLocalGlobal(i); }); - - // test_internal_locals_differ - m.def("local_cpp_types_addr", []() { return (uintptr_t) &py::detail::registered_local_types_cpp(); }); - - // test_stl_caster_vs_stl_bind - py::bind_vector>(m, "VectorInt"); - - m.def("load_vector_via_binding", [](std::vector &v) { - return std::accumulate(v.begin(), v.end(), 0); - }); - - // test_cross_module_calls - m.def("return_self", [](LocalVec *v) { return v; }); - m.def("return_copy", [](const LocalVec &v) { return LocalVec(v); }); - - class Dog : public pets::Pet { public: Dog(std::string name) : Pet(name) {}; }; - py::class_(m, "Pet", py::module_local()) - .def("name", &pets::Pet::name); - // Binding for local extending class: - py::class_(m, "Dog") - .def(py::init()); - m.def("pet_name", [](pets::Pet &p) { return p.name(); }); - - py::class_(m, "MixGL", py::module_local()).def(py::init()); - m.def("get_gl_value", [](MixGL &o) { return o.i + 100; }); - - py::class_(m, "MixGL2", py::module_local()).def(py::init()); - - // test_vector_bool - // We can't test both stl.h and stl_bind.h conversions of `std::vector` within - // the same module (it would be an ODR violation). Therefore `bind_vector` of `bool` - // is defined here and tested in `test_stl_binders.py`. - py::bind_vector>(m, "VectorBool"); - - // test_missing_header_message - // The main module already includes stl.h, but we need to test the error message - // which appears when this header is missing. - m.def("missing_header_arg", [](std::vector) { }); - m.def("missing_header_return", []() { return std::vector(); }); -} diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_custom_type_casters.py b/spaces/CVPR/LIVE/pybind11/tests/test_custom_type_casters.py deleted file mode 100644 index 9475c4516845632da6c6c5b918ae05401d8f3f01..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_custom_type_casters.py +++ /dev/null @@ -1,90 +0,0 @@ -# -*- coding: utf-8 -*- -import pytest -from pybind11_tests import custom_type_casters as m - - -def test_noconvert_args(msg): - a = m.ArgInspector() - assert msg(a.f("hi")) == """ - loading ArgInspector1 argument WITH conversion allowed. Argument value = hi - """ - assert msg(a.g("this is a", "this is b")) == """ - loading ArgInspector1 argument WITHOUT conversion allowed. Argument value = this is a - loading ArgInspector1 argument WITH conversion allowed. Argument value = this is b - 13 - loading ArgInspector2 argument WITH conversion allowed. Argument value = (default arg inspector 2) - """ # noqa: E501 line too long - assert msg(a.g("this is a", "this is b", 42)) == """ - loading ArgInspector1 argument WITHOUT conversion allowed. Argument value = this is a - loading ArgInspector1 argument WITH conversion allowed. 
Argument value = this is b - 42 - loading ArgInspector2 argument WITH conversion allowed. Argument value = (default arg inspector 2) - """ # noqa: E501 line too long - assert msg(a.g("this is a", "this is b", 42, "this is d")) == """ - loading ArgInspector1 argument WITHOUT conversion allowed. Argument value = this is a - loading ArgInspector1 argument WITH conversion allowed. Argument value = this is b - 42 - loading ArgInspector2 argument WITH conversion allowed. Argument value = this is d - """ - assert (a.h("arg 1") == - "loading ArgInspector2 argument WITHOUT conversion allowed. Argument value = arg 1") - assert msg(m.arg_inspect_func("A1", "A2")) == """ - loading ArgInspector2 argument WITH conversion allowed. Argument value = A1 - loading ArgInspector1 argument WITHOUT conversion allowed. Argument value = A2 - """ - - assert m.floats_preferred(4) == 2.0 - assert m.floats_only(4.0) == 2.0 - with pytest.raises(TypeError) as excinfo: - m.floats_only(4) - assert msg(excinfo.value) == """ - floats_only(): incompatible function arguments. The following argument types are supported: - 1. (f: float) -> float - - Invoked with: 4 - """ - - assert m.ints_preferred(4) == 2 - assert m.ints_preferred(True) == 0 - with pytest.raises(TypeError) as excinfo: - m.ints_preferred(4.0) - assert msg(excinfo.value) == """ - ints_preferred(): incompatible function arguments. The following argument types are supported: - 1. (i: int) -> int - - Invoked with: 4.0 - """ # noqa: E501 line too long - - assert m.ints_only(4) == 2 - with pytest.raises(TypeError) as excinfo: - m.ints_only(4.0) - assert msg(excinfo.value) == """ - ints_only(): incompatible function arguments. The following argument types are supported: - 1. (i: int) -> int - - Invoked with: 4.0 - """ - - -def test_custom_caster_destruction(): - """Tests that returning a pointer to a type that gets converted with a custom type caster gets - destroyed when the function has py::return_value_policy::take_ownership policy applied.""" - - cstats = m.destruction_tester_cstats() - # This one *doesn't* have take_ownership: the pointer should be used but not destroyed: - z = m.custom_caster_no_destroy() - assert cstats.alive() == 1 and cstats.default_constructions == 1 - assert z - - # take_ownership applied: this constructs a new object, casts it, then destroys it: - z = m.custom_caster_destroy() - assert z - assert cstats.default_constructions == 2 - - # Same, but with a const pointer return (which should *not* inhibit destruction): - z = m.custom_caster_destroy_const() - assert z - assert cstats.default_constructions == 3 - - # Make sure we still only have the original object (from ..._no_destroy()) alive: - assert cstats.alive() == 1 diff --git a/spaces/CVPR/WALT/mmdet/models/detectors/fsaf.py b/spaces/CVPR/WALT/mmdet/models/detectors/fsaf.py deleted file mode 100644 index 9f10fa1ae10f31e6cb5de65505b14a4fc97dd022..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/detectors/fsaf.py +++ /dev/null @@ -1,17 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class FSAF(SingleStageDetector): - """Implementation of `FSAF `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(FSAF, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/CVPR/regionclip-demo/detectron2/checkpoint/c2_model_loading.py 
b/spaces/CVPR/regionclip-demo/detectron2/checkpoint/c2_model_loading.py deleted file mode 100644 index 8c8d181bd7200bd3fd38446e743f8f16780d6e76..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/checkpoint/c2_model_loading.py +++ /dev/null @@ -1,407 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import logging -import re -from typing import Dict, List -import torch -from tabulate import tabulate - - -def convert_basic_c2_names(original_keys): - """ - Apply some basic name conversion to names in C2 weights. - It only deals with typical backbone models. - - Args: - original_keys (list[str]): - Returns: - list[str]: The same number of strings matching those in original_keys. - """ - layer_keys = copy.deepcopy(original_keys) - layer_keys = [ - {"pred_b": "linear_b", "pred_w": "linear_w"}.get(k, k) for k in layer_keys - ] # some hard-coded mappings - - layer_keys = [k.replace("_", ".") for k in layer_keys] - layer_keys = [re.sub("\\.b$", ".bias", k) for k in layer_keys] - layer_keys = [re.sub("\\.w$", ".weight", k) for k in layer_keys] - # Uniform both bn and gn names to "norm" - layer_keys = [re.sub("bn\\.s$", "norm.weight", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.bias$", "norm.bias", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.rm", "norm.running_mean", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.running.mean$", "norm.running_mean", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.riv$", "norm.running_var", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.running.var$", "norm.running_var", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.gamma$", "norm.weight", k) for k in layer_keys] - layer_keys = [re.sub("bn\\.beta$", "norm.bias", k) for k in layer_keys] - layer_keys = [re.sub("gn\\.s$", "norm.weight", k) for k in layer_keys] - layer_keys = [re.sub("gn\\.bias$", "norm.bias", k) for k in layer_keys] - - # stem - layer_keys = [re.sub("^res\\.conv1\\.norm\\.", "conv1.norm.", k) for k in layer_keys] - # to avoid mis-matching with "conv1" in other components (e.g. detection head) - layer_keys = [re.sub("^conv1\\.", "stem.conv1.", k) for k in layer_keys] - - # layer1-4 is used by torchvision, however we follow the C2 naming strategy (res2-5) - # layer_keys = [re.sub("^res2.", "layer1.", k) for k in layer_keys] - # layer_keys = [re.sub("^res3.", "layer2.", k) for k in layer_keys] - # layer_keys = [re.sub("^res4.", "layer3.", k) for k in layer_keys] - # layer_keys = [re.sub("^res5.", "layer4.", k) for k in layer_keys] - - # blocks - layer_keys = [k.replace(".branch1.", ".shortcut.") for k in layer_keys] - layer_keys = [k.replace(".branch2a.", ".conv1.") for k in layer_keys] - layer_keys = [k.replace(".branch2b.", ".conv2.") for k in layer_keys] - layer_keys = [k.replace(".branch2c.", ".conv3.") for k in layer_keys] - - # DensePose substitutions - layer_keys = [re.sub("^body.conv.fcn", "body_conv_fcn", k) for k in layer_keys] - layer_keys = [k.replace("AnnIndex.lowres", "ann_index_lowres") for k in layer_keys] - layer_keys = [k.replace("Index.UV.lowres", "index_uv_lowres") for k in layer_keys] - layer_keys = [k.replace("U.lowres", "u_lowres") for k in layer_keys] - layer_keys = [k.replace("V.lowres", "v_lowres") for k in layer_keys] - return layer_keys - - -def convert_c2_detectron_names(weights): - """ - Map Caffe2 Detectron weight names to Detectron2 names. 
- - Args: - weights (dict): name -> tensor - - Returns: - dict: detectron2 names -> tensor - dict: detectron2 names -> C2 names - """ - logger = logging.getLogger(__name__) - logger.info("Renaming Caffe2 weights ......") - original_keys = sorted(weights.keys()) - layer_keys = copy.deepcopy(original_keys) - - layer_keys = convert_basic_c2_names(layer_keys) - - # -------------------------------------------------------------------------- - # RPN hidden representation conv - # -------------------------------------------------------------------------- - # FPN case - # In the C2 model, the RPN hidden layer conv is defined for FPN level 2 and then - # shared for all other levels, hence the appearance of "fpn2" - layer_keys = [ - k.replace("conv.rpn.fpn2", "proposal_generator.rpn_head.conv") for k in layer_keys - ] - # Non-FPN case - layer_keys = [k.replace("conv.rpn", "proposal_generator.rpn_head.conv") for k in layer_keys] - - # -------------------------------------------------------------------------- - # RPN box transformation conv - # -------------------------------------------------------------------------- - # FPN case (see note above about "fpn2") - layer_keys = [ - k.replace("rpn.bbox.pred.fpn2", "proposal_generator.rpn_head.anchor_deltas") - for k in layer_keys - ] - layer_keys = [ - k.replace("rpn.cls.logits.fpn2", "proposal_generator.rpn_head.objectness_logits") - for k in layer_keys - ] - # Non-FPN case - layer_keys = [ - k.replace("rpn.bbox.pred", "proposal_generator.rpn_head.anchor_deltas") for k in layer_keys - ] - layer_keys = [ - k.replace("rpn.cls.logits", "proposal_generator.rpn_head.objectness_logits") - for k in layer_keys - ] - - # -------------------------------------------------------------------------- - # Fast R-CNN box head - # -------------------------------------------------------------------------- - layer_keys = [re.sub("^bbox\\.pred", "bbox_pred", k) for k in layer_keys] - layer_keys = [re.sub("^cls\\.score", "cls_score", k) for k in layer_keys] - layer_keys = [re.sub("^fc6\\.", "box_head.fc1.", k) for k in layer_keys] - layer_keys = [re.sub("^fc7\\.", "box_head.fc2.", k) for k in layer_keys] - # 4conv1fc head tensor names: head_conv1_w, head_conv1_gn_s - layer_keys = [re.sub("^head\\.conv", "box_head.conv", k) for k in layer_keys] - - # -------------------------------------------------------------------------- - # FPN lateral and output convolutions - # -------------------------------------------------------------------------- - def fpn_map(name): - """ - Look for keys with the following patterns: - 1) Starts with "fpn.inner." 
- Example: "fpn.inner.res2.2.sum.lateral.weight" - Meaning: These are lateral pathway convolutions - 2) Starts with "fpn.res" - Example: "fpn.res2.2.sum.weight" - Meaning: These are FPN output convolutions - """ - splits = name.split(".") - norm = ".norm" if "norm" in splits else "" - if name.startswith("fpn.inner."): - # splits example: ['fpn', 'inner', 'res2', '2', 'sum', 'lateral', 'weight'] - stage = int(splits[2][len("res") :]) - return "fpn_lateral{}{}.{}".format(stage, norm, splits[-1]) - elif name.startswith("fpn.res"): - # splits example: ['fpn', 'res2', '2', 'sum', 'weight'] - stage = int(splits[1][len("res") :]) - return "fpn_output{}{}.{}".format(stage, norm, splits[-1]) - return name - - layer_keys = [fpn_map(k) for k in layer_keys] - - # -------------------------------------------------------------------------- - # Mask R-CNN mask head - # -------------------------------------------------------------------------- - # roi_heads.StandardROIHeads case - layer_keys = [k.replace(".[mask].fcn", "mask_head.mask_fcn") for k in layer_keys] - layer_keys = [re.sub("^\\.mask\\.fcn", "mask_head.mask_fcn", k) for k in layer_keys] - layer_keys = [k.replace("mask.fcn.logits", "mask_head.predictor") for k in layer_keys] - # roi_heads.Res5ROIHeads case - layer_keys = [k.replace("conv5.mask", "mask_head.deconv") for k in layer_keys] - - # -------------------------------------------------------------------------- - # Keypoint R-CNN head - # -------------------------------------------------------------------------- - # interestingly, the keypoint head convs have blob names that are simply "conv_fcnX" - layer_keys = [k.replace("conv.fcn", "roi_heads.keypoint_head.conv_fcn") for k in layer_keys] - layer_keys = [ - k.replace("kps.score.lowres", "roi_heads.keypoint_head.score_lowres") for k in layer_keys - ] - layer_keys = [k.replace("kps.score.", "roi_heads.keypoint_head.score.") for k in layer_keys] - - # -------------------------------------------------------------------------- - # Done with replacements - # -------------------------------------------------------------------------- - assert len(set(layer_keys)) == len(layer_keys) - assert len(original_keys) == len(layer_keys) - - new_weights = {} - new_keys_to_original_keys = {} - for orig, renamed in zip(original_keys, layer_keys): - new_keys_to_original_keys[renamed] = orig - if renamed.startswith("bbox_pred.") or renamed.startswith("mask_head.predictor."): - # remove the meaningless prediction weight for background class - new_start_idx = 4 if renamed.startswith("bbox_pred.") else 1 - new_weights[renamed] = weights[orig][new_start_idx:] - logger.info( - "Remove prediction weight for background class in {}. The shape changes from " - "{} to {}.".format( - renamed, tuple(weights[orig].shape), tuple(new_weights[renamed].shape) - ) - ) - elif renamed.startswith("cls_score."): - # move weights of bg class from original index 0 to last index - logger.info( - "Move classification weights for background class in {} from index 0 to " - "index {}.".format(renamed, weights[orig].shape[0] - 1) - ) - new_weights[renamed] = torch.cat([weights[orig][1:], weights[orig][:1]]) - else: - new_weights[renamed] = weights[orig] - - return new_weights, new_keys_to_original_keys - - -# Note the current matching is not symmetric. -# it assumes model_state_dict will have longer names. 
-def align_and_update_state_dicts(model_state_dict, ckpt_state_dict, c2_conversion=True): - """ - Match names between the two state-dict, and returns a new chkpt_state_dict with names - converted to match model_state_dict with heuristics. The returned dict can be later - loaded with fvcore checkpointer. - If `c2_conversion==True`, `ckpt_state_dict` is assumed to be a Caffe2 - model and will be renamed at first. - - Strategy: suppose that the models that we will create will have prefixes appended - to each of its keys, for example due to an extra level of nesting that the original - pre-trained weights from ImageNet won't contain. For example, model.state_dict() - might return backbone[0].body.res2.conv1.weight, while the pre-trained model contains - res2.conv1.weight. We thus want to match both parameters together. - For that, we look for each model weight, look among all loaded keys if there is one - that is a suffix of the current weight name, and use it if that's the case. - If multiple matches exist, take the one with longest size - of the corresponding name. For example, for the same model as before, the pretrained - weight file can contain both res2.conv1.weight, as well as conv1.weight. In this case, - we want to match backbone[0].body.conv1.weight to conv1.weight, and - backbone[0].body.res2.conv1.weight to res2.conv1.weight. - """ - model_keys = sorted(model_state_dict.keys()) - if c2_conversion: - ckpt_state_dict, original_keys = convert_c2_detectron_names(ckpt_state_dict) - # original_keys: the name in the original dict (before renaming) - else: - original_keys = {x: x for x in ckpt_state_dict.keys()} - ckpt_keys = sorted(ckpt_state_dict.keys()) - - def match(a, b): - # Matched ckpt_key should be a complete (starts with '.') suffix. - # For example, roi_heads.mesh_head.whatever_conv1 does not match conv1, - # but matches whatever_conv1 or mesh_head.whatever_conv1. - return a == b or a.endswith("." + b) - - # get a matrix of string matches, where each (i, j) entry correspond to the size of the - # ckpt_key string, if it matches - match_matrix = [len(j) if match(i, j) else 0 for i in model_keys for j in ckpt_keys] - match_matrix = torch.as_tensor(match_matrix).view(len(model_keys), len(ckpt_keys)) - # use the matched one with longest size in case of multiple matches - max_match_size, idxs = match_matrix.max(1) - # remove indices that correspond to no-match - idxs[max_match_size == 0] = -1 - - logger = logging.getLogger(__name__) - # matched_pairs (matched checkpoint key --> matched model key) - matched_keys = {} - result_state_dict = {} - for idx_model, idx_ckpt in enumerate(idxs.tolist()): - if idx_ckpt == -1: - continue - key_model = model_keys[idx_model] - key_ckpt = ckpt_keys[idx_ckpt] - value_ckpt = ckpt_state_dict[key_ckpt] - shape_in_model = model_state_dict[key_model].shape - - if shape_in_model != value_ckpt.shape: - logger.warning( - "Shape of {} in checkpoint is {}, while shape of {} in model is {}.".format( - key_ckpt, value_ckpt.shape, key_model, shape_in_model - ) - ) - logger.warning( - "{} will not be loaded. Please double check and see if this is desired.".format( - key_ckpt - ) - ) - continue - - assert key_model not in result_state_dict - result_state_dict[key_model] = value_ckpt - if key_ckpt in matched_keys: # already added to matched_keys - logger.error( - "Ambiguity found for {} in checkpoint!" 
- "It matches at least two keys in the model ({} and {}).".format( - key_ckpt, key_model, matched_keys[key_ckpt] - ) - ) - raise ValueError("Cannot match one checkpoint key to multiple keys in the model.") - - matched_keys[key_ckpt] = key_model - - # logging: - matched_model_keys = sorted(matched_keys.values()) - if len(matched_model_keys) == 0: - logger.warning("No weights in checkpoint matched with model.") - return ckpt_state_dict - common_prefix = _longest_common_prefix(matched_model_keys) - rev_matched_keys = {v: k for k, v in matched_keys.items()} - original_keys = {k: original_keys[rev_matched_keys[k]] for k in matched_model_keys} - - model_key_groups = _group_keys_by_module(matched_model_keys, original_keys) - table = [] - memo = set() - for key_model in matched_model_keys: - if key_model in memo: - continue - if key_model in model_key_groups: - group = model_key_groups[key_model] - memo |= set(group) - shapes = [tuple(model_state_dict[k].shape) for k in group] - table.append( - ( - _longest_common_prefix([k[len(common_prefix) :] for k in group]) + "*", - _group_str([original_keys[k] for k in group]), - " ".join([str(x).replace(" ", "") for x in shapes]), - ) - ) - else: - key_checkpoint = original_keys[key_model] - shape = str(tuple(model_state_dict[key_model].shape)) - table.append((key_model[len(common_prefix) :], key_checkpoint, shape)) - table_str = tabulate( - table, tablefmt="pipe", headers=["Names in Model", "Names in Checkpoint", "Shapes"] - ) - logger.info( - "Following weights matched with " - + (f"submodule {common_prefix[:-1]}" if common_prefix else "model") - + ":\n" - + table_str - ) - - unmatched_ckpt_keys = [k for k in ckpt_keys if k not in set(matched_keys.keys())] - for k in unmatched_ckpt_keys: - result_state_dict[k] = ckpt_state_dict[k] - return result_state_dict - - -def _group_keys_by_module(keys: List[str], original_names: Dict[str, str]): - """ - Params in the same submodule are grouped together. - - Args: - keys: names of all parameters - original_names: mapping from parameter name to their name in the checkpoint - - Returns: - dict[name -> all other names in the same group] - """ - - def _submodule_name(key): - pos = key.rfind(".") - if pos < 0: - return None - prefix = key[: pos + 1] - return prefix - - all_submodules = [_submodule_name(k) for k in keys] - all_submodules = [x for x in all_submodules if x] - all_submodules = sorted(all_submodules, key=len) - - ret = {} - for prefix in all_submodules: - group = [k for k in keys if k.startswith(prefix)] - if len(group) <= 1: - continue - original_name_lcp = _longest_common_prefix_str([original_names[k] for k in group]) - if len(original_name_lcp) == 0: - # don't group weights if original names don't share prefix - continue - - for k in group: - if k in ret: - continue - ret[k] = group - return ret - - -def _longest_common_prefix(names: List[str]) -> str: - """ - ["abc.zfg", "abc.zef"] -> "abc." - """ - names = [n.split(".") for n in names] - m1, m2 = min(names), max(names) - ret = [a for a, b in zip(m1, m2) if a == b] - ret = ".".join(ret) + "." 
if len(ret) else "" - return ret - - -def _longest_common_prefix_str(names: List[str]) -> str: - m1, m2 = min(names), max(names) - lcp = [a for a, b in zip(m1, m2) if a == b] - lcp = "".join(lcp) - return lcp - - -def _group_str(names: List[str]) -> str: - """ - Turn "common1", "common2", "common3" into "common{1,2,3}" - """ - lcp = _longest_common_prefix_str(names) - rest = [x[len(lcp) :] for x in names] - rest = "{" + ",".join(rest) + "}" - ret = lcp + rest - - # add some simplification for BN specifically - ret = ret.replace("bn_{beta,running_mean,running_var,gamma}", "bn_*") - ret = ret.replace("bn_beta,bn_running_mean,bn_running_var,bn_gamma", "bn_*") - return ret diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/segment_anything/modeling/image_encoder.py b/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/segment_anything/modeling/image_encoder.py deleted file mode 100644 index a6ad9ad2938842308e482a05c9d35ab08db9b2c3..0000000000000000000000000000000000000000 --- a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/segment_anything/modeling/image_encoder.py +++ /dev/null @@ -1,395 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from typing import Optional, Tuple, Type - -from .common import LayerNorm2d, MLPBlock - - -# This class and its supporting functions below lightly adapted from the ViTDet backbone available at: https://github.com/facebookresearch/detectron2/blob/main/detectron2/modeling/backbone/vit.py # noqa -class ImageEncoderViT(nn.Module): - def __init__( - self, - img_size: int = 1024, - patch_size: int = 16, - in_chans: int = 3, - embed_dim: int = 768, - depth: int = 12, - num_heads: int = 12, - mlp_ratio: float = 4.0, - out_chans: int = 256, - qkv_bias: bool = True, - norm_layer: Type[nn.Module] = nn.LayerNorm, - act_layer: Type[nn.Module] = nn.GELU, - use_abs_pos: bool = True, - use_rel_pos: bool = False, - rel_pos_zero_init: bool = True, - window_size: int = 0, - global_attn_indexes: Tuple[int, ...] = (), - ) -> None: - """ - Args: - img_size (int): Input image size. - patch_size (int): Patch size. - in_chans (int): Number of input image channels. - embed_dim (int): Patch embedding dimension. - depth (int): Depth of ViT. - num_heads (int): Number of attention heads in each ViT block. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool): If True, add a learnable bias to query, key, value. - norm_layer (nn.Module): Normalization layer. - act_layer (nn.Module): Activation layer. - use_abs_pos (bool): If True, use absolute positional embeddings. - use_rel_pos (bool): If True, add relative positional embeddings to the attention map. - rel_pos_zero_init (bool): If True, zero initialize relative positional parameters. - window_size (int): Window size for window attention blocks. - global_attn_indexes (list): Indexes for blocks using global attention. - """ - super().__init__() - self.img_size = img_size - - self.patch_embed = PatchEmbed( - kernel_size=(patch_size, patch_size), - stride=(patch_size, patch_size), - in_chans=in_chans, - embed_dim=embed_dim, - ) - - self.pos_embed: Optional[nn.Parameter] = None - if use_abs_pos: - # Initialize absolute positional embedding with pretrain image size. 
- self.pos_embed = nn.Parameter( - torch.zeros(1, img_size // patch_size, img_size // patch_size, embed_dim) - ) - - self.blocks = nn.ModuleList() - for i in range(depth): - block = Block( - dim=embed_dim, - num_heads=num_heads, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - norm_layer=norm_layer, - act_layer=act_layer, - use_rel_pos=use_rel_pos, - rel_pos_zero_init=rel_pos_zero_init, - window_size=window_size if i not in global_attn_indexes else 0, - input_size=(img_size // patch_size, img_size // patch_size), - ) - self.blocks.append(block) - - self.neck = nn.Sequential( - nn.Conv2d( - embed_dim, - out_chans, - kernel_size=1, - bias=False, - ), - LayerNorm2d(out_chans), - nn.Conv2d( - out_chans, - out_chans, - kernel_size=3, - padding=1, - bias=False, - ), - LayerNorm2d(out_chans), - ) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.patch_embed(x) - if self.pos_embed is not None: - x = x + self.pos_embed - - for blk in self.blocks: - x = blk(x) - - x = self.neck(x.permute(0, 3, 1, 2)) - - return x - - -class Block(nn.Module): - """Transformer blocks with support of window attention and residual propagation blocks""" - - def __init__( - self, - dim: int, - num_heads: int, - mlp_ratio: float = 4.0, - qkv_bias: bool = True, - norm_layer: Type[nn.Module] = nn.LayerNorm, - act_layer: Type[nn.Module] = nn.GELU, - use_rel_pos: bool = False, - rel_pos_zero_init: bool = True, - window_size: int = 0, - input_size: Optional[Tuple[int, int]] = None, - ) -> None: - """ - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads in each ViT block. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool): If True, add a learnable bias to query, key, value. - norm_layer (nn.Module): Normalization layer. - act_layer (nn.Module): Activation layer. - use_rel_pos (bool): If True, add relative positional embeddings to the attention map. - rel_pos_zero_init (bool): If True, zero initialize relative positional parameters. - window_size (int): Window size for window attention blocks. If it equals 0, then - use global attention. - input_size (int or None): Input resolution for calculating the relative positional - parameter size. - """ - super().__init__() - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, - qkv_bias=qkv_bias, - use_rel_pos=use_rel_pos, - rel_pos_zero_init=rel_pos_zero_init, - input_size=input_size if window_size == 0 else (window_size, window_size), - ) - - self.norm2 = norm_layer(dim) - self.mlp = MLPBlock(embedding_dim=dim, mlp_dim=int(dim * mlp_ratio), act=act_layer) - - self.window_size = window_size - - def forward(self, x: torch.Tensor) -> torch.Tensor: - shortcut = x - x = self.norm1(x) - # Window partition - if self.window_size > 0: - H, W = x.shape[1], x.shape[2] - x, pad_hw = window_partition(x, self.window_size) - - x = self.attn(x) - # Reverse window partition - if self.window_size > 0: - x = window_unpartition(x, self.window_size, pad_hw, (H, W)) - - x = shortcut + x - x = x + self.mlp(self.norm2(x)) - - return x - - -class Attention(nn.Module): - """Multi-head Attention block with relative position embeddings.""" - - def __init__( - self, - dim: int, - num_heads: int = 8, - qkv_bias: bool = True, - use_rel_pos: bool = False, - rel_pos_zero_init: bool = True, - input_size: Optional[Tuple[int, int]] = None, - ) -> None: - """ - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. 
- qkv_bias (bool: If True, add a learnable bias to query, key, value. - rel_pos (bool): If True, add relative positional embeddings to the attention map. - rel_pos_zero_init (bool): If True, zero initialize relative positional parameters. - input_size (int or None): Input resolution for calculating the relative positional - parameter size. - """ - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = head_dim**-0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.proj = nn.Linear(dim, dim) - - self.use_rel_pos = use_rel_pos - if self.use_rel_pos: - assert ( - input_size is not None - ), "Input size must be provided if using relative positional encoding." - # initialize relative positional embeddings - self.rel_pos_h = nn.Parameter(torch.zeros(2 * input_size[0] - 1, head_dim)) - self.rel_pos_w = nn.Parameter(torch.zeros(2 * input_size[1] - 1, head_dim)) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - B, H, W, _ = x.shape - # qkv with shape (3, B, nHead, H * W, C) - qkv = self.qkv(x).reshape(B, H * W, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4) - # q, k, v with shape (B * nHead, H * W, C) - q, k, v = qkv.reshape(3, B * self.num_heads, H * W, -1).unbind(0) - - attn = (q * self.scale) @ k.transpose(-2, -1) - - if self.use_rel_pos: - attn = add_decomposed_rel_pos(attn, q, self.rel_pos_h, self.rel_pos_w, (H, W), (H, W)) - - attn = attn.softmax(dim=-1) - x = (attn @ v).view(B, self.num_heads, H, W, -1).permute(0, 2, 3, 1, 4).reshape(B, H, W, -1) - x = self.proj(x) - - return x - - -def window_partition(x: torch.Tensor, window_size: int) -> Tuple[torch.Tensor, Tuple[int, int]]: - """ - Partition into non-overlapping windows with padding if needed. - Args: - x (tensor): input tokens with [B, H, W, C]. - window_size (int): window size. - - Returns: - windows: windows after partition with [B * num_windows, window_size, window_size, C]. - (Hp, Wp): padded height and width before partition - """ - B, H, W, C = x.shape - - pad_h = (window_size - H % window_size) % window_size - pad_w = (window_size - W % window_size) % window_size - if pad_h > 0 or pad_w > 0: - x = F.pad(x, (0, 0, 0, pad_w, 0, pad_h)) - Hp, Wp = H + pad_h, W + pad_w - - x = x.view(B, Hp // window_size, window_size, Wp // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows, (Hp, Wp) - - -def window_unpartition( - windows: torch.Tensor, window_size: int, pad_hw: Tuple[int, int], hw: Tuple[int, int] -) -> torch.Tensor: - """ - Window unpartition into original sequences and removing padding. - Args: - x (tensor): input tokens with [B * num_windows, window_size, window_size, C]. - window_size (int): window size. - pad_hw (Tuple): padded height and width (Hp, Wp). - hw (Tuple): original height and width (H, W) before padding. - - Returns: - x: unpartitioned sequences with [B, H, W, C]. - """ - Hp, Wp = pad_hw - H, W = hw - B = windows.shape[0] // (Hp * Wp // window_size // window_size) - x = windows.view(B, Hp // window_size, Wp // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, Hp, Wp, -1) - - if Hp > H or Wp > W: - x = x[:, :H, :W, :].contiguous() - return x - - -def get_rel_pos(q_size: int, k_size: int, rel_pos: torch.Tensor) -> torch.Tensor: - """ - Get relative positional embeddings according to the relative positions of - query and key sizes. - Args: - q_size (int): size of query q. - k_size (int): size of key k. 
- rel_pos (Tensor): relative position embeddings (L, C). - - Returns: - Extracted positional embeddings according to relative positions. - """ - max_rel_dist = int(2 * max(q_size, k_size) - 1) - # Interpolate rel pos if needed. - if rel_pos.shape[0] != max_rel_dist: - # Interpolate rel pos. - rel_pos_resized = F.interpolate( - rel_pos.reshape(1, rel_pos.shape[0], -1).permute(0, 2, 1), - size=max_rel_dist, - mode="linear", - ) - rel_pos_resized = rel_pos_resized.reshape(-1, max_rel_dist).permute(1, 0) - else: - rel_pos_resized = rel_pos - - # Scale the coords with short length if shapes for q and k are different. - q_coords = torch.arange(q_size)[:, None] * max(k_size / q_size, 1.0) - k_coords = torch.arange(k_size)[None, :] * max(q_size / k_size, 1.0) - relative_coords = (q_coords - k_coords) + (k_size - 1) * max(q_size / k_size, 1.0) - - return rel_pos_resized[relative_coords.long()] - - -def add_decomposed_rel_pos( - attn: torch.Tensor, - q: torch.Tensor, - rel_pos_h: torch.Tensor, - rel_pos_w: torch.Tensor, - q_size: Tuple[int, int], - k_size: Tuple[int, int], -) -> torch.Tensor: - """ - Calculate decomposed Relative Positional Embeddings from :paper:`mvitv2`. - https://github.com/facebookresearch/mvit/blob/19786631e330df9f3622e5402b4a419a263a2c80/mvit/models/attention.py # noqa B950 - Args: - attn (Tensor): attention map. - q (Tensor): query q in the attention layer with shape (B, q_h * q_w, C). - rel_pos_h (Tensor): relative position embeddings (Lh, C) for height axis. - rel_pos_w (Tensor): relative position embeddings (Lw, C) for width axis. - q_size (Tuple): spatial sequence size of query q with (q_h, q_w). - k_size (Tuple): spatial sequence size of key k with (k_h, k_w). - - Returns: - attn (Tensor): attention map with added relative positional embeddings. - """ - q_h, q_w = q_size - k_h, k_w = k_size - Rh = get_rel_pos(q_h, k_h, rel_pos_h) - Rw = get_rel_pos(q_w, k_w, rel_pos_w) - - B, _, dim = q.shape - r_q = q.reshape(B, q_h, q_w, dim) - rel_h = torch.einsum("bhwc,hkc->bhwk", r_q, Rh) - rel_w = torch.einsum("bhwc,wkc->bhwk", r_q, Rw) - - attn = ( - attn.view(B, q_h, q_w, k_h, k_w) + rel_h[:, :, :, :, None] + rel_w[:, :, :, None, :] - ).view(B, q_h * q_w, k_h * k_w) - - return attn - - -class PatchEmbed(nn.Module): - """ - Image to Patch Embedding. - """ - - def __init__( - self, - kernel_size: Tuple[int, int] = (16, 16), - stride: Tuple[int, int] = (16, 16), - padding: Tuple[int, int] = (0, 0), - in_chans: int = 3, - embed_dim: int = 768, - ) -> None: - """ - Args: - kernel_size (Tuple): kernel size of the projection layer. - stride (Tuple): stride of the projection layer. - padding (Tuple): padding size of the projection layer. - in_chans (int): Number of input image channels. - embed_dim (int): embed_dim (int): Patch embedding dimension. 
- """ - super().__init__() - - self.proj = nn.Conv2d( - in_chans, embed_dim, kernel_size=kernel_size, stride=stride, padding=padding - ) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.proj(x) - # B C H W -> B H W C - x = x.permute(0, 2, 3, 1) - return x diff --git a/spaces/CikeyQI/meme-api/meme_generator/meme.py b/spaces/CikeyQI/meme-api/meme_generator/meme.py deleted file mode 100644 index 632a622ce11b312cd150dc1738fa8159bd020b12..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/meme.py +++ /dev/null @@ -1,185 +0,0 @@ -import copy -from argparse import ArgumentError, ArgumentParser -from contextvars import ContextVar -from dataclasses import dataclass, field -from io import BytesIO -from pathlib import Path -from typing import ( - IO, - Any, - Awaitable, - Callable, - Dict, - List, - Literal, - Optional, - Type, - TypeVar, - Union, - cast, -) - -from pil_utils import BuildImage -from pydantic import BaseModel, ValidationError - -from .exception import ( - ArgModelMismatch, - ArgParserExit, - ImageNumberMismatch, - OpenImageFailed, - ParserExit, - TextNumberMismatch, - TextOrNameNotEnough, -) -from .utils import is_coroutine_callable, random_image, random_text, run_sync - - -class UserInfo(BaseModel): - name: str = "" - gender: Literal["male", "female", "unknown"] = "unknown" - - -class MemeArgsModel(BaseModel): - user_infos: List[UserInfo] = [] - - -ArgsModel = TypeVar("ArgsModel", bound=MemeArgsModel) - -MemeFunction = Union[ - Callable[[List[BuildImage], List[str], ArgsModel], BytesIO], - Callable[[List[BuildImage], List[str], ArgsModel], Awaitable[BytesIO]], -] - - -parser_message: ContextVar[str] = ContextVar("parser_message") - - -class MemeArgsParser(ArgumentParser): - """`shell_like` 命令参数解析器,解析出错时不会退出程序。 - - 用法: - 用法与 `argparse.ArgumentParser` 相同, - 参考文档: [argparse](https://docs.python.org/3/library/argparse.html) - """ - - def _print_message(self, message: str, file: Optional[IO[str]] = None): - if (msg := parser_message.get(None)) is not None: - parser_message.set(msg + message) - else: - super()._print_message(message, file) - - def exit(self, status: int = 0, message: Optional[str] = None): - if message: - self._print_message(message) - raise ParserExit(status=status, error_message=parser_message.get(None)) - - -@dataclass -class MemeArgsType: - parser: MemeArgsParser - model: Type[MemeArgsModel] - instances: List[MemeArgsModel] = field(default_factory=list) - - -@dataclass -class MemeParamsType: - min_images: int = 0 - max_images: int = 0 - min_texts: int = 0 - max_texts: int = 0 - default_texts: List[str] = field(default_factory=list) - args_type: Optional[MemeArgsType] = None - - -@dataclass -class Meme: - key: str - function: MemeFunction - params_type: MemeParamsType - keywords: List[str] = field(default_factory=list) - patterns: List[str] = field(default_factory=list) - - async def __call__( - self, - *, - images: Union[List[str], List[Path], List[bytes], List[BytesIO]] = [], - texts: List[str] = [], - args: Dict[str, Any] = {}, - ) -> BytesIO: - if not ( - self.params_type.min_images <= len(images) <= self.params_type.max_images - ): - raise ImageNumberMismatch( - self.key, self.params_type.min_images, self.params_type.max_images - ) - - if not (self.params_type.min_texts <= len(texts) <= self.params_type.max_texts): - raise TextNumberMismatch( - self.key, self.params_type.min_texts, self.params_type.max_texts - ) - - if args_type := self.params_type.args_type: - args_model = args_type.model - else: - args_model = 
MemeArgsModel - - try: - model = args_model.parse_obj(args) - except ValidationError as e: - raise ArgModelMismatch(self.key, str(e)) - - imgs: List[BuildImage] = [] - try: - for image in images: - if isinstance(image, bytes): - image = BytesIO(image) - imgs.append(BuildImage.open(image)) - except Exception as e: - raise OpenImageFailed(str(e)) - - values = {"images": imgs, "texts": texts, "args": model} - - if is_coroutine_callable(self.function): - return await cast(Callable[..., Awaitable[BytesIO]], self.function)( - **values - ) - else: - return await run_sync(cast(Callable[..., BytesIO], self.function))(**values) - - def parse_args(self, args: List[str] = []) -> Dict[str, Any]: - parser = ( - copy.deepcopy(self.params_type.args_type.parser) - if self.params_type.args_type - else MemeArgsParser() - ) - parser.add_argument("texts", nargs="*", default=[]) - t = parser_message.set("") - try: - return vars(parser.parse_args(args)) - except ArgumentError as e: - raise ArgParserExit(self.key, str(e)) - except ParserExit as e: - raise ArgParserExit(self.key, e.error_message) - finally: - parser_message.reset(t) - - async def generate_preview(self, *, args: Dict[str, Any] = {}) -> BytesIO: - default_images = [random_image() for _ in range(self.params_type.min_images)] - default_texts = ( - self.params_type.default_texts.copy() - if ( - self.params_type.min_texts - <= len(self.params_type.default_texts) - <= self.params_type.max_texts - ) - else [random_text() for _ in range(self.params_type.min_texts)] - ) - - async def _generate_preview(images: List[BytesIO], texts: List[str]): - try: - return await self.__call__(images=images, texts=texts, args=args) - except TextOrNameNotEnough: - texts.append(random_text()) - return await _generate_preview(images, texts) - - return await _generate_preview(default_images, default_texts) diff --git a/spaces/ConorDY/feedback-chatbot/README.md b/spaces/ConorDY/feedback-chatbot/README.md deleted file mode 100644 index 8d75921471863d5dbfb2cdc88dd2eb0e374713b4..0000000000000000000000000000000000000000 --- a/spaces/ConorDY/feedback-chatbot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Feedback Chatbot -emoji: 📚 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.0.11 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/backbone/__init__.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/backbone/__init__.py deleted file mode 100644 index 537ebe56e683f4c665bb9b60fed9a1811645d8e5..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/backbone/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -from .backbone import build_backbone -from . import fbnet diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/logger.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/logger.py deleted file mode 100644 index 13847a3a76b481e132190ee0757b3539fb8981ae..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/logger.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-import logging -import os -import sys - - -def setup_logger(name, save_dir, distributed_rank, filename="log.txt"): - logger = logging.getLogger(name) - logger.setLevel(logging.DEBUG) - # don't log results for the non-master process - if distributed_rank > 0: - return logger - ch = logging.StreamHandler(stream=sys.stdout) - ch.setLevel(logging.DEBUG) - formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s") - ch.setFormatter(formatter) - logger.addHandler(ch) - - if save_dir: - fh = logging.FileHandler(os.path.join(save_dir, filename)) - fh.setLevel(logging.DEBUG) - fh.setFormatter(formatter) - logger.addHandler(fh) - - return logger diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImImagePlugin.py deleted file mode 100644 index 746743f658cf3fa2e0022ae049808eb68d3d1221..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImImagePlugin.py +++ /dev/null @@ -1,371 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# IFUNC IM file handling for PIL -# -# history: -# 1995-09-01 fl Created. -# 1997-01-03 fl Save palette images -# 1997-01-08 fl Added sequence support -# 1997-01-23 fl Added P and RGB save support -# 1997-05-31 fl Read floating point images -# 1997-06-22 fl Save floating point images -# 1997-08-27 fl Read and save 1-bit images -# 1998-06-25 fl Added support for RGB+LUT images -# 1998-07-02 fl Added support for YCC images -# 1998-07-15 fl Renamed offset attribute to avoid name clash -# 1998-12-29 fl Added I;16 support -# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.7) -# 2003-09-26 fl Added LA/PA support -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1995-2001 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - - -import os -import re - -from . 
import Image, ImageFile, ImagePalette - -# -------------------------------------------------------------------- -# Standard tags - -COMMENT = "Comment" -DATE = "Date" -EQUIPMENT = "Digitalization equipment" -FRAMES = "File size (no of images)" -LUT = "Lut" -NAME = "Name" -SCALE = "Scale (x,y)" -SIZE = "Image size (x*y)" -MODE = "Image type" - -TAGS = { - COMMENT: 0, - DATE: 0, - EQUIPMENT: 0, - FRAMES: 0, - LUT: 0, - NAME: 0, - SCALE: 0, - SIZE: 0, - MODE: 0, -} - -OPEN = { - # ifunc93/p3cfunc formats - "0 1 image": ("1", "1"), - "L 1 image": ("1", "1"), - "Greyscale image": ("L", "L"), - "Grayscale image": ("L", "L"), - "RGB image": ("RGB", "RGB;L"), - "RLB image": ("RGB", "RLB"), - "RYB image": ("RGB", "RLB"), - "B1 image": ("1", "1"), - "B2 image": ("P", "P;2"), - "B4 image": ("P", "P;4"), - "X 24 image": ("RGB", "RGB"), - "L 32 S image": ("I", "I;32"), - "L 32 F image": ("F", "F;32"), - # old p3cfunc formats - "RGB3 image": ("RGB", "RGB;T"), - "RYB3 image": ("RGB", "RYB;T"), - # extensions - "LA image": ("LA", "LA;L"), - "PA image": ("LA", "PA;L"), - "RGBA image": ("RGBA", "RGBA;L"), - "RGBX image": ("RGBX", "RGBX;L"), - "CMYK image": ("CMYK", "CMYK;L"), - "YCC image": ("YCbCr", "YCbCr;L"), -} - -# ifunc95 extensions -for i in ["8", "8S", "16", "16S", "32", "32F"]: - OPEN[f"L {i} image"] = ("F", f"F;{i}") - OPEN[f"L*{i} image"] = ("F", f"F;{i}") -for i in ["16", "16L", "16B"]: - OPEN[f"L {i} image"] = (f"I;{i}", f"I;{i}") - OPEN[f"L*{i} image"] = (f"I;{i}", f"I;{i}") -for i in ["32S"]: - OPEN[f"L {i} image"] = ("I", f"I;{i}") - OPEN[f"L*{i} image"] = ("I", f"I;{i}") -for i in range(2, 33): - OPEN[f"L*{i} image"] = ("F", f"F;{i}") - - -# -------------------------------------------------------------------- -# Read IM directory - -split = re.compile(rb"^([A-Za-z][^:]*):[ \t]*(.*)[ \t]*$") - - -def number(s): - try: - return int(s) - except ValueError: - return float(s) - - -## -# Image plugin for the IFUNC IM file format. - - -class ImImageFile(ImageFile.ImageFile): - format = "IM" - format_description = "IFUNC Image Memory" - _close_exclusive_fp_after_loading = False - - def _open(self): - # Quick rejection: if there's not an LF among the first - # 100 bytes, this is (probably) not a text header. - - if b"\n" not in self.fp.read(100): - msg = "not an IM file" - raise SyntaxError(msg) - self.fp.seek(0) - - n = 0 - - # Default values - self.info[MODE] = "L" - self.info[SIZE] = (512, 512) - self.info[FRAMES] = 1 - - self.rawmode = "L" - - while True: - s = self.fp.read(1) - - # Some versions of IFUNC uses \n\r instead of \r\n... - if s == b"\r": - continue - - if not s or s == b"\0" or s == b"\x1A": - break - - # FIXME: this may read whole file if not a text file - s = s + self.fp.readline() - - if len(s) > 100: - msg = "not an IM file" - raise SyntaxError(msg) - - if s[-2:] == b"\r\n": - s = s[:-2] - elif s[-1:] == b"\n": - s = s[:-1] - - try: - m = split.match(s) - except re.error as e: - msg = "not an IM file" - raise SyntaxError(msg) from e - - if m: - k, v = m.group(1, 2) - - # Don't know if this is the correct encoding, - # but a decent guess (I guess) - k = k.decode("latin-1", "replace") - v = v.decode("latin-1", "replace") - - # Convert value as appropriate - if k in [FRAMES, SCALE, SIZE]: - v = v.replace("*", ",") - v = tuple(map(number, v.split(","))) - if len(v) == 1: - v = v[0] - elif k == MODE and v in OPEN: - v, self.rawmode = OPEN[v] - - # Add to dictionary. Note that COMMENT tags are - # combined into a list of strings. 
- if k == COMMENT: - if k in self.info: - self.info[k].append(v) - else: - self.info[k] = [v] - else: - self.info[k] = v - - if k in TAGS: - n += 1 - - else: - msg = "Syntax error in IM header: " + s.decode("ascii", "replace") - raise SyntaxError(msg) - - if not n: - msg = "Not an IM file" - raise SyntaxError(msg) - - # Basic attributes - self._size = self.info[SIZE] - self.mode = self.info[MODE] - - # Skip forward to start of image data - while s and s[:1] != b"\x1A": - s = self.fp.read(1) - if not s: - msg = "File truncated" - raise SyntaxError(msg) - - if LUT in self.info: - # convert lookup table to palette or lut attribute - palette = self.fp.read(768) - greyscale = 1 # greyscale palette - linear = 1 # linear greyscale palette - for i in range(256): - if palette[i] == palette[i + 256] == palette[i + 512]: - if palette[i] != i: - linear = 0 - else: - greyscale = 0 - if self.mode in ["L", "LA", "P", "PA"]: - if greyscale: - if not linear: - self.lut = list(palette[:256]) - else: - if self.mode in ["L", "P"]: - self.mode = self.rawmode = "P" - elif self.mode in ["LA", "PA"]: - self.mode = "PA" - self.rawmode = "PA;L" - self.palette = ImagePalette.raw("RGB;L", palette) - elif self.mode == "RGB": - if not greyscale or not linear: - self.lut = list(palette) - - self.frame = 0 - - self.__offset = offs = self.fp.tell() - - self._fp = self.fp # FIXME: hack - - if self.rawmode[:2] == "F;": - # ifunc95 formats - try: - # use bit decoder (if necessary) - bits = int(self.rawmode[2:]) - if bits not in [8, 16, 32]: - self.tile = [("bit", (0, 0) + self.size, offs, (bits, 8, 3, 0, -1))] - return - except ValueError: - pass - - if self.rawmode in ["RGB;T", "RYB;T"]: - # Old LabEye/3PC files. Would be very surprised if anyone - # ever stumbled upon such a file ;-) - size = self.size[0] * self.size[1] - self.tile = [ - ("raw", (0, 0) + self.size, offs, ("G", 0, -1)), - ("raw", (0, 0) + self.size, offs + size, ("R", 0, -1)), - ("raw", (0, 0) + self.size, offs + 2 * size, ("B", 0, -1)), - ] - else: - # LabEye/IFUNC files - self.tile = [("raw", (0, 0) + self.size, offs, (self.rawmode, 0, -1))] - - @property - def n_frames(self): - return self.info[FRAMES] - - @property - def is_animated(self): - return self.info[FRAMES] > 1 - - def seek(self, frame): - if not self._seek_check(frame): - return - - self.frame = frame - - if self.mode == "1": - bits = 1 - else: - bits = 8 * len(self.mode) - - size = ((self.size[0] * bits + 7) // 8) * self.size[1] - offs = self.__offset + frame * size - - self.fp = self._fp - - self.tile = [("raw", (0, 0) + self.size, offs, (self.rawmode, 0, -1))] - - def tell(self): - return self.frame - - -# -# -------------------------------------------------------------------- -# Save IM files - - -SAVE = { - # mode: (im type, raw mode) - "1": ("0 1", "1"), - "L": ("Greyscale", "L"), - "LA": ("LA", "LA;L"), - "P": ("Greyscale", "P"), - "PA": ("LA", "PA;L"), - "I": ("L 32S", "I;32S"), - "I;16": ("L 16", "I;16"), - "I;16L": ("L 16L", "I;16L"), - "I;16B": ("L 16B", "I;16B"), - "F": ("L 32F", "F;32F"), - "RGB": ("RGB", "RGB;L"), - "RGBA": ("RGBA", "RGBA;L"), - "RGBX": ("RGBX", "RGBX;L"), - "CMYK": ("CMYK", "CMYK;L"), - "YCbCr": ("YCC", "YCbCr;L"), -} - - -def _save(im, fp, filename): - try: - image_type, rawmode = SAVE[im.mode] - except KeyError as e: - msg = f"Cannot save {im.mode} images as IM" - raise ValueError(msg) from e - - frames = im.encoderinfo.get("frames", 1) - - fp.write(f"Image type: {image_type} image\r\n".encode("ascii")) - if filename: - # Each line must be 100 characters or 
less, - # or: SyntaxError("not an IM file") - # 8 characters are used for "Name: " and "\r\n" - # Keep just the filename, ditch the potentially overlong path - name, ext = os.path.splitext(os.path.basename(filename)) - name = "".join([name[: 92 - len(ext)], ext]) - - fp.write(f"Name: {name}\r\n".encode("ascii")) - fp.write(("Image size (x*y): %d*%d\r\n" % im.size).encode("ascii")) - fp.write(f"File size (no of images): {frames}\r\n".encode("ascii")) - if im.mode in ["P", "PA"]: - fp.write(b"Lut: 1\r\n") - fp.write(b"\000" * (511 - fp.tell()) + b"\032") - if im.mode in ["P", "PA"]: - im_palette = im.im.getpalette("RGB", "RGB;L") - colors = len(im_palette) // 3 - palette = b"" - for i in range(3): - palette += im_palette[colors * i : colors * (i + 1)] - palette += b"\x00" * (256 - colors) - fp.write(palette) # 768 bytes - ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, 0, -1))]) - - -# -# -------------------------------------------------------------------- -# Registry - - -Image.register_open(ImImageFile.format, ImImageFile) -Image.register_save(ImImageFile.format, _save) - -Image.register_extension(ImImageFile.format, ".im") diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/fontBuilder.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/fontBuilder.py deleted file mode 100644 index 8f83ea80034c431b39aa38b2fc28b67957c71fb9..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/fontBuilder.py +++ /dev/null @@ -1,993 +0,0 @@ -__all__ = ["FontBuilder"] - -""" -This module is *experimental*, meaning it still may evolve and change. - -The `FontBuilder` class is a convenient helper to construct working TTF or -OTF fonts from scratch. - -Note that the various setup methods cannot be called in arbitrary order, -due to various interdependencies between OpenType tables. Here is an order -that works: - - fb = FontBuilder(...) - fb.setupGlyphOrder(...) - fb.setupCharacterMap(...) - fb.setupGlyf(...) --or-- fb.setupCFF(...) - fb.setupHorizontalMetrics(...) - fb.setupHorizontalHeader() - fb.setupNameTable(...) - fb.setupOS2() - fb.addOpenTypeFeatures(...) - fb.setupPost() - fb.save(...) - -Here is how to build a minimal TTF: - -```python -from fontTools.fontBuilder import FontBuilder -from fontTools.pens.ttGlyphPen import TTGlyphPen - - -def drawTestGlyph(pen): - pen.moveTo((100, 100)) - pen.lineTo((100, 1000)) - pen.qCurveTo((200, 900), (400, 900), (500, 1000)) - pen.lineTo((500, 100)) - pen.closePath() - - -fb = FontBuilder(1024, isTTF=True) -fb.setupGlyphOrder([".notdef", ".null", "space", "A", "a"]) -fb.setupCharacterMap({32: "space", 65: "A", 97: "a"}) -advanceWidths = {".notdef": 600, "space": 500, "A": 600, "a": 600, ".null": 0} - -familyName = "HelloTestFont" -styleName = "TotallyNormal" -version = "0.1" - -nameStrings = dict( - familyName=dict(en=familyName, nl="HalloTestFont"), - styleName=dict(en=styleName, nl="TotaalNormaal"), - uniqueFontIdentifier="fontBuilder: " + familyName + "." 
+ styleName, - fullName=familyName + "-" + styleName, - psName=familyName + "-" + styleName, - version="Version " + version, -) - -pen = TTGlyphPen(None) -drawTestGlyph(pen) -glyph = pen.glyph() -glyphs = {".notdef": glyph, "space": glyph, "A": glyph, "a": glyph, ".null": glyph} -fb.setupGlyf(glyphs) -metrics = {} -glyphTable = fb.font["glyf"] -for gn, advanceWidth in advanceWidths.items(): - metrics[gn] = (advanceWidth, glyphTable[gn].xMin) -fb.setupHorizontalMetrics(metrics) -fb.setupHorizontalHeader(ascent=824, descent=-200) -fb.setupNameTable(nameStrings) -fb.setupOS2(sTypoAscender=824, usWinAscent=824, usWinDescent=200) -fb.setupPost() -fb.save("test.ttf") -``` - -And here's how to build a minimal OTF: - -```python -from fontTools.fontBuilder import FontBuilder -from fontTools.pens.t2CharStringPen import T2CharStringPen - - -def drawTestGlyph(pen): - pen.moveTo((100, 100)) - pen.lineTo((100, 1000)) - pen.curveTo((200, 900), (400, 900), (500, 1000)) - pen.lineTo((500, 100)) - pen.closePath() - - -fb = FontBuilder(1024, isTTF=False) -fb.setupGlyphOrder([".notdef", ".null", "space", "A", "a"]) -fb.setupCharacterMap({32: "space", 65: "A", 97: "a"}) -advanceWidths = {".notdef": 600, "space": 500, "A": 600, "a": 600, ".null": 0} - -familyName = "HelloTestFont" -styleName = "TotallyNormal" -version = "0.1" - -nameStrings = dict( - familyName=dict(en=familyName, nl="HalloTestFont"), - styleName=dict(en=styleName, nl="TotaalNormaal"), - uniqueFontIdentifier="fontBuilder: " + familyName + "." + styleName, - fullName=familyName + "-" + styleName, - psName=familyName + "-" + styleName, - version="Version " + version, -) - -pen = T2CharStringPen(600, None) -drawTestGlyph(pen) -charString = pen.getCharString() -charStrings = { - ".notdef": charString, - "space": charString, - "A": charString, - "a": charString, - ".null": charString, -} -fb.setupCFF(nameStrings["psName"], {"FullName": nameStrings["psName"]}, charStrings, {}) -lsb = {gn: cs.calcBounds(None)[0] for gn, cs in charStrings.items()} -metrics = {} -for gn, advanceWidth in advanceWidths.items(): - metrics[gn] = (advanceWidth, lsb[gn]) -fb.setupHorizontalMetrics(metrics) -fb.setupHorizontalHeader(ascent=824, descent=200) -fb.setupNameTable(nameStrings) -fb.setupOS2(sTypoAscender=824, usWinAscent=824, usWinDescent=200) -fb.setupPost() -fb.save("test.otf") -``` -""" - -from .ttLib import TTFont, newTable -from .ttLib.tables._c_m_a_p import cmap_classes -from .ttLib.tables._g_l_y_f import flagCubic -from .ttLib.tables.O_S_2f_2 import Panose -from .misc.timeTools import timestampNow -import struct -from collections import OrderedDict - - -_headDefaults = dict( - tableVersion=1.0, - fontRevision=1.0, - checkSumAdjustment=0, - magicNumber=0x5F0F3CF5, - flags=0x0003, - unitsPerEm=1000, - created=0, - modified=0, - xMin=0, - yMin=0, - xMax=0, - yMax=0, - macStyle=0, - lowestRecPPEM=3, - fontDirectionHint=2, - indexToLocFormat=0, - glyphDataFormat=0, -) - -_maxpDefaultsTTF = dict( - tableVersion=0x00010000, - numGlyphs=0, - maxPoints=0, - maxContours=0, - maxCompositePoints=0, - maxCompositeContours=0, - maxZones=2, - maxTwilightPoints=0, - maxStorage=0, - maxFunctionDefs=0, - maxInstructionDefs=0, - maxStackElements=0, - maxSizeOfInstructions=0, - maxComponentElements=0, - maxComponentDepth=0, -) -_maxpDefaultsOTF = dict( - tableVersion=0x00005000, - numGlyphs=0, -) - -_postDefaults = dict( - formatType=3.0, - italicAngle=0, - underlinePosition=0, - underlineThickness=0, - isFixedPitch=0, - minMemType42=0, - maxMemType42=0, - minMemType1=0, - 
maxMemType1=0, -) - -_hheaDefaults = dict( - tableVersion=0x00010000, - ascent=0, - descent=0, - lineGap=0, - advanceWidthMax=0, - minLeftSideBearing=0, - minRightSideBearing=0, - xMaxExtent=0, - caretSlopeRise=1, - caretSlopeRun=0, - caretOffset=0, - reserved0=0, - reserved1=0, - reserved2=0, - reserved3=0, - metricDataFormat=0, - numberOfHMetrics=0, -) - -_vheaDefaults = dict( - tableVersion=0x00010000, - ascent=0, - descent=0, - lineGap=0, - advanceHeightMax=0, - minTopSideBearing=0, - minBottomSideBearing=0, - yMaxExtent=0, - caretSlopeRise=0, - caretSlopeRun=0, - reserved0=0, - reserved1=0, - reserved2=0, - reserved3=0, - reserved4=0, - metricDataFormat=0, - numberOfVMetrics=0, -) - -_nameIDs = dict( - copyright=0, - familyName=1, - styleName=2, - uniqueFontIdentifier=3, - fullName=4, - version=5, - psName=6, - trademark=7, - manufacturer=8, - designer=9, - description=10, - vendorURL=11, - designerURL=12, - licenseDescription=13, - licenseInfoURL=14, - # reserved = 15, - typographicFamily=16, - typographicSubfamily=17, - compatibleFullName=18, - sampleText=19, - postScriptCIDFindfontName=20, - wwsFamilyName=21, - wwsSubfamilyName=22, - lightBackgroundPalette=23, - darkBackgroundPalette=24, - variationsPostScriptNamePrefix=25, -) - -# to insert in setupNameTable doc string: -# print("\n".join(("%s (nameID %s)" % (k, v)) for k, v in sorted(_nameIDs.items(), key=lambda x: x[1]))) - -_panoseDefaults = Panose() - -_OS2Defaults = dict( - version=3, - xAvgCharWidth=0, - usWeightClass=400, - usWidthClass=5, - fsType=0x0004, # default: Preview & Print embedding - ySubscriptXSize=0, - ySubscriptYSize=0, - ySubscriptXOffset=0, - ySubscriptYOffset=0, - ySuperscriptXSize=0, - ySuperscriptYSize=0, - ySuperscriptXOffset=0, - ySuperscriptYOffset=0, - yStrikeoutSize=0, - yStrikeoutPosition=0, - sFamilyClass=0, - panose=_panoseDefaults, - ulUnicodeRange1=0, - ulUnicodeRange2=0, - ulUnicodeRange3=0, - ulUnicodeRange4=0, - achVendID="????", - fsSelection=0, - usFirstCharIndex=0, - usLastCharIndex=0, - sTypoAscender=0, - sTypoDescender=0, - sTypoLineGap=0, - usWinAscent=0, - usWinDescent=0, - ulCodePageRange1=0, - ulCodePageRange2=0, - sxHeight=0, - sCapHeight=0, - usDefaultChar=0, # .notdef - usBreakChar=32, # space - usMaxContext=0, - usLowerOpticalPointSize=0, - usUpperOpticalPointSize=0, -) - - -class FontBuilder(object): - def __init__(self, unitsPerEm=None, font=None, isTTF=True, glyphDataFormat=0): - """Initialize a FontBuilder instance. - - If the `font` argument is not given, a new `TTFont` will be - constructed, and `unitsPerEm` must be given. If `isTTF` is True, - the font will be a glyf-based TTF; if `isTTF` is False it will be - a CFF-based OTF. - - The `glyphDataFormat` argument corresponds to the `head` table field - that defines the format of the TrueType `glyf` table (default=0). - TrueType glyphs historically can only contain quadratic splines and static - components, but there's a proposal to add support for cubic Bezier curves as well - as variable composites/components at - https://github.com/harfbuzz/boring-expansion-spec/blob/main/glyf1.md - You can experiment with the new features by setting `glyphDataFormat` to 1. - A ValueError is raised if `glyphDataFormat` is left at 0 but glyphs are added - that contain cubic splines or varcomposites. This is to prevent accidentally - creating fonts that are incompatible with existing TrueType implementations. - - If `font` is given, it must be a `TTFont` instance and `unitsPerEm` - must _not_ be given. 
The `isTTF` and `glyphDataFormat` arguments will be ignored. - """ - if font is None: - self.font = TTFont(recalcTimestamp=False) - self.isTTF = isTTF - now = timestampNow() - assert unitsPerEm is not None - self.setupHead( - unitsPerEm=unitsPerEm, - create=now, - modified=now, - glyphDataFormat=glyphDataFormat, - ) - self.setupMaxp() - else: - assert unitsPerEm is None - self.font = font - self.isTTF = "glyf" in font - - def save(self, file): - """Save the font. The 'file' argument can be either a pathname or a - writable file object. - """ - self.font.save(file) - - def _initTableWithValues(self, tableTag, defaults, values): - table = self.font[tableTag] = newTable(tableTag) - for k, v in defaults.items(): - setattr(table, k, v) - for k, v in values.items(): - setattr(table, k, v) - return table - - def _updateTableWithValues(self, tableTag, values): - table = self.font[tableTag] - for k, v in values.items(): - setattr(table, k, v) - - def setupHead(self, **values): - """Create a new `head` table and initialize it with default values, - which can be overridden by keyword arguments. - """ - self._initTableWithValues("head", _headDefaults, values) - - def updateHead(self, **values): - """Update the head table with the fields and values passed as - keyword arguments. - """ - self._updateTableWithValues("head", values) - - def setupGlyphOrder(self, glyphOrder): - """Set the glyph order for the font.""" - self.font.setGlyphOrder(glyphOrder) - - def setupCharacterMap(self, cmapping, uvs=None, allowFallback=False): - """Build the `cmap` table for the font. The `cmapping` argument should - be a dict mapping unicode code points as integers to glyph names. - - The `uvs` argument, when passed, must be a list of tuples, describing - Unicode Variation Sequences. These tuples have three elements: - (unicodeValue, variationSelector, glyphName) - `unicodeValue` and `variationSelector` are integer code points. - `glyphName` may be None, to indicate this is the default variation. - Text processors will then use the cmap to find the glyph name. - Each Unicode Variation Sequence should be an officially supported - sequence, but this is not policed. - """ - subTables = [] - highestUnicode = max(cmapping) if cmapping else 0 - if highestUnicode > 0xFFFF: - cmapping_3_1 = dict((k, v) for k, v in cmapping.items() if k < 0x10000) - subTable_3_10 = buildCmapSubTable(cmapping, 12, 3, 10) - subTables.append(subTable_3_10) - else: - cmapping_3_1 = cmapping - format = 4 - subTable_3_1 = buildCmapSubTable(cmapping_3_1, format, 3, 1) - try: - subTable_3_1.compile(self.font) - except struct.error: - # format 4 overflowed, fall back to format 12 - if not allowFallback: - raise ValueError( - "cmap format 4 subtable overflowed; sort glyph order by unicode to fix." 
- ) - format = 12 - subTable_3_1 = buildCmapSubTable(cmapping_3_1, format, 3, 1) - subTables.append(subTable_3_1) - subTable_0_3 = buildCmapSubTable(cmapping_3_1, format, 0, 3) - subTables.append(subTable_0_3) - - if uvs is not None: - uvsDict = {} - for unicodeValue, variationSelector, glyphName in uvs: - if cmapping.get(unicodeValue) == glyphName: - # this is a default variation - glyphName = None - if variationSelector not in uvsDict: - uvsDict[variationSelector] = [] - uvsDict[variationSelector].append((unicodeValue, glyphName)) - uvsSubTable = buildCmapSubTable({}, 14, 0, 5) - uvsSubTable.uvsDict = uvsDict - subTables.append(uvsSubTable) - - self.font["cmap"] = newTable("cmap") - self.font["cmap"].tableVersion = 0 - self.font["cmap"].tables = subTables - - def setupNameTable(self, nameStrings, windows=True, mac=True): - """Create the `name` table for the font. The `nameStrings` argument must - be a dict, mapping nameIDs or descriptive names for the nameIDs to name - record values. A value is either a string, or a dict, mapping language codes - to strings, to allow localized name table entries. - - By default, both Windows (platformID=3) and Macintosh (platformID=1) name - records are added, unless any of `windows` or `mac` arguments is False. - - The following descriptive names are available for nameIDs: - - copyright (nameID 0) - familyName (nameID 1) - styleName (nameID 2) - uniqueFontIdentifier (nameID 3) - fullName (nameID 4) - version (nameID 5) - psName (nameID 6) - trademark (nameID 7) - manufacturer (nameID 8) - designer (nameID 9) - description (nameID 10) - vendorURL (nameID 11) - designerURL (nameID 12) - licenseDescription (nameID 13) - licenseInfoURL (nameID 14) - typographicFamily (nameID 16) - typographicSubfamily (nameID 17) - compatibleFullName (nameID 18) - sampleText (nameID 19) - postScriptCIDFindfontName (nameID 20) - wwsFamilyName (nameID 21) - wwsSubfamilyName (nameID 22) - lightBackgroundPalette (nameID 23) - darkBackgroundPalette (nameID 24) - variationsPostScriptNamePrefix (nameID 25) - """ - nameTable = self.font["name"] = newTable("name") - nameTable.names = [] - - for nameName, nameValue in nameStrings.items(): - if isinstance(nameName, int): - nameID = nameName - else: - nameID = _nameIDs[nameName] - if isinstance(nameValue, str): - nameValue = dict(en=nameValue) - nameTable.addMultilingualName( - nameValue, ttFont=self.font, nameID=nameID, windows=windows, mac=mac - ) - - def setupOS2(self, **values): - """Create a new `OS/2` table and initialize it with default values, - which can be overridden by keyword arguments. 
- """ - self._initTableWithValues("OS/2", _OS2Defaults, values) - if "xAvgCharWidth" not in values: - assert ( - "hmtx" in self.font - ), "the 'hmtx' table must be setup before the 'OS/2' table" - self.font["OS/2"].recalcAvgCharWidth(self.font) - if not ( - "ulUnicodeRange1" in values - or "ulUnicodeRange2" in values - or "ulUnicodeRange3" in values - or "ulUnicodeRange3" in values - ): - assert ( - "cmap" in self.font - ), "the 'cmap' table must be setup before the 'OS/2' table" - self.font["OS/2"].recalcUnicodeRanges(self.font) - - def setupCFF(self, psName, fontInfo, charStringsDict, privateDict): - from .cffLib import ( - CFFFontSet, - TopDictIndex, - TopDict, - CharStrings, - GlobalSubrsIndex, - PrivateDict, - ) - - assert not self.isTTF - self.font.sfntVersion = "OTTO" - fontSet = CFFFontSet() - fontSet.major = 1 - fontSet.minor = 0 - fontSet.otFont = self.font - fontSet.fontNames = [psName] - fontSet.topDictIndex = TopDictIndex() - - globalSubrs = GlobalSubrsIndex() - fontSet.GlobalSubrs = globalSubrs - private = PrivateDict() - for key, value in privateDict.items(): - setattr(private, key, value) - fdSelect = None - fdArray = None - - topDict = TopDict() - topDict.charset = self.font.getGlyphOrder() - topDict.Private = private - topDict.GlobalSubrs = fontSet.GlobalSubrs - for key, value in fontInfo.items(): - setattr(topDict, key, value) - if "FontMatrix" not in fontInfo: - scale = 1 / self.font["head"].unitsPerEm - topDict.FontMatrix = [scale, 0, 0, scale, 0, 0] - - charStrings = CharStrings( - None, topDict.charset, globalSubrs, private, fdSelect, fdArray - ) - for glyphName, charString in charStringsDict.items(): - charString.private = private - charString.globalSubrs = globalSubrs - charStrings[glyphName] = charString - topDict.CharStrings = charStrings - - fontSet.topDictIndex.append(topDict) - - self.font["CFF "] = newTable("CFF ") - self.font["CFF "].cff = fontSet - - def setupCFF2(self, charStringsDict, fdArrayList=None, regions=None): - from .cffLib import ( - CFFFontSet, - TopDictIndex, - TopDict, - CharStrings, - GlobalSubrsIndex, - PrivateDict, - FDArrayIndex, - FontDict, - ) - - assert not self.isTTF - self.font.sfntVersion = "OTTO" - fontSet = CFFFontSet() - fontSet.major = 2 - fontSet.minor = 0 - - cff2GetGlyphOrder = self.font.getGlyphOrder - fontSet.topDictIndex = TopDictIndex(None, cff2GetGlyphOrder, None) - - globalSubrs = GlobalSubrsIndex() - fontSet.GlobalSubrs = globalSubrs - - if fdArrayList is None: - fdArrayList = [{}] - fdSelect = None - fdArray = FDArrayIndex() - fdArray.strings = None - fdArray.GlobalSubrs = globalSubrs - for privateDict in fdArrayList: - fontDict = FontDict() - fontDict.setCFF2(True) - private = PrivateDict() - for key, value in privateDict.items(): - setattr(private, key, value) - fontDict.Private = private - fdArray.append(fontDict) - - topDict = TopDict() - topDict.cff2GetGlyphOrder = cff2GetGlyphOrder - topDict.FDArray = fdArray - scale = 1 / self.font["head"].unitsPerEm - topDict.FontMatrix = [scale, 0, 0, scale, 0, 0] - - private = fdArray[0].Private - charStrings = CharStrings(None, None, globalSubrs, private, fdSelect, fdArray) - for glyphName, charString in charStringsDict.items(): - charString.private = private - charString.globalSubrs = globalSubrs - charStrings[glyphName] = charString - topDict.CharStrings = charStrings - - fontSet.topDictIndex.append(topDict) - - self.font["CFF2"] = newTable("CFF2") - self.font["CFF2"].cff = fontSet - - if regions: - self.setupCFF2Regions(regions) - - def setupCFF2Regions(self, regions): - 
from .varLib.builder import buildVarRegionList, buildVarData, buildVarStore - from .cffLib import VarStoreData - - assert "fvar" in self.font, "fvar must to be set up first" - assert "CFF2" in self.font, "CFF2 must to be set up first" - axisTags = [a.axisTag for a in self.font["fvar"].axes] - varRegionList = buildVarRegionList(regions, axisTags) - varData = buildVarData(list(range(len(regions))), None, optimize=False) - varStore = buildVarStore(varRegionList, [varData]) - vstore = VarStoreData(otVarStore=varStore) - topDict = self.font["CFF2"].cff.topDictIndex[0] - topDict.VarStore = vstore - for fontDict in topDict.FDArray: - fontDict.Private.vstore = vstore - - def setupGlyf(self, glyphs, calcGlyphBounds=True, validateGlyphFormat=True): - """Create the `glyf` table from a dict, that maps glyph names - to `fontTools.ttLib.tables._g_l_y_f.Glyph` objects, for example - as made by `fontTools.pens.ttGlyphPen.TTGlyphPen`. - - If `calcGlyphBounds` is True, the bounds of all glyphs will be - calculated. Only pass False if your glyph objects already have - their bounding box values set. - - If `validateGlyphFormat` is True, raise ValueError if any of the glyphs contains - cubic curves or is a variable composite but head.glyphDataFormat=0. - Set it to False to skip the check if you know in advance all the glyphs are - compatible with the specified glyphDataFormat. - """ - assert self.isTTF - - if validateGlyphFormat and self.font["head"].glyphDataFormat == 0: - for name, g in glyphs.items(): - if g.isVarComposite(): - raise ValueError( - f"Glyph {name!r} is a variable composite, but glyphDataFormat=0" - ) - elif g.numberOfContours > 0 and any(f & flagCubic for f in g.flags): - raise ValueError( - f"Glyph {name!r} has cubic Bezier outlines, but glyphDataFormat=0; " - "either convert to quadratics with cu2qu or set glyphDataFormat=1." - ) - - self.font["loca"] = newTable("loca") - self.font["glyf"] = newTable("glyf") - self.font["glyf"].glyphs = glyphs - if hasattr(self.font, "glyphOrder"): - self.font["glyf"].glyphOrder = self.font.glyphOrder - if calcGlyphBounds: - self.calcGlyphBounds() - - def setupFvar(self, axes, instances): - """Adds an font variations table to the font. - - Args: - axes (list): See below. - instances (list): See below. - - ``axes`` should be a list of axes, with each axis either supplied as - a py:class:`.designspaceLib.AxisDescriptor` object, or a tuple in the - format ```tupletag, minValue, defaultValue, maxValue, name``. - The ``name`` is either a string, or a dict, mapping language codes - to strings, to allow localized name table entries. - - ```instances`` should be a list of instances, with each instance either - supplied as a py:class:`.designspaceLib.InstanceDescriptor` object, or a - dict with keys ``location`` (mapping of axis tags to float values), - ``stylename`` and (optionally) ``postscriptfontname``. - The ``stylename`` is either a string, or a dict, mapping language codes - to strings, to allow localized name table entries. - """ - - addFvar(self.font, axes, instances) - - def setupAvar(self, axes, mappings=None): - """Adds an axis variations table to the font. - - Args: - axes (list): A list of py:class:`.designspaceLib.AxisDescriptor` objects. 
- """ - from .varLib import _add_avar - - if "fvar" not in self.font: - raise KeyError("'fvar' table is missing; can't add 'avar'.") - - axisTags = [axis.axisTag for axis in self.font["fvar"].axes] - axes = OrderedDict(enumerate(axes)) # Only values are used - _add_avar(self.font, axes, mappings, axisTags) - - def setupGvar(self, variations): - gvar = self.font["gvar"] = newTable("gvar") - gvar.version = 1 - gvar.reserved = 0 - gvar.variations = variations - - def calcGlyphBounds(self): - """Calculate the bounding boxes of all glyphs in the `glyf` table. - This is usually not called explicitly by client code. - """ - glyphTable = self.font["glyf"] - for glyph in glyphTable.glyphs.values(): - glyph.recalcBounds(glyphTable) - - def setupHorizontalMetrics(self, metrics): - """Create a new `hmtx` table, for horizontal metrics. - - The `metrics` argument must be a dict, mapping glyph names to - `(width, leftSidebearing)` tuples. - """ - self.setupMetrics("hmtx", metrics) - - def setupVerticalMetrics(self, metrics): - """Create a new `vmtx` table, for horizontal metrics. - - The `metrics` argument must be a dict, mapping glyph names to - `(height, topSidebearing)` tuples. - """ - self.setupMetrics("vmtx", metrics) - - def setupMetrics(self, tableTag, metrics): - """See `setupHorizontalMetrics()` and `setupVerticalMetrics()`.""" - assert tableTag in ("hmtx", "vmtx") - mtxTable = self.font[tableTag] = newTable(tableTag) - roundedMetrics = {} - for gn in metrics: - w, lsb = metrics[gn] - roundedMetrics[gn] = int(round(w)), int(round(lsb)) - mtxTable.metrics = roundedMetrics - - def setupHorizontalHeader(self, **values): - """Create a new `hhea` table initialize it with default values, - which can be overridden by keyword arguments. - """ - self._initTableWithValues("hhea", _hheaDefaults, values) - - def setupVerticalHeader(self, **values): - """Create a new `vhea` table initialize it with default values, - which can be overridden by keyword arguments. - """ - self._initTableWithValues("vhea", _vheaDefaults, values) - - def setupVerticalOrigins(self, verticalOrigins, defaultVerticalOrigin=None): - """Create a new `VORG` table. The `verticalOrigins` argument must be - a dict, mapping glyph names to vertical origin values. - - The `defaultVerticalOrigin` argument should be the most common vertical - origin value. If omitted, this value will be derived from the actual - values in the `verticalOrigins` argument. - """ - if defaultVerticalOrigin is None: - # find the most frequent vorg value - bag = {} - for gn in verticalOrigins: - vorg = verticalOrigins[gn] - if vorg not in bag: - bag[vorg] = 1 - else: - bag[vorg] += 1 - defaultVerticalOrigin = sorted( - bag, key=lambda vorg: bag[vorg], reverse=True - )[0] - self._initTableWithValues( - "VORG", - {}, - dict(VOriginRecords={}, defaultVertOriginY=defaultVerticalOrigin), - ) - vorgTable = self.font["VORG"] - vorgTable.majorVersion = 1 - vorgTable.minorVersion = 0 - for gn in verticalOrigins: - vorgTable[gn] = verticalOrigins[gn] - - def setupPost(self, keepGlyphNames=True, **values): - """Create a new `post` table and initialize it with default values, - which can be overridden by keyword arguments. - """ - isCFF2 = "CFF2" in self.font - postTable = self._initTableWithValues("post", _postDefaults, values) - if (self.isTTF or isCFF2) and keepGlyphNames: - postTable.formatType = 2.0 - postTable.extraNames = [] - postTable.mapping = {} - else: - postTable.formatType = 3.0 - - def setupMaxp(self): - """Create a new `maxp` table. 
This is called implicitly by FontBuilder - itself and is usually not called by client code. - """ - if self.isTTF: - defaults = _maxpDefaultsTTF - else: - defaults = _maxpDefaultsOTF - self._initTableWithValues("maxp", defaults, {}) - - def setupDummyDSIG(self): - """This adds an empty DSIG table to the font to make some MS applications - happy. This does not properly sign the font. - """ - values = dict( - ulVersion=1, - usFlag=0, - usNumSigs=0, - signatureRecords=[], - ) - self._initTableWithValues("DSIG", {}, values) - - def addOpenTypeFeatures(self, features, filename=None, tables=None, debug=False): - """Add OpenType features to the font from a string containing - Feature File syntax. - - The `filename` argument is used in error messages and to determine - where to look for "include" files. - - The optional `tables` argument can be a list of OTL tables tags to - build, allowing the caller to only build selected OTL tables. See - `fontTools.feaLib` for details. - - The optional `debug` argument controls whether to add source debugging - information to the font in the `Debg` table. - """ - from .feaLib.builder import addOpenTypeFeaturesFromString - - addOpenTypeFeaturesFromString( - self.font, features, filename=filename, tables=tables, debug=debug - ) - - def addFeatureVariations(self, conditionalSubstitutions, featureTag="rvrn"): - """Add conditional substitutions to a Variable Font. - - See `fontTools.varLib.featureVars.addFeatureVariations`. - """ - from .varLib import featureVars - - if "fvar" not in self.font: - raise KeyError("'fvar' table is missing; can't add FeatureVariations.") - - featureVars.addFeatureVariations( - self.font, conditionalSubstitutions, featureTag=featureTag - ) - - def setupCOLR( - self, - colorLayers, - version=None, - varStore=None, - varIndexMap=None, - clipBoxes=None, - allowLayerReuse=True, - ): - """Build new COLR table using color layers dictionary. - - Cf. `fontTools.colorLib.builder.buildCOLR`. - """ - from fontTools.colorLib.builder import buildCOLR - - glyphMap = self.font.getReverseGlyphMap() - self.font["COLR"] = buildCOLR( - colorLayers, - version=version, - glyphMap=glyphMap, - varStore=varStore, - varIndexMap=varIndexMap, - clipBoxes=clipBoxes, - allowLayerReuse=allowLayerReuse, - ) - - def setupCPAL( - self, - palettes, - paletteTypes=None, - paletteLabels=None, - paletteEntryLabels=None, - ): - """Build new CPAL table using list of palettes. - - Optionally build CPAL v1 table using paletteTypes, paletteLabels and - paletteEntryLabels. - - Cf. `fontTools.colorLib.builder.buildCPAL`. - """ - from fontTools.colorLib.builder import buildCPAL - - self.font["CPAL"] = buildCPAL( - palettes, - paletteTypes=paletteTypes, - paletteLabels=paletteLabels, - paletteEntryLabels=paletteEntryLabels, - nameTable=self.font.get("name"), - ) - - def setupStat(self, axes, locations=None, elidedFallbackName=2): - """Build a new 'STAT' table. - - See `fontTools.otlLib.builder.buildStatTable` for details about - the arguments. 
- """ - from .otlLib.builder import buildStatTable - - buildStatTable(self.font, axes, locations, elidedFallbackName) - - -def buildCmapSubTable(cmapping, format, platformID, platEncID): - subTable = cmap_classes[format](format) - subTable.cmap = cmapping - subTable.platformID = platformID - subTable.platEncID = platEncID - subTable.language = 0 - return subTable - - -def addFvar(font, axes, instances): - from .ttLib.tables._f_v_a_r import Axis, NamedInstance - - assert axes - - fvar = newTable("fvar") - nameTable = font["name"] - - for axis_def in axes: - axis = Axis() - - if isinstance(axis_def, tuple): - ( - axis.axisTag, - axis.minValue, - axis.defaultValue, - axis.maxValue, - name, - ) = axis_def - else: - (axis.axisTag, axis.minValue, axis.defaultValue, axis.maxValue, name) = ( - axis_def.tag, - axis_def.minimum, - axis_def.default, - axis_def.maximum, - axis_def.name, - ) - if axis_def.hidden: - axis.flags = 0x0001 # HIDDEN_AXIS - - if isinstance(name, str): - name = dict(en=name) - - axis.axisNameID = nameTable.addMultilingualName(name, ttFont=font) - fvar.axes.append(axis) - - for instance in instances: - if isinstance(instance, dict): - coordinates = instance["location"] - name = instance["stylename"] - psname = instance.get("postscriptfontname") - else: - coordinates = instance.location - name = instance.localisedStyleName or instance.styleName - psname = instance.postScriptFontName - - if isinstance(name, str): - name = dict(en=name) - - inst = NamedInstance() - inst.subfamilyNameID = nameTable.addMultilingualName(name, ttFont=font) - if psname is not None: - inst.postscriptNameID = nameTable.addName(psname) - inst.coordinates = coordinates - fvar.instances.append(inst) - - font["fvar"] = fvar diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/S_I_N_G_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/S_I_N_G_.py deleted file mode 100644 index 7420da7e5dcec81b835ab0e8e2c775dbce860cbd..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/S_I_N_G_.py +++ /dev/null @@ -1,93 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import bytechr, byteord, tobytes, tostr, safeEval -from . import DefaultTable - -SINGFormat = """ - > # big endian - tableVersionMajor: H - tableVersionMinor: H - glyphletVersion: H - permissions: h - mainGID: H - unitsPerEm: H - vertAdvance: h - vertOrigin: h - uniqueName: 28s - METAMD5: 16s - nameLength: 1s -""" -# baseGlyphName is a byte string which follows the record above. - - -class table_S_I_N_G_(DefaultTable.DefaultTable): - - dependencies = [] - - def decompile(self, data, ttFont): - dummy, rest = sstruct.unpack2(SINGFormat, data, self) - self.uniqueName = self.decompileUniqueName(self.uniqueName) - self.nameLength = byteord(self.nameLength) - assert len(rest) == self.nameLength - self.baseGlyphName = tostr(rest) - - rawMETAMD5 = self.METAMD5 - self.METAMD5 = "[" + hex(byteord(self.METAMD5[0])) - for char in rawMETAMD5[1:]: - self.METAMD5 = self.METAMD5 + ", " + hex(byteord(char)) - self.METAMD5 = self.METAMD5 + "]" - - def decompileUniqueName(self, data): - name = "" - for char in data: - val = byteord(char) - if val == 0: - break - if (val > 31) or (val < 128): - name += chr(val) - else: - octString = oct(val) - if len(octString) > 3: - octString = octString[1:] # chop off that leading zero. 
- elif len(octString) < 3: - octString.zfill(3) - name += "\\" + octString - return name - - def compile(self, ttFont): - d = self.__dict__.copy() - d["nameLength"] = bytechr(len(self.baseGlyphName)) - d["uniqueName"] = self.compilecompileUniqueName(self.uniqueName, 28) - METAMD5List = eval(self.METAMD5) - d["METAMD5"] = b"" - for val in METAMD5List: - d["METAMD5"] += bytechr(val) - assert len(d["METAMD5"]) == 16, "Failed to pack 16 byte MD5 hash in SING table" - data = sstruct.pack(SINGFormat, d) - data = data + tobytes(self.baseGlyphName) - return data - - def compilecompileUniqueName(self, name, length): - nameLen = len(name) - if length <= nameLen: - name = name[: length - 1] + "\000" - else: - name += (nameLen - length) * "\000" - return name - - def toXML(self, writer, ttFont): - writer.comment("Most of this table will be recalculated by the compiler") - writer.newline() - formatstring, names, fixes = sstruct.getformat(SINGFormat) - for name in names: - value = getattr(self, name) - writer.simpletag(name, value=value) - writer.newline() - writer.simpletag("baseGlyphName", value=self.baseGlyphName) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - value = attrs["value"] - if name in ["uniqueName", "METAMD5", "baseGlyphName"]: - setattr(self, name, value) - else: - setattr(self, name, safeEval(value)) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/r-3ca97919.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/r-3ca97919.js deleted file mode 100644 index e460c951763f569906751f34aed4265f5d719d36..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/r-3ca97919.js +++ /dev/null @@ -1,2 +0,0 @@ -function f(e){for(var n={},r=0;r=!&|~$:]/,t;function p(e,n){t=null;var r=e.next();if(r=="#")return e.skipToEnd(),"comment";if(r=="0"&&e.eat("x"))return e.eatWhile(/[\da-f]/i),"number";if(r=="."&&e.eat(/\d/))return e.match(/\d*(?:e[+\-]?\d+)?/),"number";if(/\d/.test(r))return e.match(/\d*(?:\.\d+)?(?:e[+\-]\d+)?L?/),"number";if(r=="'"||r=='"')return n.tokenize=E(r),"string";if(r=="`")return e.match(/[^`]+`/),"string.special";if(r=="."&&e.match(/.(?:[.]|\d+)/))return"keyword";if(/[a-zA-Z\.]/.test(r)){e.eatWhile(/[\w\.]/);var i=e.current();return h.propertyIsEnumerable(i)?"atom":N.propertyIsEnumerable(i)?(A.propertyIsEnumerable(i)&&!e.match(/\s*if(\s+|$)/,!1)&&(t="block"),"keyword"):m.propertyIsEnumerable(i)?"builtin":"variable"}else return r=="%"?(e.skipTo("%")&&e.next(),"variableName.special"):r=="<"&&e.eat("-")||r=="<"&&e.match("<-")||r=="-"&&e.match(/>>?/)||r=="="&&n.ctx.argList?"operator":k.test(r)?(r=="$"||e.eatWhile(k),"operator"):/[\(\){}\[\];]/.test(r)?(t=r,r==";"?"punctuation":null):null}function E(e){return function(n,r){if(n.eat("\\")){var i=n.next();return i=="x"?n.match(/^[a-f0-9]{2}/i):(i=="u"||i=="U")&&n.eat("{")&&n.skipTo("}")?n.next():i=="u"?n.match(/^[a-f0-9]{4}/i):i=="U"?n.match(/^[a-f0-9]{8}/i):/[0-7]/.test(i)&&n.match(/^[0-7]{1,2}/),"string.special"}else{for(var l;(l=n.next())!=null;){if(l==e){r.tokenize=p;break}if(l=="\\"){n.backUp(1);break}}return"string"}}}var v=1,u=2,c=4;function o(e,n,r){e.ctx={type:n,indent:e.indent,flags:0,column:r.column(),prev:e.ctx}}function x(e,n){var r=e.ctx;e.ctx={type:r.type,indent:r.indent,flags:r.flags|n,column:r.column,prev:r.prev}}function a(e){e.indent=e.ctx.indent,e.ctx=e.ctx.prev}const 
I={name:"r",startState:function(e){return{tokenize:p,ctx:{type:"top",indent:-e,flags:u},indent:0,afterIdent:!1}},token:function(e,n){if(e.sol()&&(n.ctx.flags&3||(n.ctx.flags|=u),n.ctx.flags&c&&a(n),n.indent=e.indentation()),e.eatSpace())return null;var r=n.tokenize(e,n);return r!="comment"&&!(n.ctx.flags&u)&&x(n,v),(t==";"||t=="{"||t=="}")&&n.ctx.type=="block"&&a(n),t=="{"?o(n,"}",e):t=="("?(o(n,")",e),n.afterIdent&&(n.ctx.argList=!0)):t=="["?o(n,"]",e):t=="block"?o(n,"block",e):t==n.ctx.type?a(n):n.ctx.type=="block"&&r!="comment"&&x(n,c),n.afterIdent=r=="variable"||r=="keyword",r},indent:function(e,n,r){if(e.tokenize!=p)return 0;var i=n&&n.charAt(0),l=e.ctx,d=i==l.type;return l.flags&c&&(l=l.prev),l.type=="block"?l.indent+(i=="{"?0:r.unit):l.flags&v?l.column+(d?0:1):l.indent+(d?0:r.unit)},languageData:{wordChars:".",commentTokens:{line:"#"},autocomplete:b.concat(g,s)}};export{I as r}; -//# sourceMappingURL=r-3ca97919.js.map diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Demosthene-OR/avr23-cds-translation/tabs/modelisation_dict_tab.py b/spaces/Demosthene-OR/avr23-cds-translation/tabs/modelisation_dict_tab.py deleted file mode 100644 index 038e3255a7de23112b0d30683c9e0c9c37141530..0000000000000000000000000000000000000000 --- a/spaces/Demosthene-OR/avr23-cds-translation/tabs/modelisation_dict_tab.py +++ /dev/null @@ -1,263 +0,0 @@ -import streamlit as st -import pandas as pd -import numpy as np -import os -from sacrebleu import corpus_bleu -# from sklearn.cluster import KMeans -# from sklearn.neighbors import KNeighborsClassifier -# from sklearn.ensemble import RandomForestClassifier - -title = "Traduction mot à mot" -sidebar_name = "Traduction mot à mot" - -@st.cache_data -def load_corpus(path): - input_file = os.path.join(path) - with open(input_file, "r", encoding="utf-8") as f: - data = f.read() - data = data.split('\n') - data=data[:-1] - return pd.DataFrame(data) - -df_data_en = load_corpus('data/preprocess_txt_en') -df_data_fr = load_corpus('data/preprocess_txt_fr') -n1 = 0 -""" -nb_mots_en = 199 # len(corpus_en) -nb_mots_fr = 330 # len(corpus_fr) - - -# @st.cache_data(ttl='1h00s') -def load_BOW(path, l): - input_file = os.path.join(path) - df1 = pd.read_csv(input_file+'1_'+l, encoding="utf-8", index_col=0) - df2 = pd.read_csv(input_file+'2_'+l, encoding="utf-8", index_col=0) - df_count_word = pd.concat([df1, df2]) - return df_count_word - - -df_count_word_en = load_BOW('../data/preprocess_df_count_word', 'en') -df_count_word_fr = load_BOW('../data/preprocess_df_count_word', 'fr') -""" - -def accuracy(dict_ref,dict): - correct_words = 0 - - for t in dict.columns: - if t in dict_ref.columns: - if str(dict[t]) == str(dict_ref[t]): - correct_words +=1 - else: print("dict ref: manque:",t) - print(correct_words," mots corrects / ",min(dict.shape[1],dict_ref.shape[1])) - return correct_words/min(dict.shape[1],dict_ref.shape[1]) - -""" -# On modifie df_count_word en indiquant la présence d'un mot par 1 (au lieu du nombre d'occurences) -df_count_word_en = df_count_word_en[df_count_word_en==0].fillna(1) -df_count_word_fr = df_count_word_fr[df_count_word_fr==0].fillna(1) - -# On triche un peu parce que new et jersey sont toujours dans la même phrase et donc dans la même classe -if ('new' in df_count_word_en.columns): - 
df_count_word_en['new']=df_count_word_en['new']*2 - df_count_word_fr['new']=df_count_word_fr['new']*2 - - - -# ============ - -def calc_kmeans(l_src,l_tgt): - global df_count_word_src, df_count_word_tgt, nb_mots_src, nb_mots_tgt - - # Algorithme de K-means - init_centroids = df_count_word_tgt.T - kmeans = KMeans(n_clusters = nb_mots_tgt, n_init=1, max_iter=1, init=init_centroids, verbose=0) - - kmeans.fit(df_count_word_tgt.T) - - # Centroids and labels - centroids= kmeans.cluster_centers_ - labels = kmeans.labels_ - - # Création et affichage du dictionnaire - df_dic = pd.DataFrame(data=df_count_word_tgt.columns[kmeans.predict(df_count_word_src.T)],index=df_count_word_src.T.index,columns=[l_tgt]) - df_dic.index.name= l_src - df_dic = df_dic.T - # print("Dictionnaire Anglais -> Français:") - # translation_quality['Précision du dictionnaire'].loc['K-Means EN->FR'] =round(accuracy(dict_EN_FR_ref,dict_EN_FR)*100, 2) - # print(f"Précision du dictionnaire = {translation_quality['Précision du dictionnaire'].loc['K-Means EN->FR']}%") - # display(dict_EN_FR) - return df_dic - -def calc_knn(l_src,l_tgt, metric): - global df_count_word_src, df_count_word_tgt, nb_mots_src, nb_mots_tgt - - #Définition de la metrique (pour les 2 dictionnaires - knn_metric = metric # minkowski, cosine, chebyshev, manhattan, euclidean - - # Algorithme de KNN - X_train = df_count_word_tgt.T - y_train = range(nb_mots_tgt) - - # Création du classifieur et construction du modèle sur les données d'entraînement - knn = KNeighborsClassifier(n_neighbors=1, metric=knn_metric) - knn.fit(X_train, y_train) - - # Création et affichage du dictionnaire - df_dic = pd.DataFrame(data=df_count_word_tgt.columns[knn.predict(df_count_word_src.T)],index=df_count_word_src.T.index,columns=[l_tgt]) - df_dic.index.name = l_src - df_dic = df_dic.T - - # print("Dictionnaire Anglais -> Français:") - # translation_quality['Précision du dictionnaire'].loc['KNN EN->FR'] =round(accuracy(dict_EN_FR_ref,knn_dict_EN_FR)*100, 2) - # print(f"Précision du dictionnaire = {translation_quality['Précision du dictionnaire'].loc['KNN EN->FR']}%") - # display(knn_dict_EN_FR) - return df_dic - -def calc_rf(l_src,l_tgt): - - # Algorithme de Random Forest - X_train = df_count_word_tgt.T - y_train = range(nb_mots_tgt) - - # Création du classifieur et construction du modèle sur les données d'entraînement - rf = RandomForestClassifier(n_jobs=-1, random_state=321) - rf.fit(X_train, y_train) - - # Création et affichage du dictionnaire - df_dic = pd.DataFrame(data=df_count_word_tgt.columns[rf.predict(df_count_word_src.T)],index=df_count_word_src.T.index,columns=[l_tgt]) - df_dic.index.name= l_src - df_dic = df_dic.T - - # print("Dictionnaire Anglais -> Français:") - # translation_quality['Précision du dictionnaire'].loc['RF EN->FR'] = round(accuracy(dict_EN_FR_ref,rf_dict_EN_FR)*100, 2) - # print(f"Précision du dictionnaire = {translation_quality['Précision du dictionnaire'].loc['RF EN->FR']}%") - # display(rf_dict_EN_FR) - return df_dic - -def calcul_dic(Lang,Algo,Metrique): - - if Lang[:2]=='en': - l_src = 'Anglais' - l_tgt = 'Francais' - else: - l_src = 'Francais' - l_tgt = 'Anglais' - - if Algo=='Manuel': - df_dic = pd.read_csv('../data/dict_ref_'+Lang+'.csv',header=0,index_col=0, encoding ="utf-8", sep=';',keep_default_na=False).T.sort_index(axis=1) - elif Algo=='KMeans': - df_dic = calc_kmeans(l_src,l_tgt) - elif Algo=='KNN': - df_dic = calc_knn(l_src,l_tgt, Metrique) - elif Algo=='Random Forest': - df_dic = calc_rf(l_src,l_tgt) - else: - df_dic = 
pd.read_csv('../data/dict_ref_'+Lang+'.csv',header=0,index_col=0, encoding ="utf-8", sep=';',keep_default_na=False).T.sort_index(axis=1) - return df_dic -""" -def load_dic(Lang,Algo,Metrique): - - Algo = Algo.lower() - if Algo=='random forest' : Algo = "rf" - else: - if Algo=='word embedding' : Algo = "we" - else: - if Algo!='knn': Metrique = '' - else: Metrique = Metrique+'_' - input_file = os.path.join('data/dict_'+Algo+'_'+Metrique+Lang) - return pd.read_csv(input_file, encoding="utf-8", index_col=0).T.sort_index(axis=1) -# ============ - -def display_translation(n1,dict, Lang): - global df_data_src, df_data_tgt, placeholder - - s = df_data_src.iloc[n1:n1+5][0].tolist() - s_trad = [] - s_trad_ref = df_data_tgt.iloc[n1:n1+5][0].tolist() - source = Lang[:2] - target = Lang[-2:] - for i in range(5): - # for col in s.split(): - # st.write('col: '+col) - # st.write('dict[col]! '+dict[col]) - s_trad.append((' '.join(dict[col].iloc[0] for col in s[i].split()))) - st.write("**"+source+" :** :blue["+ s[i]+"]") - st.write("**"+target+" :** "+s_trad[-1]) - st.write("**ref. :** "+s_trad_ref[i]) - st.write("") - with placeholder: - st.write("
    Score Bleu = "+str(int(round(corpus_bleu(s_trad,[s_trad_ref]).score,0)))+"%
    ", \ - unsafe_allow_html=True) - -def display_dic(df_dic): - st.dataframe(df_dic.T, height=600) - - -def run(): - global n1, df_data_src, df_data_tgt, df_data_en, df_data_fr, placeholder # , df_count_word_src, df_count_word_tgt, nb_mots_src, nb_mots_tgt - # global nb_mots_en, df_count_word_en, df_count_word_fr, nb_mots_en, nb_mots_fr - - st.title(title) - - # - st.write("## **Explications :**\n") - st.markdown( - """ - Dans une première approche naïve, nous avons implémenté un système de traduction mot à mot. - Cette traduction est réalisée grâce à un dictionnaire qui associe un mot de la langue source à un mot de la langue cible, dans small_vocab - Ce dictionnaire est calculé de 3 manières: - * :red[**Manuellement**] en choisissant pour chaque mot source le mot cible. Ceci nous a permis de définir un dictionnaire de référence - * Avec le :red[**Bag Of World**] (chaque mot dans la langue cible = une classe, BOW = features) - """) - st.image("assets/BOW.jpg",use_column_width=True) - st.markdown( - """ - * Avec le :red[**Word Embedding**], c'est à dire en associant chaque mot à un vecteur "sémantique" de dimensions=300, et en selectionnant le vecteur de langue cible - le plus proche du vecteur de langue source. - - Enfin nous calculons: - * la :red[**précision**] du dictionnaire par rapport à notre dictionnaire de réference (manuel) - * le :red[**score BLEU**] ("BiLingual Evaluation Understudy"), qui mesure la précision de notre traduction par rapport à celle de notre corpus référence. - """ - ) - # - st.write("## **Paramètres :**\n") - Sens = st.radio('Sens :',('Anglais -> Français','Français -> Anglais'), horizontal=True) - Lang = ('en_fr' if Sens=='Anglais -> Français' else 'fr_en') - Algo = st.radio('Algorithme :',('Manuel', 'KMeans','KNN','Random Forest','Word Embedding'), horizontal=True) - Metrique = '' - if (Algo == 'KNN'): - Metrique = st.radio('Metrique:',('minkowski', 'cosine', 'chebyshev', 'manhattan', 'euclidean'), horizontal=True) - - if (Lang=='en_fr'): - df_data_src = df_data_en - df_data_tgt = df_data_fr - # df_count_word_src = df_count_word_en - # df_count_word_tgt = df_count_word_fr - # nb_mots_src = nb_mots_en - # nb_mots_tgt = nb_mots_fr - else: - df_data_src = df_data_fr - df_data_tgt = df_data_en - # df_count_word_src = df_count_word_fr - # df_count_word_tgt = df_count_word_en - # nb_mots_src = nb_mots_fr - # nb_mots_tgt = nb_mots_en - - # df_data_src.columns = ['Phrase'] - sentence1 = st.selectbox("Selectionnez la 1ere des 5 phrases à traduire avec le dictionnaire sélectionné", df_data_src.iloc[:-4],index=int(n1) ) - n1 = df_data_src[df_data_src[0]==sentence1].index.values[0] - - df_dic = load_dic(Lang,Algo,Metrique) - df_dic_ref = load_dic(Lang,'Manuel',Metrique) - st.write("## **Dictionnaire calculé et traduction mot à mot :**\n") - col1, col2 = st.columns([0.25, 0.75]) - with col1: - st.write("#### **Dictionnaire**") - precision = int(round(accuracy(df_dic_ref,df_dic)*100, 0)) - st.write("
    Précision = {:2d}%
    ".format(precision), unsafe_allow_html=True) - display_dic(df_dic) - with col2: - st.write("#### **Traduction**") - placeholder = st.empty() - display_translation(n1, df_dic, Lang) diff --git a/spaces/Dragonnnext/Unicorn-proxy/Dockerfile b/spaces/Dragonnnext/Unicorn-proxy/Dockerfile deleted file mode 100644 index 97eed882cd9fb47d4d06f4ca56ef3517e29baa19..0000000000000000000000000000000000000000 --- a/spaces/Dragonnnext/Unicorn-proxy/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/Drago/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/ECCV2022/bytetrack/tutorials/ctracker/byte_tracker.py b/spaces/ECCV2022/bytetrack/tutorials/ctracker/byte_tracker.py deleted file mode 100644 index 0a6ae80119025c0b9b35419ab4ccb5a107b25c0e..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/tutorials/ctracker/byte_tracker.py +++ /dev/null @@ -1,343 +0,0 @@ -import numpy as np -from collections import deque -import os -import os.path as osp -import copy -import torch -import torch.nn.functional as F - -from mot_online.kalman_filter import KalmanFilter -from mot_online.basetrack import BaseTrack, TrackState -from mot_online import matching - - - -class STrack(BaseTrack): - shared_kalman = KalmanFilter() - def __init__(self, tlwh, score): - - # wait activate - self._tlwh = np.asarray(tlwh, dtype=np.float) - self.kalman_filter = None - self.mean, self.covariance = None, None - self.is_activated = False - - self.score = score - self.tracklet_len = 0 - - def predict(self): - mean_state = self.mean.copy() - if self.state != TrackState.Tracked: - mean_state[7] = 0 - self.mean, self.covariance = self.kalman_filter.predict(mean_state, self.covariance) - - @staticmethod - def multi_predict(stracks): - if len(stracks) > 0: - multi_mean = np.asarray([st.mean.copy() for st in stracks]) - multi_covariance = np.asarray([st.covariance for st in stracks]) - for i, st in enumerate(stracks): - if st.state != TrackState.Tracked: - multi_mean[i][7] = 0 - multi_mean, multi_covariance = STrack.shared_kalman.multi_predict(multi_mean, multi_covariance) - for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)): - stracks[i].mean = mean - stracks[i].covariance = cov - - def activate(self, kalman_filter, frame_id): - """Start a new tracklet""" - self.kalman_filter = kalman_filter - self.track_id = self.next_id() - self.mean, self.covariance = self.kalman_filter.initiate(self.tlwh_to_xyah(self._tlwh)) - - self.tracklet_len = 0 - self.state = TrackState.Tracked - if frame_id == 1: - self.is_activated = True - # self.is_activated = True - self.frame_id = frame_id - self.start_frame = frame_id - - def re_activate(self, new_track, frame_id, new_id=False): - self.mean, self.covariance = self.kalman_filter.update( - self.mean, self.covariance, self.tlwh_to_xyah(new_track.tlwh) - ) - self.tracklet_len = 0 - self.state = TrackState.Tracked - self.is_activated = True - self.frame_id = frame_id - if new_id: - self.track_id = self.next_id() - self.score = new_track.score - - def update(self, new_track, frame_id): - """ - Update a matched track - :type new_track: STrack - :type frame_id: int - :type update_feature: bool - :return: - """ - self.frame_id = frame_id - self.tracklet_len += 1 - - new_tlwh = new_track.tlwh - self.mean, self.covariance = 
self.kalman_filter.update( - self.mean, self.covariance, self.tlwh_to_xyah(new_tlwh)) - self.state = TrackState.Tracked - self.is_activated = True - - self.score = new_track.score - - @property - # @jit(nopython=True) - def tlwh(self): - """Get current position in bounding box format `(top left x, top left y, - width, height)`. - """ - if self.mean is None: - return self._tlwh.copy() - ret = self.mean[:4].copy() - ret[2] *= ret[3] - ret[:2] -= ret[2:] / 2 - return ret - - @property - # @jit(nopython=True) - def tlbr(self): - """Convert bounding box to format `(min x, min y, max x, max y)`, i.e., - `(top left, bottom right)`. - """ - ret = self.tlwh.copy() - ret[2:] += ret[:2] - return ret - - @staticmethod - # @jit(nopython=True) - def tlwh_to_xyah(tlwh): - """Convert bounding box to format `(center x, center y, aspect ratio, - height)`, where the aspect ratio is `width / height`. - """ - ret = np.asarray(tlwh).copy() - ret[:2] += ret[2:] / 2 - ret[2] /= ret[3] - return ret - - def to_xyah(self): - return self.tlwh_to_xyah(self.tlwh) - - @staticmethod - # @jit(nopython=True) - def tlbr_to_tlwh(tlbr): - ret = np.asarray(tlbr).copy() - ret[2:] -= ret[:2] - return ret - - @staticmethod - # @jit(nopython=True) - def tlwh_to_tlbr(tlwh): - ret = np.asarray(tlwh).copy() - ret[2:] += ret[:2] - return ret - - def __repr__(self): - return 'OT_{}_({}-{})'.format(self.track_id, self.start_frame, self.end_frame) - - -class BYTETracker(object): - def __init__(self, frame_rate=30): - self.tracked_stracks = [] # type: list[STrack] - self.lost_stracks = [] # type: list[STrack] - self.removed_stracks = [] # type: list[STrack] - - self.frame_id = 0 - - self.low_thresh = 0.2 - self.track_thresh = 0.4 - self.det_thresh = self.track_thresh + 0.1 - - - self.buffer_size = int(frame_rate / 30.0 * 30) - self.max_time_lost = self.buffer_size - self.kalman_filter = KalmanFilter() - -# def update(self, output_results): - def update(self, det_bboxes, scores): - - self.frame_id += 1 - activated_starcks = [] - refind_stracks = [] - lost_stracks = [] - removed_stracks = [] - -# scores = output_results[:, 4] -# bboxes = output_results[:, :4] # x1y1x2y2 - scores = scores - bboxes = det_bboxes - - remain_inds = scores > self.track_thresh - dets = bboxes[remain_inds] - scores_keep = scores[remain_inds] - - - inds_low = scores > self.low_thresh - inds_high = scores < self.track_thresh - inds_second = np.logical_and(inds_low, inds_high) - dets_second = bboxes[inds_second] - scores_second = scores[inds_second] - - - if len(dets) > 0: - '''Detections''' - detections = [STrack(STrack.tlbr_to_tlwh(tlbr), s) for - (tlbr, s) in zip(dets, scores_keep)] - else: - detections = [] - - ''' Add newly detected tracklets to tracked_stracks''' - unconfirmed = [] - tracked_stracks = [] # type: list[STrack] - for track in self.tracked_stracks: - if not track.is_activated: - unconfirmed.append(track) - else: - tracked_stracks.append(track) - - ''' Step 2: First association, with Kalman and IOU''' - strack_pool = joint_stracks(tracked_stracks, self.lost_stracks) - # Predict the current location with KF - STrack.multi_predict(strack_pool) - dists = matching.iou_distance(strack_pool, detections) - matches, u_track, u_detection = matching.linear_assignment(dists, thresh=0.8) - - for itracked, idet in matches: - track = strack_pool[itracked] - det = detections[idet] - if track.state == TrackState.Tracked: - track.update(detections[idet], self.frame_id) - activated_starcks.append(track) - else: - track.re_activate(det, self.frame_id, new_id=False) - 
refind_stracks.append(track) - - ''' Step 3: Second association, with IOU''' - # association the untrack to the low score detections - if len(dets_second) > 0: - '''Detections''' - detections_second = [STrack(STrack.tlbr_to_tlwh(tlbr), s) for - (tlbr, s) in zip(dets_second, scores_second)] - else: - detections_second = [] - r_tracked_stracks = [strack_pool[i] for i in u_track if strack_pool[i].state == TrackState.Tracked] - dists = matching.iou_distance(r_tracked_stracks, detections_second) - matches, u_track, u_detection_second = matching.linear_assignment(dists, thresh=0.5) - for itracked, idet in matches: - track = r_tracked_stracks[itracked] - det = detections_second[idet] - if track.state == TrackState.Tracked: - track.update(det, self.frame_id) - activated_starcks.append(track) - else: - track.re_activate(det, self.frame_id, new_id=False) - refind_stracks.append(track) - - for it in u_track: - #track = strack_pool[it] - track = r_tracked_stracks[it] - if not track.state == TrackState.Lost: - track.mark_lost() - lost_stracks.append(track) - - '''Deal with unconfirmed tracks, usually tracks with only one beginning frame''' - detections = [detections[i] for i in u_detection] - dists = matching.iou_distance(unconfirmed, detections) - matches, u_unconfirmed, u_detection = matching.linear_assignment(dists, thresh=0.7) - for itracked, idet in matches: - unconfirmed[itracked].update(detections[idet], self.frame_id) - activated_starcks.append(unconfirmed[itracked]) - for it in u_unconfirmed: - track = unconfirmed[it] - track.mark_removed() - removed_stracks.append(track) - - """ Step 4: Init new stracks""" - for inew in u_detection: - track = detections[inew] - if track.score < self.det_thresh: - continue - track.activate(self.kalman_filter, self.frame_id) - activated_starcks.append(track) - """ Step 5: Update state""" - for track in self.lost_stracks: - if self.frame_id - track.end_frame > self.max_time_lost: - track.mark_removed() - removed_stracks.append(track) - - # print('Ramained match {} s'.format(t4-t3)) - - self.tracked_stracks = [t for t in self.tracked_stracks if t.state == TrackState.Tracked] - self.tracked_stracks = joint_stracks(self.tracked_stracks, activated_starcks) - self.tracked_stracks = joint_stracks(self.tracked_stracks, refind_stracks) - self.lost_stracks = sub_stracks(self.lost_stracks, self.tracked_stracks) - self.lost_stracks.extend(lost_stracks) - self.lost_stracks = sub_stracks(self.lost_stracks, self.removed_stracks) - self.removed_stracks.extend(removed_stracks) - self.tracked_stracks, self.lost_stracks = remove_duplicate_stracks(self.tracked_stracks, self.lost_stracks) - # get scores of lost tracks - output_stracks = [track for track in self.tracked_stracks if track.is_activated] - - return output_stracks - - - -def joint_stracks(tlista, tlistb): - exists = {} - res = [] - for t in tlista: - exists[t.track_id] = 1 - res.append(t) - for t in tlistb: - tid = t.track_id - if not exists.get(tid, 0): - exists[tid] = 1 - res.append(t) - return res - - -def sub_stracks(tlista, tlistb): - stracks = {} - for t in tlista: - stracks[t.track_id] = t - for t in tlistb: - tid = t.track_id - if stracks.get(tid, 0): - del stracks[tid] - return list(stracks.values()) - - -def remove_duplicate_stracks(stracksa, stracksb): - pdist = matching.iou_distance(stracksa, stracksb) - pairs = np.where(pdist < 0.15) - dupa, dupb = list(), list() - for p, q in zip(*pairs): - timep = stracksa[p].frame_id - stracksa[p].start_frame - timeq = stracksb[q].frame_id - stracksb[q].start_frame - if 
timep > timeq: - dupb.append(q) - else: - dupa.append(p) - resa = [t for i, t in enumerate(stracksa) if not i in dupa] - resb = [t for i, t in enumerate(stracksb) if not i in dupb] - return resa, resb - - -def remove_fp_stracks(stracksa, n_frame=10): - remain = [] - for t in stracksa: - score_5 = t.score_list[-n_frame:] - score_5 = np.array(score_5, dtype=np.float32) - index = score_5 < 0.45 - num = np.sum(index) - if num < n_frame: - remain.append(t) - return remain diff --git a/spaces/FrankZxShen/vits-fast-finetuning-pcr/models.py b/spaces/FrankZxShen/vits-fast-finetuning-pcr/models.py deleted file mode 100644 index 65f9ae5255616efa19a4f28bc0a840d4c453a060..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-finetuning-pcr/models.py +++ /dev/null @@ -1,722 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - 
logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class TextEncoder_lora(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels, r=4) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = 
attentions.Encoder_lora( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, 
padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) 
- y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), 
m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - - -class SynthesizerTrn_lora(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder_lora(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, 
inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) \ No newline at end of file diff --git a/spaces/FridaZuley/RVC_HFKawaii/colab_for_mdx.py b/spaces/FridaZuley/RVC_HFKawaii/colab_for_mdx.py deleted file mode 100644 index 274846d0b5395865a05fce0da86b96d26ac06999..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/colab_for_mdx.py +++ /dev/null @@ -1,71 +0,0 @@ -import json -import os -import gc -import psutil -import requests -import subprocess -import time -import logging -import sys -import shutil -now_dir = os.getcwd() -sys.path.append(now_dir) -first_cell_executed = False -file_folder = "Colab-for-MDX_B" -def first_cell_ran(): - global first_cell_executed - if first_cell_executed: - #print("The 'first_cell_ran' function has already been executed.") - return - - - - first_cell_executed = True - os.makedirs("tmp_models", exist_ok=True) - - - - class hide_opt: # hide outputs - def __enter__(self): - self._original_stdout = sys.stdout - sys.stdout = open(os.devnull, "w") - - def __exit__(self, exc_type, exc_val, exc_tb): - sys.stdout.close() - sys.stdout = self._original_stdout - - def get_size(bytes, suffix="B"): # read ram - global svmem - factor = 1024 - for unit in ["", "K", "M", "G", "T", "P"]: - if bytes < factor: - return f"{bytes:.2f}{unit}{suffix}" - bytes /= factor - svmem = psutil.virtual_memory() - - - def use_uvr_without_saving(): - print("Notice: files won't be saved to personal drive.") - print(f"Downloading {file_folder}...", end=" ") - with hide_opt(): - #os.chdir(mounting_path) - items_to_move = ["demucs", "diffq","julius","model","separated","tracks","mdx.py","MDX-Net_Colab.ipynb"] - subprocess.run(["git", "clone", "https://github.com/NaJeongMo/Colab-for-MDX_B.git"]) - for item_name in items_to_move: - item_path = os.path.join(file_folder, item_name) - if os.path.exists(item_path): - if os.path.isfile(item_path): - shutil.move(item_path, now_dir) - elif os.path.isdir(item_path): - shutil.move(item_path, now_dir) - try: - shutil.rmtree(file_folder) - except PermissionError: - print(f"No se pudo eliminar la carpeta {file_folder}. 
Puede estar relacionada con Git.") - - - use_uvr_without_saving() - print("done!") - if not os.path.exists("tracks"): - os.mkdir("tracks") -first_cell_ran() \ No newline at end of file diff --git a/spaces/Frorozcol/mariposas/utils.py b/spaces/Frorozcol/mariposas/utils.py deleted file mode 100644 index ddafe6f774e0c9ba0ade822d7a90360ef1bebe79..0000000000000000000000000000000000000000 --- a/spaces/Frorozcol/mariposas/utils.py +++ /dev/null @@ -1,15 +0,0 @@ -import numpy as np -import torch -from huggan.pytorch.lightweight_gan.lightweight_gan import LightweightGAN - -def cargar_mdoel(model_name = "ceyda/butterfly_cropped_uniq1K_512", model_version = None): - gan = LightweightGAN.from_pretrained(model_name, version = model_version) - gan.eval() - return gan - -def general(gan, bach_size=1): - with torch.no_grad(): - ims = gan.G(torch.rand(bach_size, gan.latent_dim)).clamp_(0.0,1.0) * 255 - ims = ims.permute(0,2,3,1).detach().cpu().numpy().astype(np.uint8) - return ims - \ No newline at end of file diff --git a/spaces/Frorozcol/music_recommedation/src/__init__.py b/spaces/Frorozcol/music_recommedation/src/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/GFXY/Maseshi-Anything-v3.0/README.md b/spaces/GFXY/Maseshi-Anything-v3.0/README.md deleted file mode 100644 index 97fb91d42f5375717ce4d432bb941efc2075d9e0..0000000000000000000000000000000000000000 --- a/spaces/GFXY/Maseshi-Anything-v3.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Maseshi Anything V3.0 -emoji: 🏃 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false -license: agpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GXSA/bingo/src/components/ui/alert-dialog.tsx b/spaces/GXSA/bingo/src/components/ui/alert-dialog.tsx deleted file mode 100644 index 17fec4d16510328deacc1416569173c97761ef72..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/src/components/ui/alert-dialog.tsx +++ /dev/null @@ -1,150 +0,0 @@ -'use client' - -import * as React from 'react' -import * as AlertDialogPrimitive from '@radix-ui/react-alert-dialog' - -import { cn } from '@/lib/utils' -import { buttonVariants } from '@/components/ui/button' - -const AlertDialog = AlertDialogPrimitive.Root - -const AlertDialogTrigger = AlertDialogPrimitive.Trigger - -const AlertDialogPortal = ({ - className, - children, - ...props -}: AlertDialogPrimitive.AlertDialogPortalProps) => ( - -
    - {children} -
    -
    -) -AlertDialogPortal.displayName = AlertDialogPrimitive.Portal.displayName - -const AlertDialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -AlertDialogOverlay.displayName = AlertDialogPrimitive.Overlay.displayName - -const AlertDialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - - -)) -AlertDialogContent.displayName = AlertDialogPrimitive.Content.displayName - -const AlertDialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
    -) -AlertDialogHeader.displayName = 'AlertDialogHeader' - -const AlertDialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
    -) -AlertDialogFooter.displayName = 'AlertDialogFooter' - -const AlertDialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogTitle.displayName = AlertDialogPrimitive.Title.displayName - -const AlertDialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogDescription.displayName = - AlertDialogPrimitive.Description.displayName - -const AlertDialogAction = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogAction.displayName = AlertDialogPrimitive.Action.displayName - -const AlertDialogCancel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogCancel.displayName = AlertDialogPrimitive.Cancel.displayName - -export { - AlertDialog, - AlertDialogTrigger, - AlertDialogContent, - AlertDialogHeader, - AlertDialogFooter, - AlertDialogTitle, - AlertDialogDescription, - AlertDialogAction, - AlertDialogCancel -} diff --git a/spaces/GZZYYP/bingo/Dockerfile b/spaces/GZZYYP/bingo/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/GZZYYP/bingo/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/utility/core_version_utility.py b/spaces/GaenKoki/voicevox/voicevox_engine/utility/core_version_utility.py deleted file mode 100644 index 25f2d3a3e7e7ed3a25e52075eb74be08c96451db..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/voicevox_engine/utility/core_version_utility.py +++ /dev/null @@ -1,14 +0,0 @@ -from typing import Iterable - -from semver.version import Version - - -def parse_core_version(version: str) -> Version: - return Version.parse(version) - - -def get_latest_core_version(versions: Iterable[str]) -> str: - if len(versions) == 0: - raise Exception("versions must be non-empty.") - - return str(max(map(parse_core_version, versions))) diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/sweeping_piles.py b/spaces/Gen-Sim/Gen-Sim/cliport/tasks/sweeping_piles.py deleted file mode 100644 index 36096e16333b10ac5fdd5c0fed2093eff1b658f5..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/sweeping_piles.py +++ /dev/null @@ -1,33 +0,0 @@ -import numpy as np -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula -from cliport.tasks.task import Task -from cliport.utils import utils - - -class SweepingPiles(Task): - """Push piles of small objects into a target goal zone marked on the tabletop.""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "push the pile of blocks into the green square" - self.task_completed_desc = "done sweeping." - self.primitive = primitives.push - self.ee = Spatula - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Add goal zone. 
- zone_size = (0.12, 0.12, 0) - zone_pose = self.get_random_pose(env, zone_size) - env.add_object('zone/zone.urdf', zone_pose, 'fixed') - - # Add pile of small blocks with `make_piles` function - obj_ids = self.make_piles(env) - - # Add goal - self.add_goal(objs=obj_ids, matches=np.ones((50, 1)), targ_poses=[zone_pose], replace=True, - rotations=False, metric='zone', params=[(zone_pose, zone_size)], step_max_reward=1, language_goal=self.lang_template) \ No newline at end of file diff --git a/spaces/Gradio-Blocks/Anime-BigGAN/model.py b/spaces/Gradio-Blocks/Anime-BigGAN/model.py deleted file mode 100644 index 7edfd2e369c4eab24b4a3cb3ff662cbd55c75b61..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/Anime-BigGAN/model.py +++ /dev/null @@ -1,395 +0,0 @@ -#@title Define Generator and Discriminator model -import numpy as np -import torch -from torch import nn -from torch.nn import Parameter -from torch.nn import functional as F - - -def l2_normalize(v, dim=None, eps=1e-12): - return v / (v.norm(dim=dim, keepdim=True) + eps) - - -def unpool(value): - """Unpooling operation. - N-dimensional version of the unpooling operation from - https://www.robots.ox.ac.uk/~vgg/rg/papers/Dosovitskiy_Learning_to_Generate_2015_CVPR_paper.pdf - Taken from: https://github.com/tensorflow/tensorflow/issues/2169 - Args: - value: a Tensor of shape [b, d0, d1, ..., dn, ch] - name: name of the op - Returns: - A Tensor of shape [b, 2*d0, 2*d1, ..., 2*dn, ch] - """ - value = torch.Tensor.permute(value, [0,2,3,1]) - sh = list(value.shape) - dim = len(sh[1:-1]) - out = (torch.reshape(value, [-1] + sh[-dim:])) - for i in range(dim, 0, -1): - out = torch.cat([out, torch.zeros_like(out)], i) - out_size = [-1] + [s * 2 for s in sh[1:-1]] + [sh[-1]] - out = torch.reshape(out, out_size) - out = torch.Tensor.permute(out, [0,3,1,2]) - return out - - -class BatchNorm2d(nn.BatchNorm2d): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.initialized = False - self.accumulating = False - self.accumulated_mean = Parameter(torch.zeros(args[0]), requires_grad=False) - self.accumulated_var = Parameter(torch.zeros(args[0]), requires_grad=False) - self.accumulated_counter = Parameter(torch.zeros(1)+1e-12, requires_grad=False) - - def forward(self, inputs, *args, **kwargs): - if not self.initialized: - self.check_accumulation() - self.set_initialized(True) - if self.accumulating: - self.eval() - with torch.no_grad(): - axes = [0] + ([] if len(inputs.shape) == 2 else list(range(2,len(inputs.shape)))) - _mean = torch.mean(inputs, axes, keepdim=True) - mean = torch.mean(inputs, axes, keepdim=False) - var = torch.mean((inputs-_mean)**2, axes) - self.accumulated_mean.copy_(self.accumulated_mean + mean) - self.accumulated_var.copy_(self.accumulated_var + var) - self.accumulated_counter.copy_(self.accumulated_counter + 1) - _mean = self.running_mean*1.0 - _variance = self.running_var*1.0 - self._mean.copy_(self.accumulated_mean / self.accumulated_counter) - self._variance.copy_(self.accumulated_var / self.accumulated_counter) - out = super().forward(inputs, *args, **kwargs) - self.running_mean.copy_(_mean) - self.running_var.copy_(_variance) - return out - out = super().forward(inputs, *args, **kwargs) - return out - - def check_accumulation(self): - if self.accumulated_counter.detach().cpu().numpy().mean() > 1-1e-12: - self.running_mean.copy_(self.accumulated_mean / self.accumulated_counter) - self.running_var.copy_(self.accumulated_var / self.accumulated_counter) - return True - return False - - 
def clear_accumulated(self): - self.accumulated_mean.copy_(self.accumulated_mean*0.0) - self.accumulated_var.copy_(self.accumulated_var*0.0) - self.accumulated_counter.copy_(self.accumulated_counter*0.0+1e-2) - - def set_accumulating(self, status=True): - if status: - self.accumulating = True - else: - self.accumulating = False - - def set_initialized(self, status=False): - if not status: - self.initialized = False - else: - self.initialized = True - - -class SpectralNorm(nn.Module): - def __init__(self, module, name='weight', power_iterations=2): - super().__init__() - self.module = module - self.name = name - self.power_iterations = power_iterations - if not self._made_params(): - self._make_params() - - def _update_u(self): - w = self.weight - u = self.weight_u - - if len(w.shape) == 4: - _w = torch.Tensor.permute(w, [2,3,1,0]) - _w = torch.reshape(_w, [-1, _w.shape[-1]]) - elif isinstance(self.module, nn.Linear) or isinstance(self.module, nn.Embedding): - _w = torch.Tensor.permute(w, [1,0]) - _w = torch.reshape(_w, [-1, _w.shape[-1]]) - else: - _w = torch.reshape(w, [-1, w.shape[-1]]) - _w = torch.reshape(_w, [-1, _w.shape[-1]]) - singular_value = "left" if _w.shape[0] <= _w.shape[1] else "right" - norm_dim = 0 if _w.shape[0] <= _w.shape[1] else 1 - for _ in range(self.power_iterations): - if singular_value == "left": - v = l2_normalize(torch.matmul(_w.t(), u), dim=norm_dim) - u = l2_normalize(torch.matmul(_w, v), dim=norm_dim) - else: - v = l2_normalize(torch.matmul(u, _w.t()), dim=norm_dim) - u = l2_normalize(torch.matmul(v, _w), dim=norm_dim) - - if singular_value == "left": - sigma = torch.matmul(torch.matmul(u.t(), _w), v) - else: - sigma = torch.matmul(torch.matmul(v, _w), u.t()) - _w = w / sigma.detach() - setattr(self.module, self.name, _w) - self.weight_u.copy_(u.detach()) - - def _made_params(self): - try: - self.weight - self.weight_u - return True - except AttributeError: - return False - - def _make_params(self): - w = getattr(self.module, self.name) - - if len(w.shape) == 4: - _w = torch.Tensor.permute(w, [2,3,1,0]) - _w = torch.reshape(_w, [-1, _w.shape[-1]]) - elif isinstance(self.module, nn.Linear) or isinstance(self.module, nn.Embedding): - _w = torch.Tensor.permute(w, [1,0]) - _w = torch.reshape(_w, [-1, _w.shape[-1]]) - else: - _w = torch.reshape(w, [-1, w.shape[-1]]) - singular_value = "left" if _w.shape[0] <= _w.shape[1] else "right" - norm_dim = 0 if _w.shape[0] <= _w.shape[1] else 1 - u_shape = (_w.shape[0], 1) if singular_value == "left" else (1, _w.shape[-1]) - - u = Parameter(w.data.new(*u_shape).normal_(0, 1), requires_grad=False) - u.copy_(l2_normalize(u, dim=norm_dim).detach()) - - del self.module._parameters[self.name] - self.weight = w - self.weight_u = u - - def forward(self, *args, **kwargs): - self._update_u() - return self.module.forward(*args, **kwargs) - - -class SelfAttention(nn.Module): - def __init__(self, in_dim, activation=torch.relu): - super().__init__() - self.chanel_in = in_dim - self.activation = activation - - self.theta = SpectralNorm(nn.Conv2d(in_dim, in_dim // 8, 1, bias=False)) - self.phi = SpectralNorm(nn.Conv2d(in_dim, in_dim // 8, 1, bias=False)) - self.pool = nn.MaxPool2d(2, 2) - self.g = SpectralNorm(nn.Conv2d(in_dim, in_dim // 2, 1, bias=False)) - self.o_conv = SpectralNorm(nn.Conv2d(in_dim // 2, in_dim, 1, bias=False)) - self.gamma = Parameter(torch.zeros(1)) - - def forward(self, x): - m_batchsize, C, width, height = x.shape - N = height * width - - theta = self.theta(x) - phi = self.phi(x) - phi = self.pool(phi) - phi = 
torch.reshape(phi,(m_batchsize, -1, N // 4)) - theta = torch.reshape(theta,(m_batchsize, -1, N)) - theta = torch.Tensor.permute(theta,(0, 2, 1)) - attention = torch.softmax(torch.bmm(theta, phi), -1) - g = self.g(x) - g = torch.reshape(self.pool(g),(m_batchsize, -1, N // 4)) - attn_g = torch.reshape(torch.bmm(g, torch.Tensor.permute(attention,(0, 2, 1))),(m_batchsize, -1, width, height)) - out = self.o_conv(attn_g) - return self.gamma * out + x - - -class ConditionalBatchNorm2d(nn.Module): - def __init__(self, num_features, num_classes, eps=1e-5, momentum=0.1): - super().__init__() - self.bn_in_cond = BatchNorm2d(num_features, affine=False, eps=eps, momentum=momentum) - self.gamma_embed = SpectralNorm(nn.Linear(num_classes, num_features, bias=False)) - self.beta_embed = SpectralNorm(nn.Linear(num_classes, num_features, bias=False)) - - def forward(self, x, y): - out = self.bn_in_cond(x) - - if isinstance(y, list): - gamma, beta = y - out = torch.reshape(gamma, (gamma.shape[0], -1, 1, 1)) * out + torch.reshape(beta, (beta.shape[0], -1, 1, 1)) - return out - - gamma = self.gamma_embed(y) - # gamma = gamma + 1 - beta = self.beta_embed(y) - out = torch.reshape(gamma, (gamma.shape[0], -1, 1, 1)) * out + torch.reshape(beta, (beta.shape[0], -1, 1, 1)) - return out - - -class ResBlock(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size=[3, 3], - padding=1, - stride=1, - n_class=None, - conditional=True, - activation=torch.relu, - upsample=True, - downsample=False, - z_dim=128, - use_attention=False, - skip_proj=None - ): - super().__init__() - - if conditional: - self.cond_norm1 = ConditionalBatchNorm2d(in_channel, z_dim) - - self.conv0 = SpectralNorm( - nn.Conv2d(in_channel, out_channel, kernel_size, stride, padding) - ) - - if conditional: - self.cond_norm2 = ConditionalBatchNorm2d(out_channel, z_dim) - - self.conv1 = SpectralNorm( - nn.Conv2d(out_channel, out_channel, kernel_size, stride, padding) - ) - - self.skip_proj = False - if skip_proj is not True and (upsample or downsample): - self.conv_sc = SpectralNorm(nn.Conv2d(in_channel, out_channel, 1, 1, 0)) - self.skip_proj = True - - if use_attention: - self.attention = SelfAttention(out_channel) - - self.upsample = upsample - self.downsample = downsample - self.activation = activation - self.conditional = conditional - self.use_attention = use_attention - - def forward(self, input, condition=None): - out = input - - if self.conditional: - out = self.cond_norm1(out, condition if not isinstance(condition, list) else condition[0]) - out = self.activation(out) - if self.upsample: - out = unpool(out) # out = F.interpolate(out, scale_factor=2) - out = self.conv0(out) - if self.conditional: - out = self.cond_norm2(out, condition if not isinstance(condition, list) else condition[1]) - out = self.activation(out) - out = self.conv1(out) - - if self.downsample: - out = F.avg_pool2d(out, 2, 2) - - if self.skip_proj: - skip = input - if self.upsample: - skip = unpool(skip) # skip = F.interpolate(skip, scale_factor=2) - skip = self.conv_sc(skip) - if self.downsample: - skip = F.avg_pool2d(skip, 2, 2) - out = out + skip - else: - skip = input - - if self.use_attention: - out = self.attention(out) - - return out - - -class Generator(nn.Module): - def __init__(self, code_dim=128, n_class=1000, chn=96, blocks_with_attention="B4", resolution=512): - super().__init__() - - def GBlock(in_channel, out_channel, n_class, z_dim, use_attention): - return ResBlock(in_channel, out_channel, n_class=n_class, z_dim=z_dim, 
use_attention=use_attention) - - self.embed_y = nn.Linear(n_class, 128, bias=False) - - self.chn = chn - self.resolution = resolution - self.blocks_with_attention = set(blocks_with_attention.split(",")) - self.blocks_with_attention.discard('') - - gblock = [] - in_channels, out_channels = self.get_in_out_channels() - self.num_split = len(in_channels) + 1 - - z_dim = code_dim//self.num_split + 128 - self.noise_fc = SpectralNorm(nn.Linear(code_dim//self.num_split, 4 * 4 * in_channels[0])) - - self.sa_ids = [int(s.split('B')[-1]) for s in self.blocks_with_attention] - - for i, (nc_in, nc_out) in enumerate(zip(in_channels, out_channels)): - gblock.append(GBlock(nc_in, nc_out, n_class=n_class, z_dim=z_dim, use_attention=(i+1) in self.sa_ids)) - self.blocks = nn.ModuleList(gblock) - - self.output_layer_bn = BatchNorm2d(1 * chn, eps=1e-5) - self.output_layer_conv = SpectralNorm(nn.Conv2d(1 * chn, 3, [3, 3], padding=1)) - - self.z_dim = code_dim - self.c_dim = n_class - self.n_level = self.num_split - - def get_in_out_channels(self): - resolution = self.resolution - if resolution == 1024: - channel_multipliers = [16, 16, 8, 8, 4, 2, 1, 1, 1] - elif resolution == 512: - channel_multipliers = [16, 16, 8, 8, 4, 2, 1, 1] - elif resolution == 256: - channel_multipliers = [16, 16, 8, 8, 4, 2, 1] - elif resolution == 128: - channel_multipliers = [16, 16, 8, 4, 2, 1] - elif resolution == 64: - channel_multipliers = [16, 16, 8, 4, 2] - elif resolution == 32: - channel_multipliers = [4, 4, 4, 4] - else: - raise ValueError("Unsupported resolution: {}".format(resolution)) - in_channels = [self.chn * c for c in channel_multipliers[:-1]] - out_channels = [self.chn * c for c in channel_multipliers[1:]] - return in_channels, out_channels - - def forward(self, input, class_id): - codes = torch.chunk(input, self.num_split, 1) - class_emb = self.embed_y(class_id) # 128 - out = self.noise_fc(codes[0]) - out = torch.Tensor.permute(torch.reshape(out,(out.shape[0], 4, 4, -1)),(0, 3, 1, 2)) - for i, (code, gblock) in enumerate(zip(codes[1:], self.blocks)): - condition = torch.cat([code, class_emb], 1) - out = gblock(out, condition) - - out = self.output_layer_bn(out) - out = torch.relu(out) - out = self.output_layer_conv(out) - - return (torch.tanh(out) + 1) / 2 - - def forward_w(self, ws): - out = self.noise_fc(ws[0]) - out = torch.Tensor.permute(torch.reshape(out,(out.shape[0], 4, 4, -1)),(0, 3, 1, 2)) - for i, (w, gblock) in enumerate(zip(ws[1:], self.blocks)): - out = gblock(out, w) - - out = self.output_layer_bn(out) - out = torch.relu(out) - out = self.output_layer_conv(out) - - return (torch.tanh(out) + 1) / 2 - - def forward_wp(self, z0, gammas, betas): - out = self.noise_fc(z0) - out = torch.Tensor.permute(torch.reshape(out,(out.shape[0], 4, 4, -1)),(0, 3, 1, 2)) - for i, (gamma, beta, gblock) in enumerate(zip(gammas, betas, self.blocks)): - out = gblock(out, [[gamma[0], beta[0]], [gamma[1], beta[1]]]) - - out = self.output_layer_bn(out) - out = torch.relu(out) - out = self.output_layer_conv(out) - - return (torch.tanh(out) + 1) / 2 diff --git a/spaces/Gradio-Blocks/EmojiGAN/dnnlib/util.py b/spaces/Gradio-Blocks/EmojiGAN/dnnlib/util.py deleted file mode 100644 index 76725336d01e75e1c68daa88be47f4fde0bbc63b..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/EmojiGAN/dnnlib/util.py +++ /dev/null @@ -1,477 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. 
-# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Miscellaneous utility classes and functions.""" - -import ctypes -import fnmatch -import importlib -import inspect -import numpy as np -import os -import shutil -import sys -import types -import io -import pickle -import re -import requests -import html -import hashlib -import glob -import tempfile -import urllib -import urllib.request -import uuid - -from distutils.util import strtobool -from typing import Any, List, Tuple, Union - - -# Util classes -# ------------------------------------------------------------------------------------------ - - -class EasyDict(dict): - """Convenience class that behaves like a dict but allows access with the attribute syntax.""" - - def __getattr__(self, name: str) -> Any: - try: - return self[name] - except KeyError: - raise AttributeError(name) - - def __setattr__(self, name: str, value: Any) -> None: - self[name] = value - - def __delattr__(self, name: str) -> None: - del self[name] - - -class Logger(object): - """Redirect stderr to stdout, optionally print stdout to a file, and optionally force flushing on both stdout and the file.""" - - def __init__(self, file_name: str = None, file_mode: str = "w", should_flush: bool = True): - self.file = None - - if file_name is not None: - self.file = open(file_name, file_mode) - - self.should_flush = should_flush - self.stdout = sys.stdout - self.stderr = sys.stderr - - sys.stdout = self - sys.stderr = self - - def __enter__(self) -> "Logger": - return self - - def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: - self.close() - - def write(self, text: Union[str, bytes]) -> None: - """Write text to stdout (and a file) and optionally flush.""" - if isinstance(text, bytes): - text = text.decode() - if len(text) == 0: # workaround for a bug in VSCode debugger: sys.stdout.write(''); sys.stdout.flush() => crash - return - - if self.file is not None: - self.file.write(text) - - self.stdout.write(text) - - if self.should_flush: - self.flush() - - def flush(self) -> None: - """Flush written text to both stdout and a file, if open.""" - if self.file is not None: - self.file.flush() - - self.stdout.flush() - - def close(self) -> None: - """Flush, close possible files, and remove stdout/stderr mirroring.""" - self.flush() - - # if using multiple loggers, prevent closing in wrong order - if sys.stdout is self: - sys.stdout = self.stdout - if sys.stderr is self: - sys.stderr = self.stderr - - if self.file is not None: - self.file.close() - self.file = None - - -# Cache directories -# ------------------------------------------------------------------------------------------ - -_dnnlib_cache_dir = None - -def set_cache_dir(path: str) -> None: - global _dnnlib_cache_dir - _dnnlib_cache_dir = path - -def make_cache_dir_path(*paths: str) -> str: - if _dnnlib_cache_dir is not None: - return os.path.join(_dnnlib_cache_dir, *paths) - if 'DNNLIB_CACHE_DIR' in os.environ: - return os.path.join(os.environ['DNNLIB_CACHE_DIR'], *paths) - if 'HOME' in os.environ: - return os.path.join(os.environ['HOME'], '.cache', 'dnnlib', *paths) - if 'USERPROFILE' in os.environ: - return os.path.join(os.environ['USERPROFILE'], '.cache', 'dnnlib', *paths) - return 
os.path.join(tempfile.gettempdir(), '.cache', 'dnnlib', *paths) - -# Small util functions -# ------------------------------------------------------------------------------------------ - - -def format_time(seconds: Union[int, float]) -> str: - """Convert the seconds to human readable string with days, hours, minutes and seconds.""" - s = int(np.rint(seconds)) - - if s < 60: - return "{0}s".format(s) - elif s < 60 * 60: - return "{0}m {1:02}s".format(s // 60, s % 60) - elif s < 24 * 60 * 60: - return "{0}h {1:02}m {2:02}s".format(s // (60 * 60), (s // 60) % 60, s % 60) - else: - return "{0}d {1:02}h {2:02}m".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24, (s // 60) % 60) - - -def ask_yes_no(question: str) -> bool: - """Ask the user the question until the user inputs a valid answer.""" - while True: - try: - print("{0} [y/n]".format(question)) - return strtobool(input().lower()) - except ValueError: - pass - - -def tuple_product(t: Tuple) -> Any: - """Calculate the product of the tuple elements.""" - result = 1 - - for v in t: - result *= v - - return result - - -_str_to_ctype = { - "uint8": ctypes.c_ubyte, - "uint16": ctypes.c_uint16, - "uint32": ctypes.c_uint32, - "uint64": ctypes.c_uint64, - "int8": ctypes.c_byte, - "int16": ctypes.c_int16, - "int32": ctypes.c_int32, - "int64": ctypes.c_int64, - "float32": ctypes.c_float, - "float64": ctypes.c_double -} - - -def get_dtype_and_ctype(type_obj: Any) -> Tuple[np.dtype, Any]: - """Given a type name string (or an object having a __name__ attribute), return matching Numpy and ctypes types that have the same size in bytes.""" - type_str = None - - if isinstance(type_obj, str): - type_str = type_obj - elif hasattr(type_obj, "__name__"): - type_str = type_obj.__name__ - elif hasattr(type_obj, "name"): - type_str = type_obj.name - else: - raise RuntimeError("Cannot infer type name from input") - - assert type_str in _str_to_ctype.keys() - - my_dtype = np.dtype(type_str) - my_ctype = _str_to_ctype[type_str] - - assert my_dtype.itemsize == ctypes.sizeof(my_ctype) - - return my_dtype, my_ctype - - -def is_pickleable(obj: Any) -> bool: - try: - with io.BytesIO() as stream: - pickle.dump(obj, stream) - return True - except: - return False - - -# Functionality to import modules/objects by name, and call functions by name -# ------------------------------------------------------------------------------------------ - -def get_module_from_obj_name(obj_name: str) -> Tuple[types.ModuleType, str]: - """Searches for the underlying module behind the name to some python object. - Returns the module and the object name (original name with module part removed).""" - - # allow convenience shorthands, substitute them by full names - obj_name = re.sub("^np.", "numpy.", obj_name) - obj_name = re.sub("^tf.", "tensorflow.", obj_name) - - # list alternatives for (module_name, local_obj_name) - parts = obj_name.split(".") - name_pairs = [(".".join(parts[:i]), ".".join(parts[i:])) for i in range(len(parts), 0, -1)] - - # try each alternative in turn - for module_name, local_obj_name in name_pairs: - try: - module = importlib.import_module(module_name) # may raise ImportError - get_obj_from_module(module, local_obj_name) # may raise AttributeError - return module, local_obj_name - except: - pass - - # maybe some of the modules themselves contain errors? 
- for module_name, _local_obj_name in name_pairs: - try: - importlib.import_module(module_name) # may raise ImportError - except ImportError: - if not str(sys.exc_info()[1]).startswith("No module named '" + module_name + "'"): - raise - - # maybe the requested attribute is missing? - for module_name, local_obj_name in name_pairs: - try: - module = importlib.import_module(module_name) # may raise ImportError - get_obj_from_module(module, local_obj_name) # may raise AttributeError - except ImportError: - pass - - # we are out of luck, but we have no idea why - raise ImportError(obj_name) - - -def get_obj_from_module(module: types.ModuleType, obj_name: str) -> Any: - """Traverses the object name and returns the last (rightmost) python object.""" - if obj_name == '': - return module - obj = module - for part in obj_name.split("."): - obj = getattr(obj, part) - return obj - - -def get_obj_by_name(name: str) -> Any: - """Finds the python object with the given name.""" - module, obj_name = get_module_from_obj_name(name) - return get_obj_from_module(module, obj_name) - - -def call_func_by_name(*args, func_name: str = None, **kwargs) -> Any: - """Finds the python object with the given name and calls it as a function.""" - assert func_name is not None - func_obj = get_obj_by_name(func_name) - assert callable(func_obj) - return func_obj(*args, **kwargs) - - -def construct_class_by_name(*args, class_name: str = None, **kwargs) -> Any: - """Finds the python class with the given name and constructs it with the given arguments.""" - return call_func_by_name(*args, func_name=class_name, **kwargs) - - -def get_module_dir_by_obj_name(obj_name: str) -> str: - """Get the directory path of the module containing the given object name.""" - module, _ = get_module_from_obj_name(obj_name) - return os.path.dirname(inspect.getfile(module)) - - -def is_top_level_function(obj: Any) -> bool: - """Determine whether the given object is a top-level function, i.e., defined at module scope using 'def'.""" - return callable(obj) and obj.__name__ in sys.modules[obj.__module__].__dict__ - - -def get_top_level_function_name(obj: Any) -> str: - """Return the fully-qualified name of a top-level function.""" - assert is_top_level_function(obj) - module = obj.__module__ - if module == '__main__': - module = os.path.splitext(os.path.basename(sys.modules[module].__file__))[0] - return module + "." + obj.__name__ - - -# File system helpers -# ------------------------------------------------------------------------------------------ - -def list_dir_recursively_with_ignore(dir_path: str, ignores: List[str] = None, add_base_to_relative: bool = False) -> List[Tuple[str, str]]: - """List all files recursively in a given directory while ignoring given file and directory names. 
- Returns list of tuples containing both absolute and relative paths.""" - assert os.path.isdir(dir_path) - base_name = os.path.basename(os.path.normpath(dir_path)) - - if ignores is None: - ignores = [] - - result = [] - - for root, dirs, files in os.walk(dir_path, topdown=True): - for ignore_ in ignores: - dirs_to_remove = [d for d in dirs if fnmatch.fnmatch(d, ignore_)] - - # dirs need to be edited in-place - for d in dirs_to_remove: - dirs.remove(d) - - files = [f for f in files if not fnmatch.fnmatch(f, ignore_)] - - absolute_paths = [os.path.join(root, f) for f in files] - relative_paths = [os.path.relpath(p, dir_path) for p in absolute_paths] - - if add_base_to_relative: - relative_paths = [os.path.join(base_name, p) for p in relative_paths] - - assert len(absolute_paths) == len(relative_paths) - result += zip(absolute_paths, relative_paths) - - return result - - -def copy_files_and_create_dirs(files: List[Tuple[str, str]]) -> None: - """Takes in a list of tuples of (src, dst) paths and copies files. - Will create all necessary directories.""" - for file in files: - target_dir_name = os.path.dirname(file[1]) - - # will create all intermediate-level directories - if not os.path.exists(target_dir_name): - os.makedirs(target_dir_name) - - shutil.copyfile(file[0], file[1]) - - -# URL helpers -# ------------------------------------------------------------------------------------------ - -def is_url(obj: Any, allow_file_urls: bool = False) -> bool: - """Determine whether the given object is a valid URL string.""" - if not isinstance(obj, str) or not "://" in obj: - return False - if allow_file_urls and obj.startswith('file://'): - return True - try: - res = requests.compat.urlparse(obj) - if not res.scheme or not res.netloc or not "." in res.netloc: - return False - res = requests.compat.urlparse(requests.compat.urljoin(obj, "/")) - if not res.scheme or not res.netloc or not "." in res.netloc: - return False - except: - return False - return True - - -def open_url(url: str, cache_dir: str = None, num_attempts: int = 10, verbose: bool = True, return_filename: bool = False, cache: bool = True) -> Any: - """Download the given URL and return a binary-mode file object to access the data.""" - assert num_attempts >= 1 - assert not (return_filename and (not cache)) - - # Doesn't look like an URL scheme so interpret it as a local filename. - if not re.match('^[a-z]+://', url): - return url if return_filename else open(url, "rb") - - # Handle file URLs. This code handles unusual file:// patterns that - # arise on Windows: - # - # file:///c:/foo.txt - # - # which would translate to a local '/c:/foo.txt' filename that's - # invalid. Drop the forward slash for such pathnames. - # - # If you touch this code path, you should test it on both Linux and - # Windows. - # - # Some internet resources suggest using urllib.request.url2pathname() but - # but that converts forward slashes to backslashes and this causes - # its own set of problems. - if url.startswith('file://'): - filename = urllib.parse.urlparse(url).path - if re.match(r'^/[a-zA-Z]:', filename): - filename = filename[1:] - return filename if return_filename else open(filename, "rb") - - assert is_url(url) - - # Lookup from cache. 
- if cache_dir is None: - cache_dir = make_cache_dir_path('downloads') - - url_md5 = hashlib.md5(url.encode("utf-8")).hexdigest() - if cache: - cache_files = glob.glob(os.path.join(cache_dir, url_md5 + "_*")) - if len(cache_files) == 1: - filename = cache_files[0] - return filename if return_filename else open(filename, "rb") - - # Download. - url_name = None - url_data = None - with requests.Session() as session: - if verbose: - print("Downloading %s ..." % url, end="", flush=True) - for attempts_left in reversed(range(num_attempts)): - try: - with session.get(url) as res: - res.raise_for_status() - if len(res.content) == 0: - raise IOError("No data received") - - if len(res.content) < 8192: - content_str = res.content.decode("utf-8") - if "download_warning" in res.headers.get("Set-Cookie", ""): - links = [html.unescape(link) for link in content_str.split('"') if "export=download" in link] - if len(links) == 1: - url = requests.compat.urljoin(url, links[0]) - raise IOError("Google Drive virus checker nag") - if "Google Drive - Quota exceeded" in content_str: - raise IOError("Google Drive download quota exceeded -- please try again later") - - match = re.search(r'filename="([^"]*)"', res.headers.get("Content-Disposition", "")) - url_name = match[1] if match else url - url_data = res.content - if verbose: - print(" done") - break - except KeyboardInterrupt: - raise - except: - if not attempts_left: - if verbose: - print(" failed") - raise - if verbose: - print(".", end="", flush=True) - - # Save to cache. - if cache: - safe_name = re.sub(r"[^0-9a-zA-Z-._]", "_", url_name) - cache_file = os.path.join(cache_dir, url_md5 + "_" + safe_name) - temp_file = os.path.join(cache_dir, "tmp_" + uuid.uuid4().hex + "_" + url_md5 + "_" + safe_name) - os.makedirs(cache_dir, exist_ok=True) - with open(temp_file, "wb") as f: - f.write(url_data) - os.replace(temp_file, cache_file) # atomic - if return_filename: - return cache_file - - # Return data as file object. - assert not return_filename - return io.BytesIO(url_data) diff --git a/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/upfirdn2d.h b/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/upfirdn2d.h deleted file mode 100644 index c9e2032bcac9d2abde7a75eea4d812da348afadd..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/upfirdn2d.h +++ /dev/null @@ -1,59 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct upfirdn2d_kernel_params -{ - const void* x; - const float* f; - void* y; - - int2 up; - int2 down; - int2 pad0; - int flip; - float gain; - - int4 inSize; // [width, height, channel, batch] - int4 inStride; - int2 filterSize; // [width, height] - int2 filterStride; - int4 outSize; // [width, height, channel, batch] - int4 outStride; - int sizeMinor; - int sizeMajor; - - int loopMinor; - int loopMajor; - int loopX; - int launchMinor; - int launchMajor; -}; - -//------------------------------------------------------------------------ -// CUDA kernel specialization. 
- -struct upfirdn2d_kernel_spec -{ - void* kernel; - int tileOutW; - int tileOutH; - int loopMinor; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template upfirdn2d_kernel_spec choose_upfirdn2d_kernel(const upfirdn2d_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/Gradio-Blocks/anime-colorization/datasets/README.md b/spaces/Gradio-Blocks/anime-colorization/datasets/README.md deleted file mode 100644 index 148cfea9a04f0361543b471772f94a9ce3d4c484..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/anime-colorization/datasets/README.md +++ /dev/null @@ -1,37 +0,0 @@ -# Downloading datasets - -This directory includes instructions and scripts for downloading ImageNet, LSUN bedrooms, and CIFAR-10 for use in this codebase. - -## ImageNet-64 - -To download unconditional ImageNet-64, go to [this page on image-net.org](http://www.image-net.org/small/download.php) and click on "Train (64x64)". Simply download the file and unzip it, and use the resulting directory as the data directory (the `--data_dir` argument for the training script). - -## Class-conditional ImageNet - -For our class-conditional models, we use the official ILSVRC2012 dataset with manual center cropping and downsampling. To obtain this dataset, navigate to [this page on image-net.org](http://www.image-net.org/challenges/LSVRC/2012/downloads) and sign in (or create an account if you do not already have one). Then click on the link reading "Training images (Task 1 & 2)". This is a 138GB tar file containing 1000 sub-tar files, one per class. - -Once the file is downloaded, extract it and look inside. You should see 1000 `.tar` files. You need to extract each of these, which may be impractical to do by hand on your operating system. To automate the process on a Unix-based system, you can `cd` into the directory and run this short shell script: - -``` -for file in *.tar; do tar xf "$file"; rm "$file"; done -``` - -This will extract and remove each tar file in turn. - -Once all of the images have been extracted, the resulting directory should be usable as a data directory (the `--data_dir` argument for the training script). The filenames should all start with WNID (class ids) followed by underscores, like `n01440764_2708.JPEG`. Conveniently (but not by accident) this is how the automated data-loader expects to discover class labels. - -## CIFAR-10 - -For CIFAR-10, we created a script [cifar10.py](cifar10.py) that creates `cifar_train` and `cifar_test` directories. These directories contain files named like `truck_49997.png`, so that the class name is discernable to the data loader. - -The `cifar_train` and `cifar_test` directories can be passed directly to the training scripts via the `--data_dir` argument. - -## LSUN bedroom - -To download and pre-process LSUN bedroom, clone [fyu/lsun](https://github.com/fyu/lsun) on GitHub and run their download script `python3 download.py bedroom`. The result will be an "lmdb" database named like `bedroom_train_lmdb`. You can pass this to our [lsun_bedroom.py](lsun_bedroom.py) script like so: - -``` -python lsun_bedroom.py bedroom_train_lmdb lsun_train_output_dir -``` - -This creates a directory called `lsun_train_output_dir`. This directory can be passed to the training scripts via the `--data_dir` argument. 
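
As a rough illustration of the filename convention this README relies on (the class id is the text before the first underscore, e.g. `n01440764_2708.JPEG` or `truck_49997.png`), here is a minimal sketch of how a loader could recover labels from such names. The helper names and the split-on-first-underscore rule are assumptions for illustration, not the codebase's actual data loader:

```
import os

def infer_class_from_filename(path: str) -> str:
    # "n01440764_2708.JPEG" -> "n01440764", "truck_49997.png" -> "truck"
    return os.path.basename(path).split("_")[0]

def build_label_index(data_dir: str) -> dict:
    # Map each distinct class prefix to an integer label, sorted for determinism.
    names = sorted({infer_class_from_filename(f) for f in os.listdir(data_dir)})
    return {name: i for i, name in enumerate(names)}
```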
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_r101_fpn_20e_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_r101_fpn_20e_coco.py deleted file mode 100644 index 9cb3581910f74063eb1c62b9345a6493098d4a4a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_r101_fpn_20e_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './cascade_rcnn_r50_fpn_20e_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fpg/retinanet_r50_fpg-chn128_crop640_50e_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/fpg/retinanet_r50_fpg-chn128_crop640_50e_coco.py deleted file mode 100644 index 9a6cf7e56a4f23a42d3905560a9b8035d6d935ff..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fpg/retinanet_r50_fpg-chn128_crop640_50e_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = 'retinanet_r50_fpg_crop640_50e_coco.py' - -model = dict( - neck=dict(out_channels=128, inter_channels=128), - bbox_head=dict(in_channels=128)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/swin/mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/swin/mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_1x_coco.py deleted file mode 100644 index dd42cba7ca95c008218e966aca6becb2a2dabc8d..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/swin/mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_1x_coco.py +++ /dev/null @@ -1,80 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_swin_fpn.py', - '../_base_/datasets/coco_instance.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] - -model = dict( - backbone=dict( - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - ape=False, - drop_path_rate=0.1, - patch_norm=True, - use_checkpoint=False - ), - neck=dict(in_channels=[96, 192, 384, 768])) - -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -# augmentation strategy originates from DETR / Sparse RCNN -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='AutoAugment', - policies=[ - [ - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333), - (608, 1333), (640, 1333), (672, 1333), (704, 1333), - (736, 1333), (768, 1333), (800, 1333)], - multiscale_mode='value', - keep_ratio=True) - ], - [ - dict(type='Resize', - img_scale=[(400, 1333), (500, 1333), (600, 1333)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomCrop', - crop_type='absolute_range', - crop_size=(384, 600), - allow_negative_crop=True), - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), - (576, 1333), (608, 1333), (640, 1333), - (672, 1333), (704, 1333), (736, 1333), - (768, 1333), (800, 1333)], - multiscale_mode='value', - override=True, - keep_ratio=True) - ] - ]), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -data = dict(train=dict(pipeline=train_pipeline)) - -optimizer = 
dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) -lr_config = dict(step=[8, 11]) -runner = dict(type='EpochBasedRunnerAmp', max_epochs=12) - -# do not use mmdet version fp16 -fp16 = None -optimizer_config = dict( - type="DistOptimizerHook", - update_interval=1, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - use_fp16=True, -) diff --git a/spaces/HLasse/textdescriptives/app.py b/spaces/HLasse/textdescriptives/app.py deleted file mode 100644 index 677cbe9d0dcba8e986b9ce089e7d655ceda0d5e3..0000000000000000000000000000000000000000 --- a/spaces/HLasse/textdescriptives/app.py +++ /dev/null @@ -1,257 +0,0 @@ -""" -Dashboard for showcasing extraction of text metrics with textdescriptives. - -""" - -from io import StringIO - -import pandas as pd -import streamlit as st -import textdescriptives as td - -from data_viewer import DataViewer -from process_text import text_to_metrics -from options import ( - all_model_size_options_pretty_to_short, - available_model_size_options, - language_options, - metrics_options, -) - -################ -# Introduction # -################ - - -col1, col2 = st.columns([9, 2]) -with col1: - st.title("Extract Text Statistics") -with col2: - st.image( - "https://github.com/HLasse/TextDescriptives/raw/main/docs/_static/icon.png", - width=125, - ) - -st.write( - "Calculate a large variety of statistics from text via the " - "[**TextDescriptives**](https://github.com/HLasse/TextDescriptives) python package " - f"(v/{td.__version__}) and download the results as a .csv file. " - "Includes descriptive statistics and metrics related to readability, " - "information theory, text coherence and text quality." -) - -st.write( - "The source code for this application can be found on [**GitHub**](https://github.com/HLasse/TextDescriptives_app). " - "If you have feedback, please open an [issue](https://github.com/HLasse/textdescriptives_app/issues)." -) - -st.caption( - "Hansen, L., Olsen, L. R., & Enevoldsen, K. (2023). TextDescriptives: A Python package for " - "calculating a large variety of metrics from text. [Journal of Open Source Software, 8(84), " - "5153, https://doi.org/10.21105/joss.05153](https://doi.org/10.21105/joss.05153)" -) - - -############ -# Settings # -############ - - -input_choice = st.radio( - label="Input", options=["Enter text", "Upload file(s)"], index=0, horizontal=True -) - -with st.form(key="settings_form"): - split_by_line = st.checkbox(label="Split by newline", value=True) - - file_name_to_text_string = {} - - if input_choice == "Upload file(s)": - uploaded_files = st.file_uploader( - label="Choose a .txt file", type=["txt"], accept_multiple_files=True - ) - - if uploaded_files is not None and len(uploaded_files) > 0: - # To convert to a string based IO: - file_name_to_text_string = { - file.name: StringIO(file.getvalue().decode("utf-8")).read() - for file in uploaded_files - } - - else: - default_text = """Hello, morning dew. The grass whispers low. -I'm here to dance. The gentle breeze does show. -Good morning, world. The birds sing in delight. -Let's spread our wings. The butterflies take flight. 
-Nature's chorus sings, a symphony of light.""" - - file_name_to_text_string = { - "input": st.text_area( - label="Enter text", value=default_text, height=145, max_chars=None - ) - } - - # Row of selectors - col1, col2 = st.columns([1, 1]) - - with col1: - # Selection of language - language_pretty = st.selectbox( - label="Language", - options=list(language_options().keys()), - index=5, - key="language_selector", - ) - - language_short = language_options()[language_pretty] - - with col2: - # Selection of model size - model_size_pretty = st.selectbox( - label="Model Size", - options=available_model_size_options(lang="all"), - index=0, - key="size_selector", - ) - - model_size_short = all_model_size_options_pretty_to_short()[model_size_pretty] - - # Multiselection of metrics - metrics = st.multiselect( - label="Metrics", options=metrics_options(), default=metrics_options() - ) - - st.write( - "See the [**documentation**](https://hlasse.github.io/TextDescriptives/) for " - "information on the available metrics." - ) - - # This shouldn't happen but better safe than sorry - if isinstance(metrics, list) and not metrics: - metrics = None - - apply_settings_button = st.form_submit_button(label="Apply") - - -############# -# Apply NLP # -############# - - -if apply_settings_button and len(file_name_to_text_string) > 0: - if model_size_pretty not in available_model_size_options(lang=language_short): - st.write( - "**Sorry!** The chosen *model size* is not available in this language. Please try another." - ) - else: - # Extract metrics for each text - output_df = pd.concat( - [ - text_to_metrics( - string=string, - language_short=language_short, - model_size_short=model_size_short, - metrics=metrics, - split_by_line=split_by_line, - filename=filename if "Upload" in input_choice else None, - ) - for filename, string in file_name_to_text_string.items() - ], - ignore_index=True, - ) - - ################### - # Present Results # - ################### - - # Create 2 columns with 1) the output header - # and 2) a download button - DataViewer()._header_and_download( - header="The calculated metrics", - data=output_df, - file_name="text_metrics.csv", - ) - - st.write("**Note**: This data frame has been transposed for readability.") - output_df = output_df.transpose().reset_index() - output_df.columns = ["Metric"] + [str(c) for c in list(output_df.columns)[1:]] - st.dataframe(data=output_df, use_container_width=True) - - -############################ -# Code For Reproducibility # -############################ - - -with st.expander("See python code"): - st.code( - """ -# Note: This is the code for a single text file -# The actual code is slightly more complex -# to allow processing multiple files at once - -import textdescriptives as td - -# Given a string of text and the settings -text = "..." -language = "..." -model_size = "..." -metrics = [...] -split_by_newline = True - -# Remove whitespace from both ends of the string -text = text.strip() - -# When asked, split by newlines -if split_by_newline: - lines = text.split("\\n") -else: - lines = [text] - -# Remove empty lines -# E.g. 
due to consecutive newlines -lines = [l for l in lines if l] - -# Extract metrics for each line -extracted_metrics = td.extract_metrics( - text=lines, - lang=language, - spacy_model_size=model_size, - metrics=metrics -) - -""", - language="python", - ) - -####### -# FAQ # -####### - -st.subheader("Frequently Asked Questions (FAQ)") - -with st.expander("What does the 'Split by newline' option do?"): - st.write( - """ - When the `Split by newline` option is `enabled`, the metrics calculation is - performed separately for each paragraph. I.e. whenever there's a line break, - we split the text. - - When this option is `disabled`, the entire text is processed at once. - """ - ) - -with st.expander( - "Why do I get a warning/error message for certain languages or model sizes?" -): - st.write( - """ - Some combinations of languages, model sizes, and metrics are not currently supported in the app. - While we *are* working on this, you may currently see a red box - with an error message after clicking `Apply`. - - If you need this language and/or model size to work for your project, - please open an [issue](https://github.com/HLasse/textdescriptives_app/issues). - This may cause us to prioritize supporting your use case. - """ - ) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/models/latent_transformer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/models/latent_transformer.py deleted file mode 100644 index 6a825301a452bd935deafdaf78fa2427ca9a469e..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/models/latent_transformer.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Any, Dict, Optional - -import torch.nn as nn -from fairseq.models.fairseq_encoder import EncoderOut -from fairseq.models.transformer import TransformerDecoder, TransformerEncoder -from fairseq.modules import TransformerDecoderLayer, TransformerEncoderLayer -from torch import Tensor - -from ..modules.latent_layers import LayerSelect - - -class LatentTransformerEncoder(TransformerEncoder): - """Latent depth (https://arxiv.org/abs/2009.13102) implemented in - TransformerEncoder. - """ - - def __init__(self, args, dictionary, embed_tokens, num_logits=1): - self.num_logits = num_logits - self.num_layers = args.encoder_layers - super().__init__(args, dictionary, embed_tokens) - self.layer_select = LayerSelect( - num_layers=self.num_layers, - num_logits=self.num_logits, - soft_select=getattr(args, "soft_select", False), - sampling_tau=getattr(args, "sampling_tau", 5.), - ) - self.lang_idx = None - self.layers = nn.ModuleList( - [self._build_encoder_layer(args, idx) for idx in range(args.encoder_layers)] - ) - - def set_lang_idx(self, lang_idx): - self.lang_idx = lang_idx - - def _build_encoder_layer(self, args, idx=None): - return LatentTransformerEncoderLayer(args, idx, layer_select=self.layer_select) - - def forward(self, src_tokens, src_lengths, return_all_hiddens: bool = False): - self.layer_select.sample(self.lang_idx) - return super().forward(src_tokens, src_lengths, return_all_hiddens) - - -class LatentTransformerEncoderLayer(TransformerEncoderLayer): - """Encoder layer with each (non_residual) block weighted by samples of Bernouli - or Gumbel Signmoid samples. 
- - Args: - args (argparse.Namespace): parsed command-line arguments from standard - TransformerEncoderLayer. - idx (int): layer index (used to retrieve samples). - layer_select (LayerSelect, optional): instance of LayerSelect module with logits - parameters and sampling method. - """ - - def __init__(self, args, idx, layer_select=None): - super().__init__(args) - self.idx = idx - self.layer_select = layer_select - - def residual_connection(self, x, residual): - return residual + x * self.layer_select(self.idx) - - -class LatentTransformerDecoder(TransformerDecoder): - """Latent depth (https://arxiv.org/abs/2009.13102) implemented in - TransformerDecoder. - """ - - def __init__( - self, args, dictionary, embed_tokens, no_encoder_attn=False, num_logits=1 - ): - self.num_logits = num_logits - self.num_layers = args.decoder_layers - super().__init__( - args, dictionary, embed_tokens, no_encoder_attn=no_encoder_attn - ) - self.layer_select = LayerSelect( - num_layers=self.num_layers, - num_logits=self.num_logits, - soft_select=getattr(args, "soft_select", False), - sampling_tau=getattr(args, "sampling_tau", 5.), - ) - self.lang_idx = None - self.layers = nn.ModuleList( - [ - self._build_decoder_layer(args, no_encoder_attn, idx) - for idx in range(args.decoder_layers) - ] - ) - - def set_lang_idx(self, lang_idx): - self.lang_idx = lang_idx - - def _build_decoder_layer(self, args, no_encoder_attn=False, idx=None): - return LatentTransformerDecoderLayer( - args, idx, layer_select=self.layer_select, no_encoder_attn=no_encoder_attn - ) - - def forward( - self, - prev_output_tokens, - encoder_out: Optional[EncoderOut] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - features_only: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - src_lengths: Optional[Any] = None, - return_all_hiddens: bool = False, - ): - self.layer_select.sample(self.lang_idx) - return super().forward( - prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - incremental_state=incremental_state, - features_only=features_only, - alignment_layer=alignment_layer, - src_lengths=src_lengths, - return_all_hiddens=return_all_hiddens, - ) - - -class LatentTransformerDecoderLayer(TransformerDecoderLayer): - """Decoder layer with each (non_residual) block weighted by samples of Bernouli - or Gumbel Signmoid samples. - - Args: - args (argparse.Namespace): parsed command-line arguments from standard - TransformerDecoderLayer. - idx (int): layer index (used to retrieve samples). - layer_select (LayerSelect, optional): instance of LayerSelect module with logits - parameters and sampling method. - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). 
- - """ - - def __init__( - self, - args, - idx, - layer_select=None, - no_encoder_attn=False, - add_bias_kv=False, - add_zero_attn=False, - ): - super().__init__(args, no_encoder_attn, add_bias_kv, add_zero_attn) - self.idx = idx - self.layer_select = layer_select - - def residual_connection(self, x, residual): - return residual + x * self.layer_select(self.idx) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/pq/modules/qconv.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/pq/modules/qconv.py deleted file mode 100644 index d15ec192e8cda6265a198e583a9bf7fb194dd129..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/pq/modules/qconv.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.modules.utils import _pair - - -class PQConv2d(nn.Module): - """ - Quantized counterpart of nn.Conv2d module. Stores the centroid, the assignments - and the non-quantized biases. The full weight is re-instantiated at each forward - pass and autograd automatically computes the gradients with respect to the - centroids. - - Args: - - centroids: centroids of size n_centroids x block_size - - assignments: assignments of the centroids to the subvectors - of size self.out_channels x n_blocks - - bias: the non-quantized bias, must be either torch.Tensor or None - - Remarks: - - We refer the reader to the official documentation of the nn.Conv2d module - for the other arguments and the behavior of the module. - - Performance tests on GPU show that this implementation is 10% slower than - the non-quantized nn.Conv2d module for a standard training loop. - - During the backward, the gradients are averaged by cluster and not summed. - This explains the hook registered to the centroids. 
- """ - - def __init__( - self, - centroids, - assignments, - bias, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - padding_mode="zeros", - ): - super(PQConv2d, self).__init__() - self.block_size = centroids.size(1) - self.n_centroids = centroids.size(0) - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.padding_mode = padding_mode - # check compatibility - if in_channels // groups * np.prod(self.kernel_size) % self.block_size != 0: - raise ValueError("Wrong PQ sizes") - if len(assignments) % out_channels != 0: - raise ValueError("Wrong PQ sizes") - if in_channels % groups != 0: - raise ValueError("in_channels must be divisible by groups") - if out_channels % groups != 0: - raise ValueError("out_channels must be divisible by groups") - # define parameters - self.centroids = nn.Parameter(centroids, requires_grad=True) - self.register_buffer("assignments", assignments) - self.register_buffer("counts", torch.bincount(assignments).type_as(centroids)) - if bias is not None: - self.bias = nn.Parameter(bias) - else: - self.register_parameter("bias", None) - # register hook for averaging gradients per centroids instead of summing - self.centroids.register_hook(lambda x: x / self.counts[:, None]) - - @property - def weight(self): - return ( - self.centroids[self.assignments] - .reshape(-1, self.out_channels, self.block_size) - .permute(1, 0, 2) - .reshape( - self.out_channels, self.in_channels // self.groups, *self.kernel_size - ) - ) - - def forward(self, x): - return F.conv2d( - x, - self.weight, - self.bias, - self.stride, - self.padding, - self.dilation, - self.groups, - ) - - def extra_repr(self): - s = "{in_channels}, {out_channels}, kernel_size={kernel_size}, stride={stride}" - if self.padding != (0,) * len(self.padding): - s += ", padding={padding}" - if self.dilation != (1,) * len(self.dilation): - s += ", dilation={dilation}" - if self.groups != 1: - s += ", groups={groups}" - if self.bias is None: - s += ", bias=False" - if self.padding_mode != "zeros": - s += ", padding_mode={padding_mode}" - s += ", n_centroids={n_centroids}, block_size={block_size}" - return s.format(**self.__dict__) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/speech_recognition/test_cross_entropy.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/speech_recognition/test_cross_entropy.py deleted file mode 100644 index b05400ed95e22762c3e3e5e8fd3ebfa6caf1e325..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/speech_recognition/test_cross_entropy.py +++ /dev/null @@ -1,37 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from examples.speech_recognition.criterions.cross_entropy_acc import ( - CrossEntropyWithAccCriterion, -) - -from .asr_test_base import CrossEntropyCriterionTestBase - - -class CrossEntropyWithAccCriterionTest(CrossEntropyCriterionTestBase): - def setUp(self): - self.criterion_cls = CrossEntropyWithAccCriterion - super().setUp() - - def test_cross_entropy_all_correct(self): - sample = self.get_test_sample(correct=True, soft_target=False, aggregate=False) - loss, sample_size, logging_output = self.criterion( - self.model, sample, "sum", log_probs=True - ) - assert logging_output["correct"] == 20 - assert logging_output["total"] == 20 - assert logging_output["sample_size"] == 20 - assert logging_output["ntokens"] == 20 - - def test_cross_entropy_all_wrong(self): - sample = self.get_test_sample(correct=False, soft_target=False, aggregate=False) - loss, sample_size, logging_output = self.criterion( - self.model, sample, "sum", log_probs=True - ) - assert logging_output["correct"] == 0 - assert logging_output["total"] == 20 - assert logging_output["sample_size"] == 20 - assert logging_output["ntokens"] == 20 diff --git a/spaces/Hina4867/bingo/postcss.config.js b/spaces/Hina4867/bingo/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -} diff --git a/spaces/Hoodady/3DFuse/ldm/models/diffusion/dpm_solver/__init__.py b/spaces/Hoodady/3DFuse/ldm/models/diffusion/dpm_solver/__init__.py deleted file mode 100644 index 7427f38c07530afbab79154ea8aaf88c4bf70a08..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/ldm/models/diffusion/dpm_solver/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .sampler import DPMSolverSampler \ No newline at end of file diff --git a/spaces/Hoodady/3DFuse/ldm/modules/attention.py b/spaces/Hoodady/3DFuse/ldm/modules/attention.py deleted file mode 100644 index 509cd873768f0dd75a75ab3fcdd652822b12b59f..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/ldm/modules/attention.py +++ /dev/null @@ -1,341 +0,0 @@ -from inspect import isfunction -import math -import torch -import torch.nn.functional as F -from torch import nn, einsum -from einops import rearrange, repeat -from typing import Optional, Any - -from ldm.modules.diffusionmodules.util import checkpoint - - -try: - import xformers - import xformers.ops - XFORMERS_IS_AVAILBLE = True -except: - XFORMERS_IS_AVAILBLE = False - -# CrossAttn precision handling -import os -_ATTN_PRECISION = os.environ.get("ATTN_PRECISION", "fp32") - -def exists(val): - return val is not None - - -def uniq(arr): - return{el: True for el in arr}.keys() - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def init_(tensor): - dim = tensor.shape[-1] - std = 1 / math.sqrt(dim) - tensor.uniform_(-std, std) - return tensor - - -# feedforward -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - project_in = nn.Sequential( - 
nn.Linear(dim, inner_dim), - nn.GELU() - ) if not glu else GEGLU(dim, inner_dim) - - self.net = nn.Sequential( - project_in, - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def Normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class SpatialSelfAttention(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = rearrange(q, 'b c h w -> b (h w) c') - k = rearrange(k, 'b c h w -> b c (h w)') - w_ = torch.einsum('bij,bjk->bik', q, k) - - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = rearrange(v, 'b c h w -> b c (h w)') - w_ = rearrange(w_, 'b i j -> b j i') - h_ = torch.einsum('bij,bjk->bik', v, w_) - h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h) - h_ = self.proj_out(h_) - - return x+h_ - - -class CrossAttention(nn.Module): - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.): - super().__init__() - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.scale = dim_head ** -0.5 - self.heads = heads - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, query_dim), - nn.Dropout(dropout) - ) - - def forward(self, x, context=None, mask=None): - h = self.heads - - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) - - # force cast to fp32 to avoid overflowing - if _ATTN_PRECISION =="fp32": - with torch.autocast(enabled=False, device_type = 'cuda'): - q, k = q.float(), k.float() - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - else: - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - - del q, k - - if exists(mask): - mask = rearrange(mask, 'b ... -> b (...)') - max_neg_value = -torch.finfo(sim.dtype).max - mask = repeat(mask, 'b j -> (b h) () j', h=h) - sim.masked_fill_(~mask, max_neg_value) - - # attention, what we cannot get enough of - sim = sim.softmax(dim=-1) - - out = einsum('b i j, b j d -> b i d', sim, v) - out = rearrange(out, '(b h) n d -> b n (h d)', h=h) - return self.to_out(out) - - -class MemoryEfficientCrossAttention(nn.Module): - # https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223 - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.0): - super().__init__() - print(f"Setting up {self.__class__.__name__}. 
Query dim is {query_dim}, context_dim is {context_dim} and using " - f"{heads} heads.") - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.heads = heads - self.dim_head = dim_head - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout)) - self.attention_op: Optional[Any] = None - - def forward(self, x, context=None, mask=None): - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - b, _, _ = q.shape - q, k, v = map( - lambda t: t.unsqueeze(3) - .reshape(b, t.shape[1], self.heads, self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b * self.heads, t.shape[1], self.dim_head) - .contiguous(), - (q, k, v), - ) - - # actually compute the attention, what we cannot get enough of - out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op) - - if exists(mask): - raise NotImplementedError - out = ( - out.unsqueeze(0) - .reshape(b, self.heads, out.shape[1], self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b, out.shape[1], self.heads * self.dim_head) - ) - return self.to_out(out) - - -class BasicTransformerBlock(nn.Module): - ATTENTION_MODES = { - "softmax": CrossAttention, # vanilla attention - "softmax-xformers": MemoryEfficientCrossAttention - } - def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True, - disable_self_attn=False): - super().__init__() - attn_mode = "softmax-xformers" if XFORMERS_IS_AVAILBLE else "softmax" - assert attn_mode in self.ATTENTION_MODES - attn_cls = self.ATTENTION_MODES[attn_mode] - self.disable_self_attn = disable_self_attn - self.attn1 = attn_cls(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout, - context_dim=context_dim if self.disable_self_attn else None) # is a self-attention if not self.disable_self_attn - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = attn_cls(query_dim=dim, context_dim=context_dim, - heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def forward(self, x, context=None): - return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint) - - def _forward(self, x, context=None): - x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x - x = self.attn2(self.norm2(x), context=context) + x - x = self.ff(self.norm3(x)) + x - return x - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. - First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. 
- Finally, reshape to image - NEW: use_linear for more efficiency instead of the 1x1 convs - """ - def __init__(self, in_channels, n_heads, d_head, - depth=1, dropout=0., context_dim=None, - disable_self_attn=False, use_linear=False, - use_checkpoint=True): - super().__init__() - if exists(context_dim) and not isinstance(context_dim, list): - context_dim = [context_dim] - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = Normalize(in_channels) - if not use_linear: - self.proj_in = nn.Conv2d(in_channels, - inner_dim, - kernel_size=1, - stride=1, - padding=0) - else: - self.proj_in = nn.Linear(in_channels, inner_dim) - - self.transformer_blocks = nn.ModuleList( - [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim[d], - disable_self_attn=disable_self_attn, checkpoint=use_checkpoint) - for d in range(depth)] - ) - if not use_linear: - self.proj_out = zero_module(nn.Conv2d(inner_dim, - in_channels, - kernel_size=1, - stride=1, - padding=0)) - else: - self.proj_out = zero_module(nn.Linear(in_channels, inner_dim)) - self.use_linear = use_linear - - def forward(self, x, context=None): - # note: if no context is given, cross-attention defaults to self-attention - if not isinstance(context, list): - context = [context] - b, c, h, w = x.shape - x_in = x - x = self.norm(x) - if not self.use_linear: - x = self.proj_in(x) - x = rearrange(x, 'b c h w -> b (h w) c').contiguous() - if self.use_linear: - x = self.proj_in(x) - for i, block in enumerate(self.transformer_blocks): - x = block(x, context=context[i]) - if self.use_linear: - x = self.proj_out(x) - x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w).contiguous() - if not self.use_linear: - x = self.proj_out(x) - return x + x_in - diff --git a/spaces/HuggingAlgorithms/PDF-TextExtractor/README.md b/spaces/HuggingAlgorithms/PDF-TextExtractor/README.md deleted file mode 100644 index a3f376a8189e8567376c00048859a6e341b1ad91..0000000000000000000000000000000000000000 --- a/spaces/HuggingAlgorithms/PDF-TextExtractor/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PDF TextExtractor -emoji: 🏢 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HugoDzz/spaceship_drift/static/game/index.audio.worklet.js b/spaces/HugoDzz/spaceship_drift/static/game/index.audio.worklet.js deleted file mode 100644 index d9330c735f3da52a20f5e54e0b463ac03b7dff70..0000000000000000000000000000000000000000 --- a/spaces/HugoDzz/spaceship_drift/static/game/index.audio.worklet.js +++ /dev/null @@ -1,211 +0,0 @@ -/**************************************************************************/ -/* audio.worklet.js */ -/**************************************************************************/ -/* This file is part of: */ -/* GODOT ENGINE */ -/* https://godotengine.org */ -/**************************************************************************/ -/* Copyright (c) 2014-present Godot Engine contributors (see AUTHORS.md). */ -/* Copyright (c) 2007-2014 Juan Linietsky, Ariel Manzur. 
*/ -/* */ -/* Permission is hereby granted, free of charge, to any person obtaining */ -/* a copy of this software and associated documentation files (the */ -/* "Software"), to deal in the Software without restriction, including */ -/* without limitation the rights to use, copy, modify, merge, publish, */ -/* distribute, sublicense, and/or sell copies of the Software, and to */ -/* permit persons to whom the Software is furnished to do so, subject to */ -/* the following conditions: */ -/* */ -/* The above copyright notice and this permission notice shall be */ -/* included in all copies or substantial portions of the Software. */ -/* */ -/* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, */ -/* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF */ -/* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. */ -/* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY */ -/* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, */ -/* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE */ -/* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ -/**************************************************************************/ - -class RingBuffer { - constructor(p_buffer, p_state, p_threads) { - this.buffer = p_buffer; - this.avail = p_state; - this.threads = p_threads; - this.rpos = 0; - this.wpos = 0; - } - - data_left() { - return this.threads ? Atomics.load(this.avail, 0) : this.avail; - } - - space_left() { - return this.buffer.length - this.data_left(); - } - - read(output) { - const size = this.buffer.length; - let from = 0; - let to_write = output.length; - if (this.rpos + to_write > size) { - const high = size - this.rpos; - output.set(this.buffer.subarray(this.rpos, size)); - from = high; - to_write -= high; - this.rpos = 0; - } - if (to_write) { - output.set(this.buffer.subarray(this.rpos, this.rpos + to_write), from); - } - this.rpos += to_write; - if (this.threads) { - Atomics.add(this.avail, 0, -output.length); - Atomics.notify(this.avail, 0); - } else { - this.avail -= output.length; - } - } - - write(p_buffer) { - const to_write = p_buffer.length; - const mw = this.buffer.length - this.wpos; - if (mw >= to_write) { - this.buffer.set(p_buffer, this.wpos); - this.wpos += to_write; - if (mw === to_write) { - this.wpos = 0; - } - } else { - const high = p_buffer.subarray(0, mw); - const low = p_buffer.subarray(mw); - this.buffer.set(high, this.wpos); - this.buffer.set(low); - this.wpos = low.length; - } - if (this.threads) { - Atomics.add(this.avail, 0, to_write); - Atomics.notify(this.avail, 0); - } else { - this.avail += to_write; - } - } -} - -class GodotProcessor extends AudioWorkletProcessor { - constructor() { - super(); - this.threads = false; - this.running = true; - this.lock = null; - this.notifier = null; - this.output = null; - this.output_buffer = new Float32Array(); - this.input = null; - this.input_buffer = new Float32Array(); - this.port.onmessage = (event) => { - const cmd = event.data['cmd']; - const data = event.data['data']; - this.parse_message(cmd, data); - }; - } - - process_notify() { - if (this.notifier) { - Atomics.add(this.notifier, 0, 1); - Atomics.notify(this.notifier, 0); - } - } - - parse_message(p_cmd, p_data) { - if (p_cmd === 'start' && p_data) { - const state = p_data[0]; - let idx = 0; - this.threads = true; - this.lock = state.subarray(idx, ++idx); - this.notifier = state.subarray(idx, ++idx); - const avail_in = state.subarray(idx, ++idx); - const 
avail_out = state.subarray(idx, ++idx); - this.input = new RingBuffer(p_data[1], avail_in, true); - this.output = new RingBuffer(p_data[2], avail_out, true); - } else if (p_cmd === 'stop') { - this.running = false; - this.output = null; - this.input = null; - } else if (p_cmd === 'start_nothreads') { - this.output = new RingBuffer(p_data[0], p_data[0].length, false); - } else if (p_cmd === 'chunk') { - this.output.write(p_data); - } - } - - static array_has_data(arr) { - return arr.length && arr[0].length && arr[0][0].length; - } - - process(inputs, outputs, parameters) { - if (!this.running) { - return false; // Stop processing. - } - if (this.output === null) { - return true; // Not ready yet, keep processing. - } - const process_input = GodotProcessor.array_has_data(inputs); - if (process_input) { - const input = inputs[0]; - const chunk = input[0].length * input.length; - if (this.input_buffer.length !== chunk) { - this.input_buffer = new Float32Array(chunk); - } - if (!this.threads) { - GodotProcessor.write_input(this.input_buffer, input); - this.port.postMessage({ 'cmd': 'input', 'data': this.input_buffer }); - } else if (this.input.space_left() >= chunk) { - GodotProcessor.write_input(this.input_buffer, input); - this.input.write(this.input_buffer); - } else { - this.port.postMessage('Input buffer is full! Skipping input frame.'); - } - } - const process_output = GodotProcessor.array_has_data(outputs); - if (process_output) { - const output = outputs[0]; - const chunk = output[0].length * output.length; - if (this.output_buffer.length !== chunk) { - this.output_buffer = new Float32Array(chunk); - } - if (this.output.data_left() >= chunk) { - this.output.read(this.output_buffer); - GodotProcessor.write_output(output, this.output_buffer); - if (!this.threads) { - this.port.postMessage({ 'cmd': 'read', 'data': chunk }); - } - } else { - this.port.postMessage('Output buffer has not enough frames! 
Skipping output frame.'); - } - } - this.process_notify(); - return true; - } - - static write_output(dest, source) { - const channels = dest.length; - for (let ch = 0; ch < channels; ch++) { - for (let sample = 0; sample < dest[ch].length; sample++) { - dest[ch][sample] = source[sample * channels + ch]; - } - } - } - - static write_input(dest, source) { - const channels = source.length; - for (let ch = 0; ch < channels; ch++) { - for (let sample = 0; sample < source[ch].length; sample++) { - dest[sample * channels + ch] = source[ch][sample]; - } - } - } -} - -registerProcessor('godot-processor', GodotProcessor); diff --git a/spaces/Illumotion/Koboldcpp/otherarch/gptj_v3.cpp b/spaces/Illumotion/Koboldcpp/otherarch/gptj_v3.cpp deleted file mode 100644 index cfe6fe99dabc1bae1072e8c488139f7c16bdb095..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/otherarch/gptj_v3.cpp +++ /dev/null @@ -1,663 +0,0 @@ -#include "ggml.h" -#include "otherarch.h" - -#include "utils.h" - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "model_adapter.h" - -#ifdef GGML_USE_CUBLAS -#include "ggml-cuda.h" -#endif -#if defined(GGML_USE_CLBLAST) -#include "ggml-opencl.h" -#endif - -// load the model's weights from a file -ModelLoadResult gptj_model_load(const std::string & fname, gptj_model & model, gpt_vocab & vocab, int gpulayers) { - printf("%s: loading model from '%s' - please wait ...\n", __func__, fname.c_str()); - - auto fin = std::ifstream(fname, std::ios::binary); - if (!fin) { - fprintf(stderr, "%s: failed to open '%s'\n", __func__, fname.c_str()); - return ModelLoadResult::FAIL; - } - - // verify magic - { - uint32_t magic; - fin.read((char *) &magic, sizeof(magic)); - if (magic != 0x67676d6c) { - fprintf(stderr, "%s: invalid model file '%s' (bad magic)\n", __func__, fname.c_str()); - return ModelLoadResult::FAIL; - } - } - - int32_t origmaxctx = model.hparams.n_ctx; - - // load hparams - { - auto & hparams = model.hparams; - - fin.read((char *) &hparams.n_vocab, sizeof(hparams.n_vocab)); - fin.read((char *) &hparams.n_ctx, sizeof(hparams.n_ctx)); - fin.read((char *) &hparams.n_embd, sizeof(hparams.n_embd)); - fin.read((char *) &hparams.n_head, sizeof(hparams.n_head)); - fin.read((char *) &hparams.n_layer, sizeof(hparams.n_layer)); - fin.read((char *) &hparams.n_rot, sizeof(hparams.n_rot)); - fin.read((char *) &hparams.ftype, sizeof(hparams.ftype)); - - const int32_t qntvr = hparams.ftype / GGML_QNT_VERSION_FACTOR; - - printf("%s: n_vocab = %d\n", __func__, hparams.n_vocab); - printf("%s: n_ctx = %d (%d)\n", __func__, hparams.n_ctx,origmaxctx); - printf("%s: n_embd = %d\n", __func__, hparams.n_embd); - printf("%s: n_head = %d\n", __func__, hparams.n_head); - printf("%s: n_layer = %d\n", __func__, hparams.n_layer); - printf("%s: n_rot = %d\n", __func__, hparams.n_rot); - printf("%s: ftype = %d\n", __func__, hparams.ftype); - printf("%s: qntvr = %d\n", __func__, qntvr); - - hparams.n_ctx = std::max(origmaxctx,hparams.n_ctx); - - hparams.ftype %= GGML_QNT_VERSION_FACTOR; - } - - // load vocab - { - int32_t n_vocab = 0; - fin.read((char *) &n_vocab, sizeof(n_vocab)); - - if (n_vocab != model.hparams.n_vocab) { - fprintf(stderr, "%s: invalid model file '%s' (bad vocab size %d != %d)\n", - __func__, fname.c_str(), n_vocab, model.hparams.n_vocab); - return ModelLoadResult::FAIL; - } - - std::string word; - std::vector buf(128); - - for (int i = 0; i < n_vocab; i++) { - uint32_t len; - fin.read((char *) &len, sizeof(len)); - - 
buf.resize(len); - fin.read((char *) buf.data(), len); - word.assign(buf.data(), len); - - vocab.token_to_id[word] = i; - vocab.id_to_token[i] = word; - } - } - - // for the big tensors, we have the option to store the data in 16-bit floats or quantized - // in order to save memory and also to speed up the computation - ggml_type wtype = ggml_ftype_to_ggml_type((ggml_ftype) (model.hparams.ftype)); - if (wtype == GGML_TYPE_COUNT) { - fprintf(stderr, "%s: invalid model file '%s' (bad ftype value %d)\n", - __func__, fname.c_str(), model.hparams.ftype); - return ModelLoadResult::FAIL; - } - - auto & ctx = model.ctx; - - auto memory_type = GGML_TYPE_F16; - - size_t ctx_size = 0; - - { - const auto & hparams = model.hparams; - - const int n_embd = hparams.n_embd; - const int n_layer = hparams.n_layer; - const int n_ctx = hparams.n_ctx; - const int n_vocab = hparams.n_vocab; - - ctx_size += n_embd*ggml_type_sizef(GGML_TYPE_F32); // ln_f_g - ctx_size += n_embd*ggml_type_sizef(GGML_TYPE_F32); // ln_f_b - - ctx_size += n_embd*n_vocab*ggml_type_sizef(wtype); // wte - - ctx_size += n_embd*n_vocab*ggml_type_sizef(wtype); // lmh_g - ctx_size += n_vocab*ggml_type_sizef(GGML_TYPE_F32); // lmh_b - - ctx_size += n_layer*(n_embd*ggml_type_sizef(GGML_TYPE_F32)); // ln_1_g - ctx_size += n_layer*(n_embd*ggml_type_sizef(GGML_TYPE_F32)); // ln_1_b - - ctx_size += n_layer*(n_embd*n_embd*ggml_type_sizef(wtype)); // c_attn_q_proj_w - ctx_size += n_layer*(n_embd*n_embd*ggml_type_sizef(wtype)); // c_attn_k_proj_w - ctx_size += n_layer*(n_embd*n_embd*ggml_type_sizef(wtype)); // c_attn_v_proj_w - - ctx_size += n_layer*(n_embd*n_embd*ggml_type_sizef(wtype)); // c_attn_proj_w - - ctx_size += n_layer*(4*n_embd*n_embd*ggml_type_sizef(wtype)); // c_mlp_fc_w - ctx_size += n_layer*( 4*n_embd*ggml_type_sizef(GGML_TYPE_F32)); // c_mlp_fc_b - - ctx_size += n_layer*(4*n_embd*n_embd*ggml_type_sizef(wtype)); // c_mlp_proj_w - ctx_size += n_layer*( n_embd*ggml_type_sizef(GGML_TYPE_F32)); // c_mlp_proj_b - - ctx_size += std::max(origmaxctx,n_ctx)*n_layer*n_embd*ggml_type_sizef(memory_type); // memory_k - ctx_size += std::max(origmaxctx,n_ctx)*n_layer*n_embd*ggml_type_sizef(memory_type); // memory_v - - ctx_size += (5 + 10*n_layer)*512; // object overhead - - printf("%s: ggml ctx size = %6.2f MB\n", __func__, ctx_size/(1024.0*1024.0)); - } - - // create the ggml context - { - struct ggml_init_params params; - params.mem_size = ctx_size; - params.mem_buffer = NULL; - params.no_alloc = false; - - - model.ctx = ggml_init(params); - if (!model.ctx) { - fprintf(stderr, "%s: ggml_init() failed\n", __func__); - return ModelLoadResult::FAIL; - } - } - - // prepare memory for the weights - { - const auto & hparams = model.hparams; - - const int n_embd = hparams.n_embd; - const int n_layer = hparams.n_layer; - const int n_vocab = hparams.n_vocab; - - model.layers.resize(n_layer); - - model.wte = ggml_new_tensor_2d(ctx, wtype, n_embd, n_vocab); - - model.ln_f_g = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd); - model.ln_f_b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd); - - model.lmh_g = ggml_new_tensor_2d(ctx, wtype, n_embd, n_vocab); - model.lmh_b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_vocab); - - // map by name - model.tensors["transformer.wte.weight"] = model.wte; - - model.tensors["transformer.ln_f.weight"] = model.ln_f_g; - model.tensors["transformer.ln_f.bias"] = model.ln_f_b; - - model.tensors["lm_head.weight"] = model.lmh_g; - model.tensors["lm_head.bias"] = model.lmh_b; - - for (int i = 0; i < n_layer; ++i) { - auto & layer = 
model.layers[i]; - - layer.ln_1_g = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd); - layer.ln_1_b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd); - - layer.c_attn_q_proj_w = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd); - layer.c_attn_k_proj_w = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd); - layer.c_attn_v_proj_w = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd); - - layer.c_attn_proj_w = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd); - - layer.c_mlp_fc_w = ggml_new_tensor_2d(ctx, wtype, n_embd, 4*n_embd); - layer.c_mlp_fc_b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4*n_embd); - - layer.c_mlp_proj_w = ggml_new_tensor_2d(ctx, wtype, 4*n_embd, n_embd); - layer.c_mlp_proj_b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd); - - // map by name - model.tensors["transformer.h." + std::to_string(i) + ".ln_1.weight"] = layer.ln_1_g; - model.tensors["transformer.h." + std::to_string(i) + ".ln_1.bias"] = layer.ln_1_b; - - model.tensors["transformer.h." + std::to_string(i) + ".attn.q_proj.weight"] = layer.c_attn_q_proj_w; - model.tensors["transformer.h." + std::to_string(i) + ".attn.k_proj.weight"] = layer.c_attn_k_proj_w; - model.tensors["transformer.h." + std::to_string(i) + ".attn.v_proj.weight"] = layer.c_attn_v_proj_w; - - model.tensors["transformer.h." + std::to_string(i) + ".attn.out_proj.weight"] = layer.c_attn_proj_w; - - model.tensors["transformer.h." + std::to_string(i) + ".mlp.fc_in.weight"] = layer.c_mlp_fc_w; - model.tensors["transformer.h." + std::to_string(i) + ".mlp.fc_in.bias"] = layer.c_mlp_fc_b; - - model.tensors["transformer.h." + std::to_string(i) + ".mlp.fc_out.weight"] = layer.c_mlp_proj_w; - model.tensors["transformer.h." + std::to_string(i) + ".mlp.fc_out.bias"] = layer.c_mlp_proj_b; - } - } - - // key + value memory - { - const auto & hparams = model.hparams; - - const int n_embd = hparams.n_embd; - const int n_layer = hparams.n_layer; - const int n_ctx = hparams.n_ctx; - - const int n_mem = n_layer*std::max(origmaxctx,n_ctx); - const int n_elements = n_embd*n_mem; - - model.memory_k = ggml_new_tensor_1d(ctx, memory_type, n_elements); - model.memory_v = ggml_new_tensor_1d(ctx, memory_type, n_elements); - - const size_t memory_size = ggml_nbytes(model.memory_k) + ggml_nbytes(model.memory_v); - - printf("%s: memory_size = %8.2f MB, n_mem = %d\n", __func__, memory_size/1024.0/1024.0, n_mem); - } - - // load weights - { - int n_tensors = 0; - size_t total_size = 0; - - printf("%s: ", __func__); - - while (true) { - int32_t n_dims; - int32_t length; - int32_t ttype; - - fin.read(reinterpret_cast(&n_dims), sizeof(n_dims)); - fin.read(reinterpret_cast(&length), sizeof(length)); - fin.read(reinterpret_cast(&ttype), sizeof(ttype)); - - if (fin.eof()) { - break; - } - - int32_t nelements = 1; - int32_t ne[2] = { 1, 1 }; - for (int i = 0; i < n_dims; ++i) { - fin.read(reinterpret_cast(&ne[i]), sizeof(ne[i])); - nelements *= ne[i]; - } - - std::string name(length, 0); - fin.read(&name[0], length); - - if (model.tensors.find(name.data()) == model.tensors.end()) { - fprintf(stderr, "%s: unknown tensor '%s' in model file\n", __func__, name.data()); - return ModelLoadResult::FAIL; - } - - auto tensor = model.tensors[name.data()]; - if (ggml_nelements(tensor) != nelements) { - fprintf(stderr, "%s: tensor '%s' has wrong size in model file\n", __func__, name.data()); - return ModelLoadResult::FAIL; - } - - - if (tensor->ne[0] != ne[0] || tensor->ne[1] != ne[1]) { - - //test for transposition and retry older loader - if(tensor->ne[0]==ne[1] && tensor->ne[1]==ne[0] && 
should_transpose_layer(name)) - { - printf("\nFound a transposed tensor. This could be an older or newer model. Retrying load..."); - ggml_free(ctx); - return ModelLoadResult::RETRY_LOAD; - } - else - { - fprintf(stderr, "%s: tensor '%s' has wrong shape in model file: got [%ld, %ld], expected [%d, %d]\n", - __func__, name.data(), tensor->ne[0], tensor->ne[1], ne[0], ne[1]); - return ModelLoadResult::FAIL; - } - - } - - // for debugging - if (0) { - printf("%24s - [%5d, %5d], type = %6s, %6.2f MB, %9zu bytes\n", name.data(), ne[0], ne[1], ggml_type_name(ggml_type(ttype)), ggml_nbytes(tensor)/1024.0/1024.0, ggml_nbytes(tensor)); - } - - const size_t bpe = ggml_type_size(ggml_type(ttype)); - - if ((nelements*bpe)/ggml_blck_size(tensor->type) != ggml_nbytes(tensor)) { - fprintf(stderr, "%s: tensor '%s' has wrong size in model file: got %zu, expected %zu\n", - __func__, name.data(), ggml_nbytes(tensor), nelements*bpe); - return ModelLoadResult::FAIL; - } - - fin.read(reinterpret_cast(tensor->data), ggml_nbytes(tensor)); - - //printf("%42s - [%5d, %5d], type = %6s, %6.2f MB\n", name.data(), ne[0], ne[1], ttype == 0 ? "float" : "f16", ggml_nbytes(tensor)/1024.0/1024.0); - total_size += ggml_nbytes(tensor); - if (++n_tensors % 8 == 0) { - printf("."); - fflush(stdout); - } - } - - printf(" done\n"); - - printf("%s: model size = %8.2f MB / num tensors = %d\n", __func__, total_size/1024.0/1024.0, n_tensors); - } - - fin.close(); - - //gpu offload - #if defined(GGML_USE_CLBLAST) || defined(GGML_USE_CUBLAS) - if(gpulayers>0) - { - const auto & hparams = model.hparams; - size_t vram_total = 0; - const int n_gpu = std::min(gpulayers, int(hparams.n_layer)); - #if defined(GGML_USE_CLBLAST) - fprintf(stderr, "%s: [opencl] offloading %d layers to GPU\n", __func__, n_gpu); - #else - fprintf(stderr, "%s: [CUDA] offloading %d layers to GPU\n", __func__, n_gpu); - #endif - for (int i = 0; i < n_gpu; ++i) { - const auto & layer = model.layers[i]; - layer.c_attn_q_proj_w->backend = GGML_BACKEND_GPU; - layer.c_attn_k_proj_w->backend = GGML_BACKEND_GPU; - layer.c_attn_v_proj_w->backend = GGML_BACKEND_GPU; - layer.c_attn_proj_w->backend = GGML_BACKEND_GPU; - layer.c_mlp_fc_w->backend = GGML_BACKEND_GPU; - layer.c_mlp_proj_w->backend = GGML_BACKEND_GPU; - #if defined(GGML_USE_CLBLAST) - ggml_cl_transform_tensor(layer.c_attn_q_proj_w->data,layer.c_attn_q_proj_w); vram_total += ggml_nbytes(layer.c_attn_q_proj_w); - ggml_cl_transform_tensor(layer.c_attn_k_proj_w->data,layer.c_attn_k_proj_w); vram_total += ggml_nbytes(layer.c_attn_k_proj_w); - ggml_cl_transform_tensor(layer.c_attn_v_proj_w->data,layer.c_attn_v_proj_w); vram_total += ggml_nbytes(layer.c_attn_v_proj_w); - ggml_cl_transform_tensor(layer.c_attn_proj_w->data,layer.c_attn_proj_w); vram_total += ggml_nbytes(layer.c_attn_proj_w); - ggml_cl_transform_tensor(layer.c_mlp_fc_w->data,layer.c_mlp_fc_w); vram_total += ggml_nbytes(layer.c_mlp_fc_w); - ggml_cl_transform_tensor(layer.c_mlp_proj_w->data,layer.c_mlp_proj_w); vram_total += ggml_nbytes(layer.c_mlp_proj_w); - #else - ggml_cuda_transform_tensor(layer.c_attn_q_proj_w->data,layer.c_attn_q_proj_w); vram_total += ggml_nbytes(layer.c_attn_q_proj_w); - ggml_cuda_transform_tensor(layer.c_attn_k_proj_w->data,layer.c_attn_k_proj_w); vram_total += ggml_nbytes(layer.c_attn_k_proj_w); - ggml_cuda_transform_tensor(layer.c_attn_v_proj_w->data,layer.c_attn_v_proj_w); vram_total += ggml_nbytes(layer.c_attn_v_proj_w); - ggml_cuda_transform_tensor(layer.c_attn_proj_w->data,layer.c_attn_proj_w); vram_total += 
ggml_nbytes(layer.c_attn_proj_w); - ggml_cuda_transform_tensor(layer.c_mlp_fc_w->data,layer.c_mlp_fc_w); vram_total += ggml_nbytes(layer.c_mlp_fc_w); - ggml_cuda_transform_tensor(layer.c_mlp_proj_w->data,layer.c_mlp_proj_w); vram_total += ggml_nbytes(layer.c_mlp_proj_w); - #endif - } - #if defined(GGML_USE_CLBLAST) - fprintf(stderr, "%s: [opencl] total VRAM used: %zu MB\n", __func__, vram_total / 1024 / 1024); - #else - fprintf(stderr, "%s: [CUDA] total VRAM used: %zu MB\n", __func__, vram_total / 1024 / 1024); - #endif - } - #endif - - return ModelLoadResult::SUCCESS; -} - -// evaluate the transformer -// -// - model: the model -// - n_threads: number of threads to use -// - n_past: the context size so far -// - embd_inp: the embeddings of the tokens in the context -// - embd_w: the predicted logits for the next token -// -// The GPT-J model requires about 16MB of memory per input token. -// -bool gptj_eval( - const gptj_model & model, - const int n_threads, - const int n_past, - const std::vector & embd_inp, - std::vector & embd_w, - size_t & mem_per_token, - bool use_scratch) { - const int N = embd_inp.size(); - - const auto & hparams = model.hparams; - - const int n_embd = hparams.n_embd; - const int n_layer = hparams.n_layer; - const int n_ctx = hparams.n_ctx; - const int n_head = hparams.n_head; - const int n_vocab = hparams.n_vocab; - const int n_rot = hparams.n_rot; - - const float freq_base = hparams.rope_freq_base; - const float freq_scale = hparams.rope_freq_scale; - - static size_t buf_size = 256u*1024*1024; - static void * buf = malloc(buf_size); - - // use 2 scratch buffers - // TODO: very hacky solution - reimplement in a more elegant way - static size_t scr0_size = 512u*1024*1024*(hparams.n_ctx>8192?2:1); - static size_t scr1_size = 512u*1024*1024; - - static void * scr0 = malloc(scr0_size); - static void * scr1 = malloc(scr1_size); - - if (mem_per_token > 0 && (mem_per_token*N*2 + 64u*1024*1024) > buf_size) { - const size_t buf_size_new = 320u*1024*1024 + 1.2*(mem_per_token*N); // add 10% to account for ggml object overhead - //printf("\n%s: reallocating buffer from %zu to %zu bytes\n", __func__, buf_size, buf_size_new); - - // reallocate - if (buf_size_new > buf_size) - { - buf_size = buf_size_new; - buf = realloc(buf, buf_size); - if (buf == nullptr) - { - fprintf(stderr, "%s: failed to allocate %zu bytes. 
Try reducing batch size.\n", __func__, buf_size); - return false; - } - } - } - - struct ggml_init_params params; - params.mem_size = buf_size; - params.mem_buffer = buf; - params.no_alloc = false; - - - struct ggml_context * ctx0 = ggml_init(params); - struct ggml_cgraph gf = {}; - - struct ggml_tensor * embd = ggml_new_tensor_1d(ctx0, GGML_TYPE_I32, N); - memcpy(embd->data, embd_inp.data(), N*ggml_element_size(embd)); - - // wte - struct ggml_tensor * inpL = ggml_get_rows(ctx0, model.wte, embd); - - for (int il = 0; il < n_layer; ++il) { - struct ggml_tensor * cur; - - if(use_scratch){ - ggml_set_scratch(ctx0, { 0, scr0_size, scr0, }); - } - - // norm - { - cur = ggml_norm(ctx0, inpL, default_norm_eps); - - // cur = ln_1_g*cur + ln_1_b - cur = ggml_add(ctx0, - ggml_mul(ctx0, - ggml_repeat(ctx0, model.layers[il].ln_1_g, cur), - cur), - ggml_repeat(ctx0, model.layers[il].ln_1_b, cur)); - } - - struct ggml_tensor * inpSA = cur; - - // self-attention - { - struct ggml_tensor * KQ_pos = ggml_new_tensor_1d(ctx0, GGML_TYPE_I32, N); - { - int * data = (int *) KQ_pos->data; - for (int i = 0; i < N; ++i) { - data[i] = n_past + i; - } - } - - struct ggml_tensor * Qcur = ggml_rope_custom_inplace(ctx0, ggml_reshape_3d(ctx0, ggml_mul_mat(ctx0, model.layers[il].c_attn_q_proj_w, cur), n_embd/n_head, n_head, N), KQ_pos, n_rot, 0, n_ctx, freq_base, freq_scale); - struct ggml_tensor * Kcur = ggml_rope_custom_inplace(ctx0, ggml_reshape_3d(ctx0, ggml_mul_mat(ctx0, model.layers[il].c_attn_k_proj_w, cur), n_embd/n_head, n_head, N), KQ_pos, n_rot, 0, n_ctx, freq_base, freq_scale); - - // store key and value to memory - { - struct ggml_tensor * Vcur = ggml_transpose(ctx0, ggml_mul_mat(ctx0, model.layers[il].c_attn_v_proj_w, cur)); - - struct ggml_tensor * k = ggml_view_1d(ctx0, model.memory_k, N*n_embd, (ggml_element_size(model.memory_k)*n_embd)*(il*n_ctx + n_past)); - struct ggml_tensor * v = ggml_view_2d(ctx0, model.memory_v, N, n_embd, - ( n_ctx)*ggml_element_size(model.memory_v), - (il*n_ctx)*ggml_element_size(model.memory_v)*n_embd + n_past*ggml_element_size(model.memory_v)); - - ggml_build_forward_expand(&gf, ggml_cpy(ctx0, Kcur, k)); - ggml_build_forward_expand(&gf, ggml_cpy(ctx0, Vcur, v)); - } - - // Q = Qcur.contiguous().view(n_embd/n_head, n_head, N).permute(0, 2, 1, 3) - struct ggml_tensor * Q = - ggml_permute(ctx0, - Qcur, - 0, 2, 1, 3); - - // K = Kmem.view(n_embd/n_head, n_head, n_past + N).permute(0, 2, 1, 3) - struct ggml_tensor * K = - ggml_permute(ctx0, - ggml_reshape_3d(ctx0, - ggml_view_1d(ctx0, model.memory_k, (n_past + N)*n_embd, il*n_ctx*ggml_element_size(model.memory_k)*n_embd), - n_embd/n_head, n_head, n_past + N), - 0, 2, 1, 3); - - // K * Q - struct ggml_tensor * KQ = ggml_mul_mat(ctx0, K, Q); - - // KQ_scaled = KQ / sqrt(n_embd/n_head) - struct ggml_tensor * KQ_scaled = - ggml_scale_inplace(ctx0, - KQ, - ggml_new_f32(ctx0, 1.0f/sqrt(float(n_embd)/n_head)) - ); - - // KQ_masked = mask_past(KQ_scaled) - struct ggml_tensor * KQ_masked = ggml_diag_mask_inf_inplace(ctx0, KQ_scaled, n_past); - - // KQ = soft_max(KQ_masked) - struct ggml_tensor * KQ_soft_max = ggml_soft_max_inplace(ctx0, KQ_masked); - - // V_trans = Vmem.view(n_embd/n_head, n_head, n_past + N).permute(1, 2, 0, 3).contiguous() - struct ggml_tensor * V = - ggml_view_3d(ctx0, model.memory_v, - n_past + N, n_embd/n_head, n_head, - n_ctx*ggml_element_size(model.memory_v), - n_ctx*ggml_element_size(model.memory_v)*n_embd/n_head, - il*n_ctx*ggml_element_size(model.memory_v)*n_embd); - - // KQV = transpose(V) * KQ_soft_max - struct 
ggml_tensor * KQV = ggml_mul_mat(ctx0, V, KQ_soft_max); - - // KQV_merged = KQV.permute(0, 2, 1, 3) - struct ggml_tensor * KQV_merged = ggml_permute(ctx0, KQV, 0, 2, 1, 3); - - // cur = KQV_merged.contiguous().view(n_embd, N) - cur = ggml_cpy(ctx0, - KQV_merged, - ggml_new_tensor_2d(ctx0, GGML_TYPE_F32, n_embd, N)); - - // projection (no bias) - cur = ggml_mul_mat(ctx0, - model.layers[il].c_attn_proj_w, - cur); - } - - if(use_scratch){ - ggml_set_scratch(ctx0, { 0, scr1_size, scr1, }); - } - - struct ggml_tensor * inpFF = cur; - - // feed-forward network - // this is independent of the self-attention result, so it could be done in parallel to the self-attention - { - // note here we pass inpSA instead of cur - cur = ggml_mul_mat(ctx0, - model.layers[il].c_mlp_fc_w, - inpSA); - - cur = ggml_add(ctx0, - ggml_repeat(ctx0, model.layers[il].c_mlp_fc_b, cur), - cur); - - // GELU activation - cur = ggml_gelu(ctx0, cur); - - // projection - // cur = proj_w*cur + proj_b - cur = ggml_mul_mat(ctx0, - model.layers[il].c_mlp_proj_w, - cur); - - cur = ggml_add(ctx0, - ggml_repeat(ctx0, model.layers[il].c_mlp_proj_b, cur), - cur); - } - - // self-attention + FF - cur = ggml_add(ctx0, cur, inpFF); - - // input for next layer - inpL = ggml_add(ctx0, cur, inpL); - } - - if(use_scratch){ - ggml_set_scratch(ctx0, { 0, scr0_size, scr0, }); - } - - // norm - { - inpL = ggml_norm(ctx0, inpL, default_norm_eps); - - // inpL = ln_f_g*inpL + ln_f_b - inpL = ggml_add(ctx0, - ggml_mul(ctx0, - ggml_repeat(ctx0, model.ln_f_g, inpL), - inpL), - ggml_repeat(ctx0, model.ln_f_b, inpL)); - } - - if(use_scratch){ - ggml_set_scratch(ctx0, { 0, 0, nullptr, }); - } - - // lm_head - { - inpL = ggml_mul_mat(ctx0, model.lmh_g, inpL); - - inpL = ggml_add(ctx0, - ggml_repeat(ctx0, model.lmh_b, inpL), - inpL); - } - - // logits -> probs - //inpL = ggml_soft_max_inplace(ctx0, inpL); - - // run the computation - ggml_build_forward_expand(&gf, inpL); - kcpp_graph_compute_helper(&gf, n_threads); - - //if (n_past%100 == 0) { - // ggml_graph_print (&gf); - // ggml_graph_dump_dot(&gf, NULL, "gpt-j.dot"); - //} - - //embd_w.resize(n_vocab*N); - //memcpy(embd_w.data(), ggml_get_data(inpL), sizeof(float)*n_vocab*N); - - // return result for just the last token - embd_w.resize(n_vocab); - memcpy(embd_w.data(), (float *) ggml_get_data(inpL) + (n_vocab*(N-1)), sizeof(float)*n_vocab); - - if (mem_per_token == 0) { - mem_per_token = ggml_used_mem(ctx0)/N; - } - //printf("used_mem = %zu\n", ggml_used_mem(ctx0)); - - ggml_free(ctx0); - - return true; -} diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/losses/style_loss.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/losses/style_loss.py deleted file mode 100644 index 0bb42d7fbc5d17a47bec7365889868505f5fdfb5..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/losses/style_loss.py +++ /dev/null @@ -1,155 +0,0 @@ -import torch -import torch.nn as nn -import torchvision.models as models - - -class PerceptualLoss(nn.Module): - r""" - Perceptual loss, VGG-based - https://arxiv.org/abs/1603.08155 - https://github.com/dxyang/StyleTransfer/blob/master/utils.py - """ - - def __init__(self, weights=[1.0, 1.0, 1.0, 1.0, 1.0]): - super(PerceptualLoss, self).__init__() - self.add_module('vgg', VGG19()) - self.criterion = torch.nn.L1Loss() - self.weights = weights - - def __call__(self, x, y): - # Compute features - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - - content_loss = 
0.0 - content_loss += self.weights[0] * self.criterion(x_vgg['relu1_1'], y_vgg['relu1_1']) - content_loss += self.weights[1] * self.criterion(x_vgg['relu2_1'], y_vgg['relu2_1']) - content_loss += self.weights[2] * self.criterion(x_vgg['relu3_1'], y_vgg['relu3_1']) - content_loss += self.weights[3] * self.criterion(x_vgg['relu4_1'], y_vgg['relu4_1']) - content_loss += self.weights[4] * self.criterion(x_vgg['relu5_1'], y_vgg['relu5_1']) - - - return content_loss - - -class VGG19(torch.nn.Module): - def __init__(self): - super(VGG19, self).__init__() - features = models.vgg19(pretrained=True).features - self.relu1_1 = torch.nn.Sequential() - self.relu1_2 = torch.nn.Sequential() - - self.relu2_1 = torch.nn.Sequential() - self.relu2_2 = torch.nn.Sequential() - - self.relu3_1 = torch.nn.Sequential() - self.relu3_2 = torch.nn.Sequential() - self.relu3_3 = torch.nn.Sequential() - self.relu3_4 = torch.nn.Sequential() - - self.relu4_1 = torch.nn.Sequential() - self.relu4_2 = torch.nn.Sequential() - self.relu4_3 = torch.nn.Sequential() - self.relu4_4 = torch.nn.Sequential() - - self.relu5_1 = torch.nn.Sequential() - self.relu5_2 = torch.nn.Sequential() - self.relu5_3 = torch.nn.Sequential() - self.relu5_4 = torch.nn.Sequential() - - for x in range(2): - self.relu1_1.add_module(str(x), features[x]) - - for x in range(2, 4): - self.relu1_2.add_module(str(x), features[x]) - - for x in range(4, 7): - self.relu2_1.add_module(str(x), features[x]) - - for x in range(7, 9): - self.relu2_2.add_module(str(x), features[x]) - - for x in range(9, 12): - self.relu3_1.add_module(str(x), features[x]) - - for x in range(12, 14): - self.relu3_2.add_module(str(x), features[x]) - - for x in range(14, 16): - self.relu3_2.add_module(str(x), features[x]) - - for x in range(16, 18): - self.relu3_4.add_module(str(x), features[x]) - - for x in range(18, 21): - self.relu4_1.add_module(str(x), features[x]) - - for x in range(21, 23): - self.relu4_2.add_module(str(x), features[x]) - - for x in range(23, 25): - self.relu4_3.add_module(str(x), features[x]) - - for x in range(25, 27): - self.relu4_4.add_module(str(x), features[x]) - - for x in range(27, 30): - self.relu5_1.add_module(str(x), features[x]) - - for x in range(30, 32): - self.relu5_2.add_module(str(x), features[x]) - - for x in range(32, 34): - self.relu5_3.add_module(str(x), features[x]) - - for x in range(34, 36): - self.relu5_4.add_module(str(x), features[x]) - - # don't need the gradients, just want the features - for param in self.parameters(): - param.requires_grad = False - - def forward(self, x): - relu1_1 = self.relu1_1(x) - relu1_2 = self.relu1_2(relu1_1) - - relu2_1 = self.relu2_1(relu1_2) - relu2_2 = self.relu2_2(relu2_1) - - relu3_1 = self.relu3_1(relu2_2) - relu3_2 = self.relu3_2(relu3_1) - relu3_3 = self.relu3_3(relu3_2) - relu3_4 = self.relu3_4(relu3_3) - - relu4_1 = self.relu4_1(relu3_4) - relu4_2 = self.relu4_2(relu4_1) - relu4_3 = self.relu4_3(relu4_2) - relu4_4 = self.relu4_4(relu4_3) - - relu5_1 = self.relu5_1(relu4_4) - relu5_2 = self.relu5_2(relu5_1) - relu5_3 = self.relu5_3(relu5_2) - relu5_4 = self.relu5_4(relu5_3) - - out = { - 'relu1_1': relu1_1, - 'relu1_2': relu1_2, - - 'relu2_1': relu2_1, - 'relu2_2': relu2_2, - - 'relu3_1': relu3_1, - 'relu3_2': relu3_2, - 'relu3_3': relu3_3, - 'relu3_4': relu3_4, - - 'relu4_1': relu4_1, - 'relu4_2': relu4_2, - 'relu4_3': relu4_3, - 'relu4_4': relu4_4, - - 'relu5_1': relu5_1, - 'relu5_2': relu5_2, - 'relu5_3': relu5_3, - 'relu5_4': relu5_4, - } - return out diff --git 
a/spaces/JMalott/ai_architecture/page/generate.py b/spaces/JMalott/ai_architecture/page/generate.py deleted file mode 100644 index dbd73cb38d4dd302755ef697c3d3237e545744cf..0000000000000000000000000000000000000000 --- a/spaces/JMalott/ai_architecture/page/generate.py +++ /dev/null @@ -1,112 +0,0 @@ -import collections -from numpy.core.defchararray import lower -import streamlit as st -import numpy as np -import pandas as pd -import streamlit as st -import pandas as pd -import numpy as np -import os, random, time -from utils import footer, generate, drawGrid, generate2 -from PIL import Image - -mode = "ai" -#mode = "dummy" - -def app(): - global _prompt - - st.title('AI-Generated Architecture') - - #st.subheader('(beta v1.1)') - - #st.text('This is a working beta version with bugs. Known issues are:\n-Some images will grey out when you change the input parameters') - - st.subheader('Describe a building, interior, or other architecture you would like to see. You can change the prompt and input parameters on the fly.') - - #Modern architecture museum with black brick and large windows. - print("Prompt: "+st.session_state.prompt) - prompt = st.text_input(label="",value=st.session_state.prompt) - - st.text("") - - #with st.expander("Having trouble thinking of something? Click here to view examples."): - # st.write(""" - # • Modern architecture museum with black brick and large windows.\n - # • A prosaic, simple architecture.\n - # • An urban, post-modern architecture with concrete and steel.\n - # • A sleek urban interior design. - # """) - - st.text("") - - crazy = st.slider('Temperature. This controls how "crazy" generated images are, where 0 is the least crazy.', 0.0, 1.0, 0.75) - k = st.slider('Top K. The higher the value, the higher quality the results tend to be at the cost of extra processing time.', 1, 10, 5) - #k = k*400 - - if( 'results' not in st.session_state ): - st.session_state.results = [] - - - c1,c2 = st.columns(2) - - with c1: - holder = st.empty() - with c2: - holder2 = st.empty() - - startButton = holder.button("Start") - - already = [] - - if startButton or hasattr(st.session_state, 'load_state') or st.session_state.prompt is not None: - - with st.spinner("Generating..."): - - holder.empty() - - st.session_state.load_state = True - - - - placeholder = st.empty() - second = st.empty() - - nextButton = False - randomButton = False - f = True - - ii = 0 - while len(st.session_state.results) <= 64: - - ii += 1 - - if(f and len(st.session_state.results) > 0): - f = False - randomButton = holder2.button("Randomize Prompt") - nextButton = holder.button("Finish Generating Images") - - with second.container(): - drawGrid() - - with placeholder.container(): - - st.session_state.bar = placeholder.progress(0) - - if(len(st.session_state.results) > 0 and nextButton): - st.session_state.page = 1 - break - - if(randomButton): - randomButton = False - _1 = ["A modern ","A post-modern ","An ultramodern ", "A classical ", "A parametric ", "A contemporary ", "A minimalist "] - _2 = ["museum architecture","home architecture","landscape architecture","interior design"] - _3 = [""," in the style of I.M. 
Pei"," in the style of Frank Gehry"," in the style of John Lautner"," in the style of Frank Lloyd Wright"] - _4 = [" photograph",", watercolor painting",", oil painting", ", digital art"] - - prompt = str(random.choice(_1)+random.choice(_2)+random.choice(_3)+random.choice(_4)) - st.session_state.prompt = prompt - - generate(prompt,crazy,k) - - st.session_state.bar = st.container() diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py deleted file mode 100644 index 48d16889a030217b5d203233678a10e3eb7ae9d2..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py +++ /dev/null @@ -1,119 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from typing import Optional, Tuple, Union - -import torch - -from ...pipeline_utils import AudioPipelineOutput, DiffusionPipeline -from ...utils import logging - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class DanceDiffusionPipeline(DiffusionPipeline): - r""" - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Parameters: - unet ([`UNet1DModel`]): U-Net architecture to denoise the encoded image. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of - [`IPNDMScheduler`]. - """ - - def __init__(self, unet, scheduler): - super().__init__() - self.register_modules(unet=unet, scheduler=scheduler) - - @torch.no_grad() - def __call__( - self, - batch_size: int = 1, - num_inference_steps: int = 100, - generator: Optional[torch.Generator] = None, - audio_length_in_s: Optional[float] = None, - return_dict: bool = True, - ) -> Union[AudioPipelineOutput, Tuple]: - r""" - Args: - batch_size (`int`, *optional*, defaults to 1): - The number of audio samples to generate. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality audio sample at - the expense of slower inference. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - audio_length_in_s (`float`, *optional*, defaults to `self.unet.config.sample_size/self.unet.config.sample_rate`): - The length of the generated audio sample in seconds. Note that the output of the pipeline, *i.e.* - `sample_size`, will be `audio_length_in_s` * `self.unet.sample_rate`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipeline_utils.AudioPipelineOutput`] instead of a plain tuple. 
- - Returns: - [`~pipeline_utils.AudioPipelineOutput`] or `tuple`: [`~pipelines.utils.AudioPipelineOutput`] if - `return_dict` is True, otherwise a `tuple. When returning a tuple, the first element is a list with the - generated images. - """ - - if audio_length_in_s is None: - audio_length_in_s = self.unet.config.sample_size / self.unet.config.sample_rate - - sample_size = audio_length_in_s * self.unet.sample_rate - - down_scale_factor = 2 ** len(self.unet.up_blocks) - if sample_size < 3 * down_scale_factor: - raise ValueError( - f"{audio_length_in_s} is too small. Make sure it's bigger or equal to" - f" {3 * down_scale_factor / self.unet.sample_rate}." - ) - - original_sample_size = int(sample_size) - if sample_size % down_scale_factor != 0: - sample_size = ((audio_length_in_s * self.unet.sample_rate) // down_scale_factor + 1) * down_scale_factor - logger.info( - f"{audio_length_in_s} is increased to {sample_size / self.unet.sample_rate} so that it can be handled" - f" by the model. It will be cut to {original_sample_size / self.unet.sample_rate} after the denoising" - " process." - ) - sample_size = int(sample_size) - - dtype = next(iter(self.unet.parameters())).dtype - audio = torch.randn( - (batch_size, self.unet.in_channels, sample_size), generator=generator, device=self.device, dtype=dtype - ) - - # set step values - self.scheduler.set_timesteps(num_inference_steps, device=audio.device) - self.scheduler.timesteps = self.scheduler.timesteps.to(dtype) - - for t in self.progress_bar(self.scheduler.timesteps): - # 1. predict noise model_output - model_output = self.unet(audio, t).sample - - # 2. compute previous image: x_t -> t_t-1 - audio = self.scheduler.step(model_output, t, audio).prev_sample - - audio = audio.clamp(-1, 1).float().cpu().numpy() - - audio = audio[:, :, :original_sample_size] - - if not return_dict: - return (audio,) - - return AudioPipelineOutput(audios=audio) diff --git a/spaces/James1208/Salesforce-codegen-350M-mono/README.md b/spaces/James1208/Salesforce-codegen-350M-mono/README.md deleted file mode 100644 index e510c7fe67ea16b1e0c3fea62fb01a9083915d49..0000000000000000000000000000000000000000 --- a/spaces/James1208/Salesforce-codegen-350M-mono/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Salesforce Codegen 350M Mono -emoji: ⚡ -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/JohnSmith9982/small_and_pretty/app.py b/spaces/JohnSmith9982/small_and_pretty/app.py deleted file mode 100644 index 9049b5749db9640f03aa6b58afaf2d8d2394e4c1..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/small_and_pretty/app.py +++ /dev/null @@ -1,147 +0,0 @@ -import time - -from theme_dropdown import create_theme_dropdown # noqa: F401 - -import gradio as gr - -dropdown, js = create_theme_dropdown() - -with gr.Blocks(theme='JohnSmith9982/small_and_pretty') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `small_and_pretty` - To use this theme, set `theme='JohnSmith9982/small_and_pretty'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. 
- """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - document.querySelector('gradio-app').style.backgroundColor = 'var(--color-background-primary)' - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio.app/assets/img/header-image.jpg", label="Image" - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio.app/assets/img/header-image.jpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() diff --git 
a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/base_dense_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/base_dense_head.py deleted file mode 100644 index 02a397c62f9154d10fa5ae254b75a76f041e348d..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/base_dense_head.py +++ /dev/null @@ -1,577 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -from abc import ABCMeta, abstractmethod -from inspect import signature -from typing import List, Optional, Tuple - -import torch -from mmcv.ops import batched_nms -from mmengine.config import ConfigDict -from mmengine.model import BaseModule, constant_init -from mmengine.structures import InstanceData -from torch import Tensor - -from mmdet.structures import SampleList -from mmdet.structures.bbox import (cat_boxes, get_box_tensor, get_box_wh, - scale_boxes) -from mmdet.utils import InstanceList, OptMultiConfig -from ..test_time_augs import merge_aug_results -from ..utils import (filter_scores_and_topk, select_single_mlvl, - unpack_gt_instances) - - -class BaseDenseHead(BaseModule, metaclass=ABCMeta): - """Base class for DenseHeads. - - 1. The ``init_weights`` method is used to initialize densehead's - model parameters. After detector initialization, ``init_weights`` - is triggered when ``detector.init_weights()`` is called externally. - - 2. The ``loss`` method is used to calculate the loss of densehead, - which includes two steps: (1) the densehead model performs forward - propagation to obtain the feature maps (2) The ``loss_by_feat`` method - is called based on the feature maps to calculate the loss. - - .. code:: text - - loss(): forward() -> loss_by_feat() - - 3. The ``predict`` method is used to predict detection results, - which includes two steps: (1) the densehead model performs forward - propagation to obtain the feature maps (2) The ``predict_by_feat`` method - is called based on the feature maps to predict detection results including - post-processing. - - .. code:: text - - predict(): forward() -> predict_by_feat() - - 4. The ``loss_and_predict`` method is used to return loss and detection - results at the same time. It will call densehead's ``forward``, - ``loss_by_feat`` and ``predict_by_feat`` methods in order. If one-stage is - used as RPN, the densehead needs to return both losses and predictions. - This predictions is used as the proposal of roihead. - - .. code:: text - - loss_and_predict(): forward() -> loss_by_feat() -> predict_by_feat() - """ - - def __init__(self, init_cfg: OptMultiConfig = None) -> None: - super().__init__(init_cfg=init_cfg) - # `_raw_positive_infos` will be used in `get_positive_infos`, which - # can get positive information. - self._raw_positive_infos = dict() - - def init_weights(self) -> None: - """Initialize the weights.""" - super().init_weights() - # avoid init_cfg overwrite the initialization of `conv_offset` - for m in self.modules(): - # DeformConv2dPack, ModulatedDeformConv2dPack - if hasattr(m, 'conv_offset'): - constant_init(m.conv_offset, 0) - - def get_positive_infos(self) -> InstanceList: - """Get positive information from sampling results. - - Returns: - list[:obj:`InstanceData`]: Positive information of each image, - usually including positive bboxes, positive labels, positive - priors, etc. 
- """ - if len(self._raw_positive_infos) == 0: - return None - - sampling_results = self._raw_positive_infos.get( - 'sampling_results', None) - assert sampling_results is not None - positive_infos = [] - for sampling_result in enumerate(sampling_results): - pos_info = InstanceData() - pos_info.bboxes = sampling_result.pos_gt_bboxes - pos_info.labels = sampling_result.pos_gt_labels - pos_info.priors = sampling_result.pos_priors - pos_info.pos_assigned_gt_inds = \ - sampling_result.pos_assigned_gt_inds - pos_info.pos_inds = sampling_result.pos_inds - positive_infos.append(pos_info) - return positive_infos - - def loss(self, x: Tuple[Tensor], batch_data_samples: SampleList) -> dict: - """Perform forward propagation and loss calculation of the detection - head on the features of the upstream network. - - Args: - x (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - batch_data_samples (List[:obj:`DetDataSample`]): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - - Returns: - dict: A dictionary of loss components. - """ - outs = self(x) - - outputs = unpack_gt_instances(batch_data_samples) - (batch_gt_instances, batch_gt_instances_ignore, - batch_img_metas) = outputs - - loss_inputs = outs + (batch_gt_instances, batch_img_metas, - batch_gt_instances_ignore) - losses = self.loss_by_feat(*loss_inputs) - return losses - - @abstractmethod - def loss_by_feat(self, **kwargs) -> dict: - """Calculate the loss based on the features extracted by the detection - head.""" - pass - - def loss_and_predict( - self, - x: Tuple[Tensor], - batch_data_samples: SampleList, - proposal_cfg: Optional[ConfigDict] = None - ) -> Tuple[dict, InstanceList]: - """Perform forward propagation of the head, then calculate loss and - predictions from the features and data samples. - - Args: - x (tuple[Tensor]): Features from FPN. - batch_data_samples (list[:obj:`DetDataSample`]): Each item contains - the meta information of each image and corresponding - annotations. - proposal_cfg (ConfigDict, optional): Test / postprocessing - configuration, if None, test_cfg would be used. - Defaults to None. - - Returns: - tuple: the return value is a tuple contains: - - - losses: (dict[str, Tensor]): A dictionary of loss components. - - predictions (list[:obj:`InstanceData`]): Detection - results of each image after the post process. - """ - outputs = unpack_gt_instances(batch_data_samples) - (batch_gt_instances, batch_gt_instances_ignore, - batch_img_metas) = outputs - - outs = self(x) - - loss_inputs = outs + (batch_gt_instances, batch_img_metas, - batch_gt_instances_ignore) - losses = self.loss_by_feat(*loss_inputs) - - predictions = self.predict_by_feat( - *outs, batch_img_metas=batch_img_metas, cfg=proposal_cfg) - return losses, predictions - - def predict(self, - x: Tuple[Tensor], - batch_data_samples: SampleList, - rescale: bool = False) -> InstanceList: - """Perform forward propagation of the detection head and predict - detection results on the features of the upstream network. - - Args: - x (tuple[Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - batch_data_samples (List[:obj:`DetDataSample`]): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[obj:`InstanceData`]: Detection results of each image - after the post process. 
- """ - batch_img_metas = [ - data_samples.metainfo for data_samples in batch_data_samples - ] - - outs = self(x) - - predictions = self.predict_by_feat( - *outs, batch_img_metas=batch_img_metas, rescale=rescale) - return predictions - - def predict_by_feat(self, - cls_scores: List[Tensor], - bbox_preds: List[Tensor], - score_factors: Optional[List[Tensor]] = None, - batch_img_metas: Optional[List[dict]] = None, - cfg: Optional[ConfigDict] = None, - rescale: bool = False, - with_nms: bool = True) -> InstanceList: - """Transform a batch of output features extracted from the head into - bbox results. - - Note: When score_factors is not None, the cls_scores are - usually multiplied by it then obtain the real score used in NMS, - such as CenterNess in FCOS, IoU branch in ATSS. - - Args: - cls_scores (list[Tensor]): Classification scores for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for all - scale levels, each is a 4D-tensor, has shape - (batch_size, num_priors * 4, H, W). - score_factors (list[Tensor], optional): Score factor for - all scale level, each is a 4D-tensor, has shape - (batch_size, num_priors * 1, H, W). Defaults to None. - batch_img_metas (list[dict], Optional): Batch image meta info. - Defaults to None. - cfg (ConfigDict, optional): Test / postprocessing - configuration, if None, test_cfg would be used. - Defaults to None. - rescale (bool): If True, return boxes in original image space. - Defaults to False. - with_nms (bool): If True, do nms before return boxes. - Defaults to True. - - Returns: - list[:obj:`InstanceData`]: Object detection results of each image - after the post process. Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). - """ - assert len(cls_scores) == len(bbox_preds) - - if score_factors is None: - # e.g. Retina, FreeAnchor, Foveabox, etc. - with_score_factors = False - else: - # e.g. FCOS, PAA, ATSS, AutoAssign, etc. 
- with_score_factors = True - assert len(cls_scores) == len(score_factors) - - num_levels = len(cls_scores) - - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - mlvl_priors = self.prior_generator.grid_priors( - featmap_sizes, - dtype=cls_scores[0].dtype, - device=cls_scores[0].device) - - result_list = [] - - for img_id in range(len(batch_img_metas)): - img_meta = batch_img_metas[img_id] - cls_score_list = select_single_mlvl( - cls_scores, img_id, detach=True) - bbox_pred_list = select_single_mlvl( - bbox_preds, img_id, detach=True) - if with_score_factors: - score_factor_list = select_single_mlvl( - score_factors, img_id, detach=True) - else: - score_factor_list = [None for _ in range(num_levels)] - - results = self._predict_by_feat_single( - cls_score_list=cls_score_list, - bbox_pred_list=bbox_pred_list, - score_factor_list=score_factor_list, - mlvl_priors=mlvl_priors, - img_meta=img_meta, - cfg=cfg, - rescale=rescale, - with_nms=with_nms) - result_list.append(results) - return result_list - - def _predict_by_feat_single(self, - cls_score_list: List[Tensor], - bbox_pred_list: List[Tensor], - score_factor_list: List[Tensor], - mlvl_priors: List[Tensor], - img_meta: dict, - cfg: ConfigDict, - rescale: bool = False, - with_nms: bool = True) -> InstanceData: - """Transform a single image's features extracted from the head into - bbox results. - - Args: - cls_score_list (list[Tensor]): Box scores from all scale - levels of a single image, each item has shape - (num_priors * num_classes, H, W). - bbox_pred_list (list[Tensor]): Box energies / deltas from - all scale levels of a single image, each item has shape - (num_priors * 4, H, W). - score_factor_list (list[Tensor]): Score factor from all scale - levels of a single image, each item has shape - (num_priors * 1, H, W). - mlvl_priors (list[Tensor]): Each element in the list is - the priors of a single level in feature pyramid. In all - anchor-based methods, it has shape (num_priors, 4). In - all anchor-free methods, it has shape (num_priors, 2) - when `with_stride=True`, otherwise it still has shape - (num_priors, 4). - img_meta (dict): Image meta info. - cfg (mmengine.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Defaults to False. - with_nms (bool): If True, do nms before return boxes. - Defaults to True. - - Returns: - :obj:`InstanceData`: Detection results of each image - after the post process. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). - """ - if score_factor_list[0] is None: - # e.g. Retina, FreeAnchor, etc. - with_score_factors = False - else: - # e.g. FCOS, PAA, ATSS, etc. 
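            # The per-prior score factor (e.g. FCOS centerness or an IoU-branch score)
            # is folded into the classification score before NMS, roughly
            #     final_score = cls_score * score_factor
            # (see the multiplication in `_bbox_post_process`).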
- with_score_factors = True - - cfg = self.test_cfg if cfg is None else cfg - cfg = copy.deepcopy(cfg) - img_shape = img_meta['img_shape'] - nms_pre = cfg.get('nms_pre', -1) - - mlvl_bbox_preds = [] - mlvl_valid_priors = [] - mlvl_scores = [] - mlvl_labels = [] - if with_score_factors: - mlvl_score_factors = [] - else: - mlvl_score_factors = None - for level_idx, (cls_score, bbox_pred, score_factor, priors) in \ - enumerate(zip(cls_score_list, bbox_pred_list, - score_factor_list, mlvl_priors)): - - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - - dim = self.bbox_coder.encode_size - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, dim) - if with_score_factors: - score_factor = score_factor.permute(1, 2, - 0).reshape(-1).sigmoid() - cls_score = cls_score.permute(1, 2, - 0).reshape(-1, self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - scores = cls_score.softmax(-1)[:, :-1] - - # After https://github.com/open-mmlab/mmdetection/pull/6268/, - # this operation keeps fewer bboxes under the same `nms_pre`. - # There is no difference in performance for most models. If you - # find a slight drop in performance, you can set a larger - # `nms_pre` than before. - score_thr = cfg.get('score_thr', 0) - - results = filter_scores_and_topk( - scores, score_thr, nms_pre, - dict(bbox_pred=bbox_pred, priors=priors)) - scores, labels, keep_idxs, filtered_results = results - - bbox_pred = filtered_results['bbox_pred'] - priors = filtered_results['priors'] - - if with_score_factors: - score_factor = score_factor[keep_idxs] - - mlvl_bbox_preds.append(bbox_pred) - mlvl_valid_priors.append(priors) - mlvl_scores.append(scores) - mlvl_labels.append(labels) - - if with_score_factors: - mlvl_score_factors.append(score_factor) - - bbox_pred = torch.cat(mlvl_bbox_preds) - priors = cat_boxes(mlvl_valid_priors) - bboxes = self.bbox_coder.decode(priors, bbox_pred, max_shape=img_shape) - - results = InstanceData() - results.bboxes = bboxes - results.scores = torch.cat(mlvl_scores) - results.labels = torch.cat(mlvl_labels) - if with_score_factors: - results.score_factors = torch.cat(mlvl_score_factors) - - return self._bbox_post_process( - results=results, - cfg=cfg, - rescale=rescale, - with_nms=with_nms, - img_meta=img_meta) - - def _bbox_post_process(self, - results: InstanceData, - cfg: ConfigDict, - rescale: bool = False, - with_nms: bool = True, - img_meta: Optional[dict] = None) -> InstanceData: - """bbox post-processing method. - - The boxes would be rescaled to the original image scale and do - the nms operation. Usually `with_nms` is False is used for aug test. - - Args: - results (:obj:`InstaceData`): Detection instance results, - each item has shape (num_bboxes, ). - cfg (ConfigDict): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default to False. - with_nms (bool): If True, do nms before return boxes. - Default to True. - img_meta (dict, optional): Image meta info. Defaults to None. - - Returns: - :obj:`InstanceData`: Detection results of each image - after the post process. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). 
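        Note that when ``rescale`` is True the boxes are mapped back to the original
        image resolution, roughly ``bboxes_original = bboxes_input / scale_factor``,
        which is what ``scale_boxes`` does with the inverted factors.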
- """ - if rescale: - assert img_meta.get('scale_factor') is not None - scale_factor = [1 / s for s in img_meta['scale_factor']] - results.bboxes = scale_boxes(results.bboxes, scale_factor) - - if hasattr(results, 'score_factors'): - # TODO: Add sqrt operation in order to be consistent with - # the paper. - score_factors = results.pop('score_factors') - results.scores = results.scores * score_factors - - # filter small size bboxes - if cfg.get('min_bbox_size', -1) >= 0: - w, h = get_box_wh(results.bboxes) - valid_mask = (w > cfg.min_bbox_size) & (h > cfg.min_bbox_size) - if not valid_mask.all(): - results = results[valid_mask] - - # TODO: deal with `with_nms` and `nms_cfg=None` in test_cfg - if with_nms and results.bboxes.numel() > 0: - bboxes = get_box_tensor(results.bboxes) - det_bboxes, keep_idxs = batched_nms(bboxes, results.scores, - results.labels, cfg.nms) - results = results[keep_idxs] - # some nms would reweight the score, such as softnms - results.scores = det_bboxes[:, -1] - results = results[:cfg.max_per_img] - - return results - - def aug_test(self, - aug_batch_feats, - aug_batch_img_metas, - rescale=False, - with_ori_nms=False, - **kwargs): - """Test function with test time augmentation. - - Args: - aug_batch_feats (list[tuple[Tensor]]): The outer list - indicates test-time augmentations and inner tuple - indicate the multi-level feats from - FPN, each Tensor should have a shape (B, C, H, W), - aug_batch_img_metas (list[list[dict]]): Meta information - of images under the different test-time augs - (multiscale, flip, etc.). The outer list indicate - the - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - with_ori_nms (bool): Whether execute the nms in original head. - Defaults to False. It will be `True` when the head is - adopted as `rpn_head`. - - Returns: - list(obj:`InstanceData`): Detection results of the - input images. Each item usually contains\ - following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance,) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances,). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). 
- """ - # TODO: remove this for detr and deformdetr - sig_of_get_results = signature(self.get_results) - get_results_args = [ - p.name for p in sig_of_get_results.parameters.values() - ] - get_results_single_sig = signature(self._get_results_single) - get_results_single_sig_args = [ - p.name for p in get_results_single_sig.parameters.values() - ] - assert ('with_nms' in get_results_args) and \ - ('with_nms' in get_results_single_sig_args), \ - f'{self.__class__.__name__}' \ - 'does not support test-time augmentation ' - - num_imgs = len(aug_batch_img_metas[0]) - aug_batch_results = [] - for x, img_metas in zip(aug_batch_feats, aug_batch_img_metas): - outs = self.forward(x) - batch_instance_results = self.get_results( - *outs, - img_metas=img_metas, - cfg=self.test_cfg, - rescale=False, - with_nms=with_ori_nms, - **kwargs) - aug_batch_results.append(batch_instance_results) - - # after merging, bboxes will be rescaled to the original image - batch_results = merge_aug_results(aug_batch_results, - aug_batch_img_metas) - - final_results = [] - for img_id in range(num_imgs): - results = batch_results[img_id] - det_bboxes, keep_idxs = batched_nms(results.bboxes, results.scores, - results.labels, - self.test_cfg.nms) - results = results[keep_idxs] - # some nms operation may reweight the score such as softnms - results.scores = det_bboxes[:, -1] - results = results[:self.test_cfg.max_per_img] - if rescale: - # all results have been mapped to the original scale - # in `merge_aug_results`, so just pass - pass - else: - # map to the first aug image scale - scale_factor = results.bboxes.new_tensor( - aug_batch_img_metas[0][img_id]['scale_factor']) - results.bboxes = \ - results.bboxes * scale_factor - - final_results.append(results) - - return final_results diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/dense_test_mixins.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/dense_test_mixins.py deleted file mode 100644 index a7526d48430d6bc6b82777980d0bef418e80b91c..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/dense_test_mixins.py +++ /dev/null @@ -1,215 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import sys -import warnings -from inspect import signature - -import torch -from mmcv.ops import batched_nms -from mmengine.structures import InstanceData - -from mmdet.structures.bbox import bbox_mapping_back -from ..test_time_augs import merge_aug_proposals - -if sys.version_info >= (3, 7): - from mmdet.utils.contextmanagers import completed - - -class BBoxTestMixin(object): - """Mixin class for testing det bboxes via DenseHead.""" - - def simple_test_bboxes(self, feats, img_metas, rescale=False): - """Test det bboxes without test-time augmentation, can be applied in - DenseHead except for ``RPNHead`` and its variants, e.g., ``GARPNHead``, - etc. - - Args: - feats (tuple[torch.Tensor]): Multi-level features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[obj:`InstanceData`]: Detection results of each - image after the post process. \ - Each item usually contains following keys. \ - - - scores (Tensor): Classification scores, has a shape - (num_instance,) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances,). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). 
- """ - warnings.warn('You are calling `simple_test_bboxes` in ' - '`dense_test_mixins`, but the `dense_test_mixins`' - 'will be deprecated soon. Please use ' - '`simple_test` instead.') - outs = self.forward(feats) - results_list = self.get_results( - *outs, img_metas=img_metas, rescale=rescale) - return results_list - - def aug_test_bboxes(self, feats, img_metas, rescale=False): - """Test det bboxes with test time augmentation, can be applied in - DenseHead except for ``RPNHead`` and its variants, e.g., ``GARPNHead``, - etc. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is ``bboxes`` with shape (n, 5), - where 5 represent (tl_x, tl_y, br_x, br_y, score). - The shape of the second tensor in the tuple is ``labels`` - with shape (n,). The length of list should always be 1. - """ - - warnings.warn('You are calling `aug_test_bboxes` in ' - '`dense_test_mixins`, but the `dense_test_mixins`' - 'will be deprecated soon. Please use ' - '`aug_test` instead.') - # check with_nms argument - gb_sig = signature(self.get_results) - gb_args = [p.name for p in gb_sig.parameters.values()] - gbs_sig = signature(self._get_results_single) - gbs_args = [p.name for p in gbs_sig.parameters.values()] - assert ('with_nms' in gb_args) and ('with_nms' in gbs_args), \ - f'{self.__class__.__name__}' \ - ' does not support test-time augmentation' - - aug_bboxes = [] - aug_scores = [] - aug_labels = [] - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - outs = self.forward(x) - bbox_outputs = self.get_results( - *outs, - img_metas=img_meta, - cfg=self.test_cfg, - rescale=False, - with_nms=False)[0] - aug_bboxes.append(bbox_outputs.bboxes) - aug_scores.append(bbox_outputs.scores) - if len(bbox_outputs) >= 3: - aug_labels.append(bbox_outputs.labels) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = self.merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas) - merged_labels = torch.cat(aug_labels, dim=0) if aug_labels else None - - if merged_bboxes.numel() == 0: - det_bboxes = torch.cat([merged_bboxes, merged_scores[:, None]], -1) - return [ - (det_bboxes, merged_labels), - ] - - det_bboxes, keep_idxs = batched_nms(merged_bboxes, merged_scores, - merged_labels, self.test_cfg.nms) - det_bboxes = det_bboxes[:self.test_cfg.max_per_img] - det_labels = merged_labels[keep_idxs][:self.test_cfg.max_per_img] - - if rescale: - _det_bboxes = det_bboxes - else: - _det_bboxes = det_bboxes.clone() - _det_bboxes[:, :4] *= det_bboxes.new_tensor( - img_metas[0][0]['scale_factor']) - - results = InstanceData() - results.bboxes = _det_bboxes[:, :4] - results.scores = _det_bboxes[:, 4] - results.labels = det_labels - return [results] - - def aug_test_rpn(self, feats, img_metas): - """Test with augmentation for only for ``RPNHead`` and its variants, - e.g., ``GARPNHead``, etc. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): Meta info of each image. 
-
-        Returns:
-            list[Tensor]: Proposals of each image, each item has shape (n, 5),
-                where the 5 values represent (tl_x, tl_y, br_x, br_y, score).
-        """
-        samples_per_gpu = len(img_metas[0])
-        aug_proposals = [[] for _ in range(samples_per_gpu)]
-        for x, img_meta in zip(feats, img_metas):
-            results_list = self.simple_test_rpn(x, img_meta)
-            for i, results in enumerate(results_list):
-                proposals = torch.cat(
-                    [results.bboxes, results.scores[:, None]], dim=-1)
-                aug_proposals[i].append(proposals)
-        # reorganize the order of 'img_metas' to match the dimensions
-        # of 'aug_proposals'
-        aug_img_metas = []
-        for i in range(samples_per_gpu):
-            aug_img_meta = []
-            for j in range(len(img_metas)):
-                aug_img_meta.append(img_metas[j][i])
-            aug_img_metas.append(aug_img_meta)
-        # after merging, proposals will be rescaled to the original image size
-
-        merged_proposals = []
-        for proposals, aug_img_meta in zip(aug_proposals, aug_img_metas):
-            merged_proposal = merge_aug_proposals(proposals, aug_img_meta,
-                                                  self.test_cfg)
-            results = InstanceData()
-            results.bboxes = merged_proposal[:, :4]
-            results.scores = merged_proposal[:, 4]
-            merged_proposals.append(results)
-        return merged_proposals
-
-    if sys.version_info >= (3, 7):
-
-        async def async_simple_test_rpn(self, x, img_metas):
-            sleep_interval = self.test_cfg.pop('async_sleep_interval', 0.025)
-            async with completed(
-                    __name__, 'rpn_head_forward',
-                    sleep_interval=sleep_interval):
-                rpn_outs = self(x)
-
-            proposal_list = self.get_results(*rpn_outs, img_metas=img_metas)
-            return proposal_list
-
-    def merge_aug_bboxes(self, aug_bboxes, aug_scores, img_metas):
-        """Merge augmented detection bboxes and scores.
-
-        Args:
-            aug_bboxes (list[Tensor]): shape (n, 4*#class)
-            aug_scores (list[Tensor] or None): shape (n, #class)
-            img_metas (list[list[dict]]): meta information of each
-                augmented image.
-
-        Returns:
-            tuple[Tensor]: ``bboxes`` with shape (n, 4), where the 4 values
-                represent (tl_x, tl_y, br_x, br_y), and ``scores`` with
-                shape (n,).
- """ - recovered_bboxes = [] - for bboxes, img_info in zip(aug_bboxes, img_metas): - img_shape = img_info[0]['img_shape'] - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - flip_direction = img_info[0]['flip_direction'] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip, - flip_direction) - recovered_bboxes.append(bboxes) - bboxes = torch.cat(recovered_bboxes, dim=0) - if aug_scores is None: - return bboxes - else: - scores = torch.cat(aug_scores, dim=0) - return bboxes, scores diff --git a/spaces/Laughify/Among_Us_Logic_AI_Generator/README.md b/spaces/Laughify/Among_Us_Logic_AI_Generator/README.md deleted file mode 100644 index 32e5395e0688cf87bde550f6f1b8b54e9a928c73..0000000000000000000000000000000000000000 --- a/spaces/Laughify/Among_Us_Logic_AI_Generator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Among Us Logic AI Generator -emoji: 👨‍🚀 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/layers_33966KB.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/layers_33966KB.py deleted file mode 100644 index 9b127bc6427f5c60c8cf85603a3d8a093c3501c4..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/layers_33966KB.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from . import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - 
def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv6 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv7 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - feat6 = self.conv6(x) - feat7 = self.conv7(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/Lbin123/Lbingo/src/lib/hooks/chat-history.ts b/spaces/Lbin123/Lbingo/src/lib/hooks/chat-history.ts deleted file mode 100644 index c6fbf3fecfa86fe553f56acc8253236b8f22a775..0000000000000000000000000000000000000000 --- a/spaces/Lbin123/Lbingo/src/lib/hooks/chat-history.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { zip } from 'lodash-es' -import { ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { Storage } from '../storage' - -/** - * conversations:$botId => Conversation[] - * conversation:$botId:$cid:messages => ChatMessageModel[] - */ - -interface Conversation { - id: string - createdAt: number -} - -type ConversationWithMessages = Conversation & { messages: ChatMessageModel[] } - -async function loadHistoryConversations(botId: BotId): Promise { - const key = `conversations:${botId}` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -async function deleteHistoryConversation(botId: BotId, cid: string) { - const conversations = await loadHistoryConversations(botId) - const newConversations = conversations.filter((c) => c.id !== cid) - await Storage.set({ [`conversations:${botId}`]: newConversations }) -} - -async function loadConversationMessages(botId: BotId, cid: string): Promise { - const key = `conversation:${botId}:${cid}:messages` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -export async function setConversationMessages(botId: BotId, cid: string, messages: ChatMessageModel[]) { - const conversations = await loadHistoryConversations(botId) - if (!conversations.some((c) => c.id === cid)) { - conversations.unshift({ id: cid, createdAt: Date.now() }) - await Storage.set({ [`conversations:${botId}`]: conversations }) - } - const key = `conversation:${botId}:${cid}:messages` - await Storage.set({ [key]: messages }) -} - -export async function loadHistoryMessages(botId: BotId): Promise { - const conversations = await loadHistoryConversations(botId) - const messagesList = await Promise.all(conversations.map((c) => loadConversationMessages(botId, c.id))) - return zip(conversations, messagesList).map(([c, messages]) => ({ - id: c!.id, - createdAt: c!.createdAt, - messages: messages!, - })) -} - -export async 
function deleteHistoryMessage(botId: BotId, conversationId: string, messageId: string) { - const messages = await loadConversationMessages(botId, conversationId) - const newMessages = messages.filter((m) => m.id !== messageId) - await setConversationMessages(botId, conversationId, newMessages) - if (!newMessages.length) { - await deleteHistoryConversation(botId, conversationId) - } -} diff --git a/spaces/Maheshiscoding/MAHESH-AI-HELPER/app.py b/spaces/Maheshiscoding/MAHESH-AI-HELPER/app.py deleted file mode 100644 index 8479d0b652a2d74c4a1b76451a23771b51009cce..0000000000000000000000000000000000000000 --- a/spaces/Maheshiscoding/MAHESH-AI-HELPER/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import openai -import gradio - -openai.api_key = "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" - -messages = [{"role": "system", "content": "You are Homework Helper who helps each and every student. You were built by Mahesh - a 13 year old child, studying in india, Haryana. his school name is St. Theresa Convent school which is in karnal.PLS ANSWER LOGICALLY AND GIVE ONLY WHAT THE USER ASKS FOR. TRY TO ANSWER AS YOU ARE A TEACHER EXPLAINING TO A STUDENT "}] - -def CustomChatGPT(user_input): - messages.append({"role": "user", "content": user_input}) - response = openai.ChatCompletion.create( - model = "gpt-3.5-turbo", - messages = messages - ) - ChatGPT_reply = response["choices"][0]["message"]["content"] - messages.append({"role": "assistant", "content": ChatGPT_reply}) - return ChatGPT_reply - -demo = gradio.Interface(fn=CustomChatGPT, inputs = "text", outputs = "text", title = "Mahesh' Teacher Assistance AI", theme= "dark") - -demo.launch() \ No newline at end of file diff --git a/spaces/Marshalls/testmtd/models/moglow/__init__.py b/spaces/Marshalls/testmtd/models/moglow/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/README.md b/spaces/MashiroSA/sovits-emu-voice-transform/README.md deleted file mode 100644 index 5fd8c2cd564b6ad0d939d5ef1d508e824bd990e8..0000000000000000000000000000000000000000 --- a/spaces/MashiroSA/sovits-emu-voice-transform/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Sovits Emu Voice Transform -emoji: 🌸 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.32.0 -python_version: 3.9 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mehrdadbn/Movie-recommender-system/app.py b/spaces/Mehrdadbn/Movie-recommender-system/app.py deleted file mode 100644 index 072096197c3d591810e462a1759a38c086583a52..0000000000000000000000000000000000000000 --- a/spaces/Mehrdadbn/Movie-recommender-system/app.py +++ /dev/null @@ -1,561 +0,0 @@ -import base64 -from io import BytesIO - -import numpy as np -from PIL import Image -from rich.theme import Theme -from sklearn.metrics import precision_score, recall_score, f1_score -import pandas as pd -import ast -import math -import requests -from sklearn.feature_extraction.text import CountVectorizer -import gc -from sklearn.metrics import pairwise_distances -import streamlit as st -# import tensorflow as tf -# from tensorflow.keras.models import Model -# from tensorflow.keras.layers import Input, Dense, Dropout, Concatenate -# from tensorflow.keras.optimizers import Adam -from sklearn.preprocessing import StandardScaler -# import matplotlib.pyplot as plt - -# 
******************************************************* - -movies_meta = pd.read_csv("movies_metadata.csv") -check = movies_meta['release_date'] -movies_meta = movies_meta[:10000] -copy_movies_meta = movies_meta -credits = pd.read_csv('credits.csv') -copy_credits = pd.read_csv('credits.csv') -keywords = pd.read_csv('keywords.csv') -keywords = keywords[:20000] - -# ******************************************************* - -num_missing = movies_meta[movies_meta['production_companies'].isnull()] -clean_movies_meta = movies_meta.dropna(subset=['production_companies'], inplace=True) - -movies_meta = movies_meta[movies_meta.id != '1997-08-20'] -movies_meta = movies_meta[movies_meta.id != '2012-09-29'] -movies_meta = movies_meta[movies_meta.id != '2014-01-01'] -movies_meta = movies_meta.astype({'id': 'int64'}) - -movies_meta = movies_meta.merge(keywords, on='id') -movies_meta = movies_meta.merge(credits, on='id') - - -def btc_function(data): - if type(data) == str: - return ast.literal_eval(data)['name'].replace(" ", "") - return data - - -# https://www.kaggle.com/hadasik/movies-analysis-visualization-newbie -def get_values(data_str): - if isinstance(data_str, float): - pass - else: - values = [] - data_str = ast.literal_eval(data_str) - if isinstance(data_str, list): - for k_v in data_str: - values.append(k_v['name'].replace(" ", "")) - return str(values)[1:-1] - else: - return None - - -movies_meta['btc_name'] = movies_meta.belongs_to_collection.apply(btc_function) -movies_meta[ - ['genres', 'production_companies', 'production_countries', 'spoken_languages', 'keywords', 'cast', 'crew']] = \ - movies_meta[['genres', 'production_companies', 'production_countries', 'spoken_languages', 'keywords', 'cast', - 'crew']].applymap(get_values) -movies_meta['is_homepage'] = movies_meta['homepage'].isnull() - -movies_meta['status'] = movies_meta['status'].fillna('') -movies_meta['original_language'] = movies_meta['original_language'].fillna('') -movies_meta['btc_name'] = movies_meta['btc_name'].fillna('') - -movies_meta.drop_duplicates(inplace=True) - - -def vector_values(df, columns, min_df_value): - c_vector = CountVectorizer(min_df=min_df_value) - df_1 = pd.DataFrame(index=df.index) - for col in columns: - print(col) - df_1 = df_1.join( - pd.DataFrame(c_vector.fit_transform(df[col]).toarray(), columns=c_vector.get_feature_names_out(), - index=df.index).add_prefix(col + '_')) - return df_1 - - -movies_meta_addon_1 = vector_values(movies_meta, - columns=['status', 'original_language', 'genres', 'production_companies', - 'production_countries', 'spoken_languages', 'keywords', 'cast', 'crew'], - min_df_value=20) -movies_meta_addon_2 = vector_values(movies_meta, columns=['btc_name'], min_df_value=2) - -col = ['belongs_to_collection', 'genres', 'homepage', 'id', 'imdb_id', 'overview', 'poster_path', 'status', - 'original_language', - 'production_companies', 'production_countries', 'spoken_languages', 'keywords', 'cast', 'crew', 'tagline', - 'adult'] -movies_meta.drop(columns=col, inplace=True) -col = ['video', 'is_homepage'] -for c in col: - movies_meta[c] = movies_meta[c].astype(bool) - movies_meta[c] = movies_meta[c].astype(int) - - -def get_year(date): - return str(date).split('-')[0] - - -movies_meta['popularity'] = movies_meta['popularity'].astype(float) -movies_meta['budget'] = movies_meta['budget'].astype(float) -movies_meta['vote_average_group'] = pd.qcut(movies_meta['vote_average'], q=10, precision=2, duplicates='drop') -movies_meta['popularity_group'] = pd.qcut(movies_meta['popularity'], q=10, 
precision=2, duplicates='drop') -movies_meta['vote_average_group'] = pd.qcut(movies_meta['vote_average'], q=10, precision=2, duplicates='drop') -movies_meta['runtime_group'] = pd.qcut(movies_meta['runtime'], q=10, precision=2, duplicates='drop') -movies_meta['budget_group'] = pd.qcut(movies_meta['budget'], q=10, precision=2, duplicates='drop') -movies_meta['revenue_group'] = pd.qcut(movies_meta['revenue'], q=10, precision=2, duplicates='drop') -movies_meta['vote_count_group'] = pd.qcut(movies_meta['vote_count'], q=10, precision=2, duplicates='drop') -movies_meta['release_year'] = movies_meta['release_date'].apply(get_year) - -movies_meta['release_year'] = movies_meta['release_year'].fillna('') -movies_meta['release_year'] = movies_meta['release_year'].astype(float) -movies_meta['release_year_group'] = pd.qcut(movies_meta['release_year'], q=10, precision=2, duplicates='drop') -movies_meta['title_new'] = movies_meta.apply(lambda x: str(x['title']) + ' (' + str(x['release_date']) + ')', axis=1) - -movies_meta_addon_3 = pd.get_dummies(movies_meta[ - ['vote_average_group', 'popularity_group', 'runtime_group', 'budget_group', - 'revenue_group', 'vote_count_group', 'release_year_group']]) -movies_meta_train = pd.concat( - [movies_meta_addon_1, movies_meta_addon_2, movies_meta_addon_3, movies_meta[['video', 'is_homepage']]], axis=1) - -movies_meta_train.index = movies_meta['title_new'] -gc.collect() - - -def get_similar_movies(movie_title, num_rec=10): - try: - sample_1 = 1 - pairwise_distances([movies_meta_train.loc[movie_title].values], movies_meta_train.values, - metric='cosine') - sample_1 = pd.DataFrame(sample_1.T, index=movies_meta_train.index) - return sample_1.sort_values(by=0, ascending=False).head(num_rec).index - except ValueError as e: - print(e) - -# ******************* Evaluation Part ****************** -def evaluate_recommendations(test_set, get_similar_movies_func, num_rec=10): - true_positives = 0 - false_positives = 0 - false_negatives = 0 - - for user, items in test_set.items(): - recommended_items = get_similar_movies_func(user, num_rec) - recommended_items = set(recommended_items) - relevant_items = set(items) - - true_positives += len(recommended_items.intersection(relevant_items)) - false_positives += len(recommended_items - relevant_items) - false_negatives += len(relevant_items - recommended_items) - - precision = true_positives / (true_positives + false_positives) - recall = true_positives / (true_positives + false_negatives) - f1 = 2 * (precision * recall) / (precision + recall) - - return precision, recall, f1 - -test_set = { - 'Toy Story (1995-10-30)': ['Toy Story 2 (1999-10-30)', 'Chicken Run (2000-06-21)'], - 'The Lion King (1994-06-23)': ['Toy Story (1995-10-30)', 'The Little Mermaid (1989-11-17)'], - 'Papillon (1973-12-13)': ['The Godfather (1972-03-14)'], - 'The Godfather (1972-03-14)': ['Papillon (1973-12-13)'] -} - -# Evaluate the recommendations using the test set -precision, recall, f1 = evaluate_recommendations(test_set, get_similar_movies, num_rec=10) - -print('Precision:', precision) -print('Recall:', recall) -print('F1 score:', f1) - -# ************ Using deep Learning Methods ************* -# def build_model(input_dim): -# inputs = Input(shape=(input_dim,)) -# x = Dense(256, activation='relu')(inputs) -# x = Dropout(0.2)(x) -# x = Dense(128, activation='relu')(x) -# x = Dropout(0.2)(x) -# x = Dense(64, activation='relu')(x) -# embeddings = Dense(32, activation='relu')(x) -# -# model = Model(inputs=inputs, outputs=embeddings) -# 
model.compile(optimizer=Adam(learning_rate=0.001), loss='mse') -# return model - -# Scale the feature data -scaler = StandardScaler() -X = scaler.fit_transform(movies_meta_train.values) - -# Create and train the model -# input_dim = X.shape[1] -# model = build_model(input_dim) -# model.fit(X, X, batch_size=64, epochs=30, verbose=1) - - -# def nn_get_similar_movies(movie_title, num_rec=10): -# try: -# movie_embedding = model.predict(scaler.transform([movies_meta_train.loc[movie_title].values])) -# all_embeddings = model.predict(X) -# similarities = tf.keras.losses.cosine_similarity(movie_embedding, all_embeddings) -# -# top_indices = np.argsort(similarities.numpy())[:num_rec] -# return movies_meta_train.index[top_indices] -# except ValueError as e: -# print(e) - - - -# ******************* Front Variables ****************** -input1 = "" -input2 = "" -input3 = "" - -# movie_name1 = "Undisputed III : Redemption" -# movie_name2 = "Finding Nemo" -# movie_name3 = "Thor" -movie_name1 = "" -movie_name2 = "" -movie_name3 = "" - -movie_year1 = 0 -movie_year2 = 0 -movie_year3 = 0 - -input1_count = 0 -input2_count = 0 -input3_count = 0 - -recomendation_list = [] - -counter1 = 0 -counter2 = 0 -counter3 = 0 - - -# ******************* Front Part ****************** -# @st.cache_data() -def get_base64_of_bin_file(bin_file): - with open(bin_file, 'rb') as f: - data = f.read() - return base64.b64encode(data).decode() - - -def add_bg_from_url(): - st.markdown( - f""" - - """, - unsafe_allow_html=True - ) - - -add_bg_from_url() - - -# st.set_page_config(page_title="Movie Recommender System", page_icon="🎥", theme="light") - -heading_style = """ - -""" -st.markdown(heading_style, unsafe_allow_html=True) -st.markdown("

    Movie Recommender System

    ", unsafe_allow_html=True) - -select_list = [] - -for s in movies_meta['title_new'].values: - s = s.replace('(', "") - s = s.replace(')', "") - s = s.split('-')[0] - inp = s[0:-4] - inp = inp + " - " - inp = inp + s[-4:] - select_list.append(inp) - - - -st.write(""" - **description :** This movie recommendation app is a helpful tool for users who are -looking for new movies to watch based on their previous viewing preferences. -It offers a personalized and convenient way for users to discover new movies that -they are likely to enjoy. -*** -""") - -st.write(""" - ## Enter 3 movies that you have watched and liked with their rates. -""") - -movie_name1 = movie_name1 + (st.selectbox('Enter your first movie:',select_list))[0:-8] -movie_rate1 = st.slider( - f'Rate {movie_name1}!', - 0, 10, 1, - key='movie_rate1') -st.write(""" -*** -""") -movie_name2 = movie_name2 + (st.selectbox('Enter your second movie:',select_list))[0:-8] -movie_rate2 = st.slider( - f'Rate {movie_name2}!', - 0, 10, 1, - key='movie_rate2') -st.write(""" -*** -""") -movie_name3 = movie_name3 + (st.selectbox('Enter your third movie:',select_list))[0:-8] -movie_rate3 = st.slider( - f'Rate {movie_name3}!', - 0, 10, 1, - key = 'movie_rate3' -) - -button_style = """ - -""" -st.markdown(button_style, unsafe_allow_html=True) -if st.button("Suggest me!", key="my-button", help="Submit!"): - - filtered_row3 = movies_meta.loc[ - (movies_meta['original_title'] == movie_name1)] - input1 = filtered_row3.iloc[0]['title_new'] - - filtered_row2 = movies_meta.loc[ - (movies_meta['original_title'] == movie_name2)] - input2 = filtered_row2.iloc[0]['title_new'] - - filtered_row3 = movies_meta.loc[ - (movies_meta['original_title'] == movie_name3)] - input3 = filtered_row3.iloc[0]['title_new'] - - recommend1 = get_similar_movies(input1)[1:] - recommend2 = get_similar_movies(input2)[1:] - recommend3 = get_similar_movies(input3)[1:] - - - # nn_recommend1 = nn_get_similar_movies(input1) - # nn_recommend2 = nn_get_similar_movies(input3) - # nn_recommend3 = nn_get_similar_movies(input3) - - - input1_count = math.ceil((movie_rate1 * 10) / (movie_rate1 + movie_rate2 + movie_rate3)) - input2_count = math.ceil((movie_rate2 * 10) / (movie_rate1 + movie_rate2 + movie_rate3)) - input3_count = math.ceil((movie_rate3 * 10) / (movie_rate1 + movie_rate2 + movie_rate3)) - - while True: - if len(recomendation_list) >= 10: - break - if counter1 <= input1_count: - recomendation_list.append([recommend1[counter1], movie_name1]) - counter1 += 1 - if len(recomendation_list) >= 10: - break - if counter2 <= input2_count: - recomendation_list.append([recommend2[counter2], movie_name2]) - counter2 += 1 - if len(recomendation_list) >= 10: - break - if counter3 <= input3_count: - recomendation_list.append([recommend3[counter3], movie_name3]) - counter3 += 1 - - for i in range(10): - movie, based = recomendation_list[i] - - movie_df = movies_meta.loc[ - (movies_meta['title_new'] == movie)] - - original_title = movie_df['original_title'].values[0] - release_year = int(movie_df['release_year'].values[0]) - - movie_metadata_df = copy_movies_meta.loc[ - (copy_movies_meta['original_title'] == original_title)] - - st.write(f"**{i + 1}. 
{original_title} ({release_year})** based on {based} \n\n") - - p_id = movie_metadata_df['id'].values[0] - response = requests.get( - 'https://api.themoviedb.org/3/movie/{}?api_key=1c8d419f76f8f9870d3e91eb896c2e54'.format(str(p_id))) - data = response.json() - # poster_path = data['poster_path'] - # - # if data['poster_path'] != "None": - # full_path = "https://image.tmdb.org/t/p/w500/" + data['poster_path'] - # response = requests.get(full_path) - # image = Image.open(BytesIO(response.content)) - # resized_image = image.resize((200, 200)) # Resize the image to 50x50 pixels - # - # # Save the resized image to a new file - # resized_image_path = "resized_image.jpg" - # resized_image.save(resized_image_path) - # - # # Display the resized image - # st.image(resized_image_path) - if 'poster_path' in data and data['poster_path'] is not None: - poster_path = data['poster_path'] - full_path = "https://image.tmdb.org/t/p/w500/" + poster_path - response = requests.get(full_path) - image = Image.open(BytesIO(response.content)) - resized_image = image.resize((200, 200)) # Resize the image to 200x200 pixels - - # Save the resized image to a new file - resized_image_path = "resized_image.jpg" - resized_image.save(resized_image_path) - - # Display the resized image - st.image(resized_image_path) - else: - st.write("No poster image available for this movie.") - - cast_df = copy_credits.loc[ - (copy_credits['id'] == int(p_id))] - - with st.expander(f"See more about {original_title}"): - runtime = int(movie_metadata_df['runtime'].values[0]) - hour = int(runtime / 60) - minute = runtime - (hour * 60) - imdb_rate = movie_metadata_df['vote_average'].values[0] - popularity = round(movie_metadata_df['popularity'].values[0], 2) - movie_overview = movie_metadata_df['overview'].values[0] - movie_tagline = movie_metadata_df['tagline'].values[0] - vote_count = int(movie_metadata_df['vote_count'].values[0]) - actors_list = cast_df['cast'].values[0] - actors_list = actors_list.replace("'", '') - actors_list = actors_list.split('}, {')[0:5] - actors = [] - characters = [] - for actor in actors_list: - actor = actor[3:].split(', ') - for i in actor: - if i[:5] == 'name:': - actors.append(i[5:]) - break - if i[:10] == 'character:': - characters.append(i[10:]) - - actors_line = f"**Stars :** {actors[0]} (as {characters[0]}) . " \ - f" {actors[1]} (as {characters[1]}) . " \ - f" {actors[2]} (as {characters[2]}) . " \ - f" {actors[3]} (as {characters[3]}) . 
" \ - f" {actors[4]} (as {characters[4]}) " - - crew_list = cast_df['crew'].values[0] - crew_list = crew_list.replace("'", '') - index = crew_list.find("job: Director, name:") - director = crew_list[index + 20:].split(',')[0] - - genres_list = movie_metadata_df['genres'].values[0] - genres_list = genres_list[2:-2] - genres_list = genres_list.replace("'", '') - genres_list = genres_list.split('}, {') - movie_genres = "" - - # Extract the genre names from each dictionary - genre_names = [] - for genre in genres_list: - genre = genre.split(',') - if genre[1][7:] == 'Animation': - emoji = " 🎨" - elif genre[1][7:] == 'Comedy': - emoji = " 🤡" - elif genre[1][7:] == 'Family': - emoji = " 👨‍👩‍👧‍👦" - elif genre[1][7:] == 'Crime': - emoji = " 🕵️‍" - elif genre[1][7:] == 'Adventure': - emoji = " 🛣️‍" - elif genre[1][7:] == 'Fantasy': - emoji = " 🧚" - elif genre[1][7:] == 'Romance': - emoji = " 💏" - elif genre[1][7:] == 'Drama': - emoji = " 🎭" - elif genre[1][7:] == "Action": - emoji = " 🔫" - elif genre[1][7:] == 'Thriller': - emoji = " 🍿" - elif genre[1][7:] == 'Horror': - emoji = " 😱" - elif genre[1][7:] == 'History': - emoji = " 🏛" - elif genre[1][7:] == 'Science Fiction': - emoji = " 🤖👽" - elif genre[1][7:] == 'Mystery': - emoji = " ❓" - elif genre[1][7:] == 'War': - emoji = " ⚔" - elif genre[1][7:] == 'Western': - emoji = " 🤠" - elif genre[1][7:] == 'Foreign': - emoji = " 🗺" - elif genre[1][7:] == 'Music': - emoji = " 🎶" - elif genre[1][7:] == 'Documentary': - emoji = " 🤔" - - movie_genres = movie_genres + emoji - movie_genres = movie_genres + genre[1][7:] - movie_genres = movie_genres + " " - genre_names.append(genre[1][7:]) - - second_line = f" " \ - f" " \ - f" IMDb RATING POPULARITY\n" - third_line = f" {release_year} {hour}h {minute}m " \ - f" " \ - f" ⭐{imdb_rate}/10 " \ - f"📊 {popularity} ({vote_count})\n \n \n" - movie_genres = movie_genres.replace(' ', ' ') - second_line = second_line.replace(' ', ' ') - actors_line = actors_line.replace(' ', ' ') - third_line = third_line.replace(' ', ' ') - st.header(f"{original_title} \n") - st.write(second_line, unsafe_allow_html=True) - st.write(third_line, unsafe_allow_html=True) - st.write(movie_genres, unsafe_allow_html=True) - st.write(""" - *** - """) - st.markdown(f" ##### {movie_tagline}") - st.write(movie_overview) - st.write(""" - *** - """) - st.markdown(f"**Director(s) :** {director}") - st.markdown(actors_line, unsafe_allow_html=True) - diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/default_constructor.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/default_constructor.py deleted file mode 100644 index 3f1f5b44168768dfda3947393a63a6cf9cf50b41..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/default_constructor.py +++ /dev/null @@ -1,44 +0,0 @@ -from .builder import RUNNER_BUILDERS, RUNNERS - - -@RUNNER_BUILDERS.register_module() -class DefaultRunnerConstructor: - """Default constructor for runners. - - Custom existing `Runner` like `EpocBasedRunner` though `RunnerConstructor`. - For example, We can inject some new properties and functions for `Runner`. - - Example: - >>> from annotator.uniformer.mmcv.runner import RUNNER_BUILDERS, build_runner - >>> # Define a new RunnerReconstructor - >>> @RUNNER_BUILDERS.register_module() - >>> class MyRunnerConstructor: - ... def __init__(self, runner_cfg, default_args=None): - ... if not isinstance(runner_cfg, dict): - ... raise TypeError('runner_cfg should be a dict', - ... 
f'but got {type(runner_cfg)}') - ... self.runner_cfg = runner_cfg - ... self.default_args = default_args - ... - ... def __call__(self): - ... runner = RUNNERS.build(self.runner_cfg, - ... default_args=self.default_args) - ... # Add new properties for existing runner - ... runner.my_name = 'my_runner' - ... runner.my_function = lambda self: print(self.my_name) - ... ... - >>> # build your runner - >>> runner_cfg = dict(type='EpochBasedRunner', max_epochs=40, - ... constructor='MyRunnerConstructor') - >>> runner = build_runner(runner_cfg) - """ - - def __init__(self, runner_cfg, default_args=None): - if not isinstance(runner_cfg, dict): - raise TypeError('runner_cfg should be a dict', - f'but got {type(runner_cfg)}') - self.runner_cfg = runner_cfg - self.default_args = default_args - - def __call__(self): - return RUNNERS.build(self.runner_cfg, default_args=self.default_args) diff --git a/spaces/MesonWarrior/vk/README.md b/spaces/MesonWarrior/vk/README.md deleted file mode 100644 index c510d1fbe635395da96cb066da461d3acf78f6af..0000000000000000000000000000000000000000 --- a/spaces/MesonWarrior/vk/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: GPT2 VK (Russian) -emoji: 🔥 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- \ No newline at end of file diff --git a/spaces/MilesCranmer/PySR/README.md b/spaces/MilesCranmer/PySR/README.md deleted file mode 100644 index affe03b50736e94da02d433a69e4d298bdd03e5e..0000000000000000000000000000000000000000 --- a/spaces/MilesCranmer/PySR/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: PySR -emoji: 🌍 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.0.15 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/retrieval/README.md b/spaces/NAACL2022/CLIP-Caption-Reward/retrieval/README.md deleted file mode 100644 index 2f5cce9ad9b93234fa5ac7a0e99f05868d883fd0..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/CLIP-Caption-Reward/retrieval/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# Finetuning CLIP reward model - -```bash -python train_pl.py --cfg clip_negative_text --id clip_negative_text -``` \ No newline at end of file diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/tfcode/vision_baseline_lstm.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/tfcode/vision_baseline_lstm.py deleted file mode 100644 index ccf3ab23b06b71ed2a6d300b9a7d2a67a396c52e..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/tfcode/vision_baseline_lstm.py +++ /dev/null @@ -1,533 +0,0 @@ -# Copyright 2016 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== - -import numpy as np - - -import tensorflow as tf - -from tensorflow.contrib import slim - -import logging -from tensorflow.python.platform import app -from tensorflow.python.platform import flags -from src import utils -import src.file_utils as fu -import tfcode.nav_utils as nu -from tfcode import tf_utils - -setup_train_step_kwargs = nu.default_train_step_kwargs -compute_losses_multi_or = nu.compute_losses_multi_or -get_repr_from_image = nu.get_repr_from_image - -_save_d_at_t = nu.save_d_at_t -_save_all = nu.save_all -_eval_ap = nu.eval_ap -_eval_dist = nu.eval_dist -_plot_trajectories = nu.plot_trajectories - -def lstm_online(cell_fn, num_steps, inputs, state, varscope): - # inputs is B x num_steps x C, C channels. - # state is 2 tuple with B x 1 x C1, B x 1 x C2 - # Output state is always B x 1 x C - inputs = tf.unstack(inputs, axis=1, num=num_steps) - state = tf.unstack(state, axis=1, num=1)[0] - outputs = [] - - if num_steps > 1: - varscope.reuse_variables() - - for s in range(num_steps): - output, state = cell_fn(inputs[s], state) - outputs.append(output) - outputs = tf.stack(outputs, axis=1) - state = tf.stack([state], axis=1) - return outputs, state - -def _inputs(problem, lstm_states, lstm_state_dims): - # Set up inputs. - with tf.name_scope('inputs'): - n_views = problem.n_views - - inputs = [] - inputs.append(('orig_maps', tf.float32, - (problem.batch_size, 1, None, None, 1))) - inputs.append(('goal_loc', tf.float32, - (problem.batch_size, problem.num_goals, 2))) - - # For initing LSTM. - inputs.append(('rel_goal_loc_at_start', tf.float32, - (problem.batch_size, problem.num_goals, - problem.rel_goal_loc_dim))) - common_input_data, _ = tf_utils.setup_inputs(inputs) - - inputs = [] - inputs.append(('imgs', tf.float32, (problem.batch_size, None, n_views, - problem.img_height, problem.img_width, - problem.img_channels))) - # Goal location as a tuple of delta location and delta theta. 
- inputs.append(('rel_goal_loc', tf.float32, (problem.batch_size, None, - problem.rel_goal_loc_dim))) - if problem.outputs.visit_count: - inputs.append(('visit_count', tf.int32, (problem.batch_size, None, 1))) - inputs.append(('last_visit', tf.int32, (problem.batch_size, None, 1))) - - for i, (state, dim) in enumerate(zip(lstm_states, lstm_state_dims)): - inputs.append((state, tf.float32, (problem.batch_size, 1, dim))) - - if problem.outputs.egomotion: - inputs.append(('incremental_locs', tf.float32, - (problem.batch_size, None, 2))) - inputs.append(('incremental_thetas', tf.float32, - (problem.batch_size, None, 1))) - - inputs.append(('step_number', tf.int32, (1, None, 1))) - inputs.append(('node_ids', tf.int32, (problem.batch_size, None, - problem.node_ids_dim))) - inputs.append(('perturbs', tf.float32, (problem.batch_size, None, - problem.perturbs_dim))) - - # For plotting result plots - inputs.append(('loc_on_map', tf.float32, (problem.batch_size, None, 2))) - inputs.append(('gt_dist_to_goal', tf.float32, (problem.batch_size, None, 1))) - step_input_data, _ = tf_utils.setup_inputs(inputs) - - inputs = [] - inputs.append(('executed_actions', tf.int32, (problem.batch_size, None))) - inputs.append(('rewards', tf.float32, (problem.batch_size, None))) - inputs.append(('action_sample_wts', tf.float32, (problem.batch_size, None))) - inputs.append(('action', tf.int32, (problem.batch_size, None, - problem.num_actions))) - train_data, _ = tf_utils.setup_inputs(inputs) - train_data.update(step_input_data) - train_data.update(common_input_data) - return common_input_data, step_input_data, train_data - - -def _add_summaries(m, summary_mode, arop_full_summary_iters): - summarize_ops = [m.lr_op, m.global_step_op, m.sample_gt_prob_op, - m.total_loss_op, m.data_loss_op, m.reg_loss_op] + m.acc_ops - summarize_names = ['lr', 'global_step', 'sample_gt_prob_op', 'total_loss', - 'data_loss', 'reg_loss'] + \ - ['acc_{:d}'.format(i) for i in range(len(m.acc_ops))] - to_aggregate = [0, 0, 0, 1, 1, 1] + [1]*len(m.acc_ops) - - scope_name = 'summary' - with tf.name_scope(scope_name): - s_ops = nu.add_default_summaries(summary_mode, arop_full_summary_iters, - summarize_ops, summarize_names, - to_aggregate, m.action_prob_op, - m.input_tensors, scope_name=scope_name) - m.summary_ops = {summary_mode: s_ops} - -def visit_count_fc(visit_count, last_visit, embed_neurons, wt_decay, fc_dropout): - with tf.variable_scope('embed_visit_count'): - visit_count = tf.reshape(visit_count, shape=[-1]) - last_visit = tf.reshape(last_visit, shape=[-1]) - - visit_count = tf.clip_by_value(visit_count, clip_value_min=-1, - clip_value_max=15) - last_visit = tf.clip_by_value(last_visit, clip_value_min=-1, - clip_value_max=15) - visit_count = tf.one_hot(visit_count, depth=16, axis=1, dtype=tf.float32, - on_value=10., off_value=0.) - last_visit = tf.one_hot(last_visit, depth=16, axis=1, dtype=tf.float32, - on_value=10., off_value=0.) 
- f = tf.concat([visit_count, last_visit], 1) - x, _ = tf_utils.fc_network( - f, neurons=embed_neurons, wt_decay=wt_decay, name='visit_count_embed', - offset=0, batch_norm_param=None, dropout_ratio=fc_dropout, - is_training=is_training) - return x - -def lstm_setup(name, x, batch_size, is_single_step, lstm_dim, lstm_out, - num_steps, state_input_op): - # returns state_name, state_init_op, updated_state_op, out_op - with tf.name_scope('reshape_'+name): - sh = x.get_shape().as_list() - x = tf.reshape(x, shape=[batch_size, -1, sh[-1]]) - - with tf.variable_scope(name) as varscope: - cell = tf.contrib.rnn.LSTMCell( - num_units=lstm_dim, forget_bias=1.0, state_is_tuple=False, - num_proj=lstm_out, use_peepholes=True, - initializer=tf.random_uniform_initializer(-0.01, 0.01, seed=0), - cell_clip=None, proj_clip=None) - - sh = [batch_size, 1, lstm_dim+lstm_out] - state_init_op = tf.constant(0., dtype=tf.float32, shape=sh) - - fn = lambda ns: lstm_online(cell, ns, x, state_input_op, varscope) - out_op, updated_state_op = tf.cond(is_single_step, lambda: fn(1), lambda: - fn(num_steps)) - - return name, state_init_op, updated_state_op, out_op - -def combine_setup(name, combine_type, embed_img, embed_goal, num_img_neuorons=None, - num_goal_neurons=None): - with tf.name_scope(name + '_' + combine_type): - if combine_type == 'add': - # Simple concat features from goal and image - out = embed_img + embed_goal - - elif combine_type == 'multiply': - # Multiply things together - re_embed_img = tf.reshape( - embed_img, shape=[-1, num_img_neuorons / num_goal_neurons, - num_goal_neurons]) - re_embed_goal = tf.reshape(embed_goal, shape=[-1, num_goal_neurons, 1]) - x = tf.matmul(re_embed_img, re_embed_goal, transpose_a=False, transpose_b=False) - out = slim.flatten(x) - elif combine_type == 'none' or combine_type == 'imgonly': - out = embed_img - elif combine_type == 'goalonly': - out = embed_goal - else: - logging.fatal('Undefined combine_type: %s', combine_type) - return out - - -def preprocess_egomotion(locs, thetas): - with tf.name_scope('pre_ego'): - pre_ego = tf.concat([locs, tf.sin(thetas), tf.cos(thetas)], 2) - sh = pre_ego.get_shape().as_list() - pre_ego = tf.reshape(pre_ego, [-1, sh[-1]]) - return pre_ego - -def setup_to_run(m, args, is_training, batch_norm_is_training, summary_mode): - # Set up the model. - tf.set_random_seed(args.solver.seed) - task_params = args.navtask.task_params - num_steps = task_params.num_steps - num_goals = task_params.num_goals - num_actions = task_params.num_actions - num_actions_ = num_actions - - n_views = task_params.n_views - - batch_norm_is_training_op = \ - tf.placeholder_with_default(batch_norm_is_training, shape=[], - name='batch_norm_is_training_op') - # Setup the inputs - m.input_tensors = {} - lstm_states = []; lstm_state_dims = []; - state_names = []; updated_state_ops = []; init_state_ops = []; - if args.arch.lstm_output: - lstm_states += ['lstm_output'] - lstm_state_dims += [args.arch.lstm_output_dim+task_params.num_actions] - if args.arch.lstm_ego: - lstm_states += ['lstm_ego'] - lstm_state_dims += [args.arch.lstm_ego_dim + args.arch.lstm_ego_out] - lstm_states += ['lstm_img'] - lstm_state_dims += [args.arch.lstm_img_dim + args.arch.lstm_img_out] - elif args.arch.lstm_img: - # An LSTM only on the image - lstm_states += ['lstm_img'] - lstm_state_dims += [args.arch.lstm_img_dim + args.arch.lstm_img_out] - else: - # No LSTMs involved here. 
- None - - m.input_tensors['common'], m.input_tensors['step'], m.input_tensors['train'] = \ - _inputs(task_params, lstm_states, lstm_state_dims) - - with tf.name_scope('check_size'): - is_single_step = tf.equal(tf.unstack(tf.shape(m.input_tensors['step']['imgs']), - num=6)[1], 1) - - images_reshaped = tf.reshape(m.input_tensors['step']['imgs'], - shape=[-1, task_params.img_height, task_params.img_width, - task_params.img_channels], name='re_image') - - rel_goal_loc_reshaped = tf.reshape(m.input_tensors['step']['rel_goal_loc'], - shape=[-1, task_params.rel_goal_loc_dim], name='re_rel_goal_loc') - - x, vars_ = get_repr_from_image( - images_reshaped, task_params.modalities, task_params.data_augment, - args.arch.encoder, args.solver.freeze_conv, args.solver.wt_decay, - is_training) - - # Reshape into nice things so that these can be accumulated over time steps - # for faster backprop. - sh_before = x.get_shape().as_list() - m.encoder_output = tf.reshape( - x, shape=[task_params.batch_size, -1, n_views] + sh_before[1:]) - x = tf.reshape(m.encoder_output, shape=[-1] + sh_before[1:]) - - # Add a layer to reduce dimensions for a fc layer. - if args.arch.dim_reduce_neurons > 0: - ks = 1; neurons = args.arch.dim_reduce_neurons; - init_var = np.sqrt(2.0/(ks**2)/neurons) - batch_norm_param = args.arch.batch_norm_param - batch_norm_param['is_training'] = batch_norm_is_training_op - m.conv_feat = slim.conv2d( - x, neurons, kernel_size=ks, stride=1, normalizer_fn=slim.batch_norm, - normalizer_params=batch_norm_param, padding='SAME', scope='dim_reduce', - weights_regularizer=slim.l2_regularizer(args.solver.wt_decay), - weights_initializer=tf.random_normal_initializer(stddev=init_var)) - reshape_conv_feat = slim.flatten(m.conv_feat) - sh = reshape_conv_feat.get_shape().as_list() - m.reshape_conv_feat = tf.reshape(reshape_conv_feat, - shape=[-1, sh[1]*n_views]) - - # Restore these from a checkpoint. - if args.solver.pretrained_path is not None: - m.init_fn = slim.assign_from_checkpoint_fn(args.solver.pretrained_path, - vars_) - else: - m.init_fn = None - - # Hit the goal_location with a bunch of fully connected layers, to embed it - # into some space. - with tf.variable_scope('embed_goal'): - batch_norm_param = args.arch.batch_norm_param - batch_norm_param['is_training'] = batch_norm_is_training_op - m.embed_goal, _ = tf_utils.fc_network( - rel_goal_loc_reshaped, neurons=args.arch.goal_embed_neurons, - wt_decay=args.solver.wt_decay, name='goal_embed', offset=0, - batch_norm_param=batch_norm_param, dropout_ratio=args.arch.fc_dropout, - is_training=is_training) - - if args.arch.embed_goal_for_state: - with tf.variable_scope('embed_goal_for_state'): - batch_norm_param = args.arch.batch_norm_param - batch_norm_param['is_training'] = batch_norm_is_training_op - m.embed_goal_for_state, _ = tf_utils.fc_network( - m.input_tensors['common']['rel_goal_loc_at_start'][:,0,:], - neurons=args.arch.goal_embed_neurons, wt_decay=args.solver.wt_decay, - name='goal_embed', offset=0, batch_norm_param=batch_norm_param, - dropout_ratio=args.arch.fc_dropout, is_training=is_training) - - # Hit the goal_location with a bunch of fully connected layers, to embed it - # into some space. 
- with tf.variable_scope('embed_img'): - batch_norm_param = args.arch.batch_norm_param - batch_norm_param['is_training'] = batch_norm_is_training_op - m.embed_img, _ = tf_utils.fc_network( - m.reshape_conv_feat, neurons=args.arch.img_embed_neurons, - wt_decay=args.solver.wt_decay, name='img_embed', offset=0, - batch_norm_param=batch_norm_param, dropout_ratio=args.arch.fc_dropout, - is_training=is_training) - - # For lstm_ego, and lstm_image, embed the ego motion, accumulate it into an - # LSTM, combine with image features and accumulate those in an LSTM. Finally - # combine what you get from the image LSTM with the goal to output an action. - if args.arch.lstm_ego: - ego_reshaped = preprocess_egomotion(m.input_tensors['step']['incremental_locs'], - m.input_tensors['step']['incremental_thetas']) - with tf.variable_scope('embed_ego'): - batch_norm_param = args.arch.batch_norm_param - batch_norm_param['is_training'] = batch_norm_is_training_op - m.embed_ego, _ = tf_utils.fc_network( - ego_reshaped, neurons=args.arch.ego_embed_neurons, - wt_decay=args.solver.wt_decay, name='ego_embed', offset=0, - batch_norm_param=batch_norm_param, dropout_ratio=args.arch.fc_dropout, - is_training=is_training) - - state_name, state_init_op, updated_state_op, out_op = lstm_setup( - 'lstm_ego', m.embed_ego, task_params.batch_size, is_single_step, - args.arch.lstm_ego_dim, args.arch.lstm_ego_out, num_steps*num_goals, - m.input_tensors['step']['lstm_ego']) - state_names += [state_name] - init_state_ops += [state_init_op] - updated_state_ops += [updated_state_op] - - # Combine the output with the vision features. - m.img_ego_op = combine_setup('img_ego', args.arch.combine_type_ego, - m.embed_img, out_op, - args.arch.img_embed_neurons[-1], - args.arch.lstm_ego_out) - - # LSTM on these vision features. - state_name, state_init_op, updated_state_op, out_op = lstm_setup( - 'lstm_img', m.img_ego_op, task_params.batch_size, is_single_step, - args.arch.lstm_img_dim, args.arch.lstm_img_out, num_steps*num_goals, - m.input_tensors['step']['lstm_img']) - state_names += [state_name] - init_state_ops += [state_init_op] - updated_state_ops += [updated_state_op] - - m.img_for_goal = out_op - num_img_for_goal_neurons = args.arch.lstm_img_out - - elif args.arch.lstm_img: - # LSTM on just the image features. - state_name, state_init_op, updated_state_op, out_op = lstm_setup( - 'lstm_img', m.embed_img, task_params.batch_size, is_single_step, - args.arch.lstm_img_dim, args.arch.lstm_img_out, num_steps*num_goals, - m.input_tensors['step']['lstm_img']) - state_names += [state_name] - init_state_ops += [state_init_op] - updated_state_ops += [updated_state_op] - m.img_for_goal = out_op - num_img_for_goal_neurons = args.arch.lstm_img_out - - else: - m.img_for_goal = m.embed_img - num_img_for_goal_neurons = args.arch.img_embed_neurons[-1] - - - if args.arch.use_visit_count: - m.embed_visit_count = visit_count_fc( - m.input_tensors['step']['visit_count'], - m.input_tensors['step']['last_visit'], args.arch.goal_embed_neurons, - args.solver.wt_decay, args.arch.fc_dropout, is_training=is_training) - m.embed_goal = m.embed_goal + m.embed_visit_count - - m.combined_f = combine_setup('img_goal', args.arch.combine_type, - m.img_for_goal, m.embed_goal, - num_img_for_goal_neurons, - args.arch.goal_embed_neurons[-1]) - - # LSTM on the combined representation. - if args.arch.lstm_output: - name = 'lstm_output' - # A few fully connected layers here. 
- with tf.variable_scope('action_pred'): - batch_norm_param = args.arch.batch_norm_param - batch_norm_param['is_training'] = batch_norm_is_training_op - x, _ = tf_utils.fc_network( - m.combined_f, neurons=args.arch.pred_neurons, - wt_decay=args.solver.wt_decay, name='pred', offset=0, - batch_norm_param=batch_norm_param, dropout_ratio=args.arch.fc_dropout) - - if args.arch.lstm_output_init_state_from_goal: - # Use the goal embedding to initialize the LSTM state. - # UGLY CLUGGY HACK: if this is doing computation for a single time step - # then this will not involve back prop, so we can use the state input from - # the feed dict, otherwise we compute the state representation from the - # goal and feed that in. Necessary for using goal location to generate the - # state representation. - m.embed_goal_for_state = tf.expand_dims(m.embed_goal_for_state, dim=1) - state_op = tf.cond(is_single_step, lambda: m.input_tensors['step'][name], - lambda: m.embed_goal_for_state) - state_name, state_init_op, updated_state_op, out_op = lstm_setup( - name, x, task_params.batch_size, is_single_step, - args.arch.lstm_output_dim, - num_actions_, - num_steps*num_goals, state_op) - init_state_ops += [m.embed_goal_for_state] - else: - state_op = m.input_tensors['step'][name] - state_name, state_init_op, updated_state_op, out_op = lstm_setup( - name, x, task_params.batch_size, is_single_step, - args.arch.lstm_output_dim, - num_actions_, num_steps*num_goals, state_op) - init_state_ops += [state_init_op] - - state_names += [state_name] - updated_state_ops += [updated_state_op] - - out_op = tf.reshape(out_op, shape=[-1, num_actions_]) - if num_actions_ > num_actions: - m.action_logits_op = out_op[:,:num_actions] - m.baseline_op = out_op[:,num_actions:] - else: - m.action_logits_op = out_op - m.baseline_op = None - m.action_prob_op = tf.nn.softmax(m.action_logits_op) - - else: - # A few fully connected layers here. - with tf.variable_scope('action_pred'): - batch_norm_param = args.arch.batch_norm_param - batch_norm_param['is_training'] = batch_norm_is_training_op - out_op, _ = tf_utils.fc_network( - m.combined_f, neurons=args.arch.pred_neurons, - wt_decay=args.solver.wt_decay, name='pred', offset=0, - num_pred=num_actions_, - batch_norm_param=batch_norm_param, - dropout_ratio=args.arch.fc_dropout, is_training=is_training) - if num_actions_ > num_actions: - m.action_logits_op = out_op[:,:num_actions] - m.baseline_op = out_op[:,num_actions:] - else: - m.action_logits_op = out_op - m.baseline_op = None - m.action_prob_op = tf.nn.softmax(m.action_logits_op) - - m.train_ops = {} - m.train_ops['step'] = m.action_prob_op - m.train_ops['common'] = [m.input_tensors['common']['orig_maps'], - m.input_tensors['common']['goal_loc'], - m.input_tensors['common']['rel_goal_loc_at_start']] - m.train_ops['state_names'] = state_names - m.train_ops['init_state'] = init_state_ops - m.train_ops['updated_state'] = updated_state_ops - m.train_ops['batch_norm_is_training_op'] = batch_norm_is_training_op - - # Flat list of ops which cache the step data. 
- m.train_ops['step_data_cache'] = [tf.no_op()] - - if args.solver.freeze_conv: - m.train_ops['step_data_cache'] = [m.encoder_output] - else: - m.train_ops['step_data_cache'] = [] - - ewma_decay = 0.99 if is_training else 0.0 - weight = tf.ones_like(m.input_tensors['train']['action'], dtype=tf.float32, - name='weight') - - m.reg_loss_op, m.data_loss_op, m.total_loss_op, m.acc_ops = \ - compute_losses_multi_or( - m.action_logits_op, m.input_tensors['train']['action'], - weights=weight, num_actions=num_actions, - data_loss_wt=args.solver.data_loss_wt, - reg_loss_wt=args.solver.reg_loss_wt, ewma_decay=ewma_decay) - - - if args.solver.freeze_conv: - vars_to_optimize = list(set(tf.trainable_variables()) - set(vars_)) - else: - vars_to_optimize = None - - m.lr_op, m.global_step_op, m.train_op, m.should_stop_op, m.optimizer, \ - m.sync_optimizer = tf_utils.setup_training( - m.total_loss_op, - args.solver.initial_learning_rate, - args.solver.steps_per_decay, - args.solver.learning_rate_decay, - args.solver.momentum, - args.solver.max_steps, - args.solver.sync, - args.solver.adjust_lr_sync, - args.solver.num_workers, - args.solver.task, - vars_to_optimize=vars_to_optimize, - clip_gradient_norm=args.solver.clip_gradient_norm, - typ=args.solver.typ, momentum2=args.solver.momentum2, - adam_eps=args.solver.adam_eps) - - - if args.arch.sample_gt_prob_type == 'inverse_sigmoid_decay': - m.sample_gt_prob_op = tf_utils.inverse_sigmoid_decay(args.arch.isd_k, - m.global_step_op) - elif args.arch.sample_gt_prob_type == 'zero': - m.sample_gt_prob_op = tf.constant(-1.0, dtype=tf.float32) - elif args.arch.sample_gt_prob_type.split('_')[0] == 'step': - step = int(args.arch.sample_gt_prob_type.split('_')[1]) - m.sample_gt_prob_op = tf_utils.step_gt_prob( - step, m.input_tensors['step']['step_number'][0,0,0]) - - m.sample_action_type = args.arch.action_sample_type - m.sample_action_combine_type = args.arch.action_sample_combine_type - _add_summaries(m, summary_mode, args.summary.arop_full_summary_iters) - - m.init_op = tf.group(tf.global_variables_initializer(), - tf.local_variables_initializer()) - m.saver_op = tf.train.Saver(keep_checkpoint_every_n_hours=4, - write_version=tf.train.SaverDef.V2) - - return m diff --git a/spaces/Nephele/bert-vits2-multi-voice/setup_ffmpeg.py b/spaces/Nephele/bert-vits2-multi-voice/setup_ffmpeg.py deleted file mode 100644 index 7137ab5faebb6d80740b8c843667458f25596839..0000000000000000000000000000000000000000 --- a/spaces/Nephele/bert-vits2-multi-voice/setup_ffmpeg.py +++ /dev/null @@ -1,55 +0,0 @@ -import os -import sys -import re -from pathlib import Path -import winreg - -def check_ffmpeg_path(): - path_list = os.environ['Path'].split(';') - ffmpeg_found = False - - for path in path_list: - if 'ffmpeg' in path.lower() and 'bin' in path.lower(): - ffmpeg_found = True - print("FFmpeg already installed.") - break - - return ffmpeg_found - -def add_ffmpeg_path_to_user_variable(): - ffmpeg_bin_path = Path('.\\ffmpeg\\bin') - if ffmpeg_bin_path.is_dir(): - abs_path = str(ffmpeg_bin_path.resolve()) - - try: - key = winreg.OpenKey( - winreg.HKEY_CURRENT_USER, - r"Environment", - 0, - winreg.KEY_READ | winreg.KEY_WRITE - ) - - try: - current_path, _ = winreg.QueryValueEx(key, "Path") - if abs_path not in current_path: - new_path = f"{current_path};{abs_path}" - winreg.SetValueEx(key, "Path", 0, winreg.REG_EXPAND_SZ, new_path) - print(f"Added FFmpeg path to user variable 'Path': {abs_path}") - else: - print("FFmpeg path already exists in the user variable 'Path'.") - finally: - 
winreg.CloseKey(key) - except WindowsError: - print("Error: Unable to modify user variable 'Path'.") - sys.exit(1) - - else: - print("Error: ffmpeg\\bin folder not found in the current path.") - sys.exit(1) - -def main(): - if not check_ffmpeg_path(): - add_ffmpeg_path_to_user_variable() - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/Nyari/Super-Resolution-Anime-Diffusion/protogen.py b/spaces/Nyari/Super-Resolution-Anime-Diffusion/protogen.py deleted file mode 100644 index 0f3dd33d03b439c9bfd0ef132f49879c7481aa33..0000000000000000000000000000000000000000 --- a/spaces/Nyari/Super-Resolution-Anime-Diffusion/protogen.py +++ /dev/null @@ -1,53 +0,0 @@ -# -*- coding: utf-8 -*- -# file: protogen.py -# time: 14:27 2023/1/9 -# author: yangheng -# github: https://github.com/yangheng95 -# huggingface: https://huggingface.co/yangheng -# google scholar: https://scholar.google.com/citations?user=NPq5a_0AAAAJ&hl=en -# Copyright (C) 2021. All Rights Reserved. - -from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler -import torch -import random - -prompt_keys = [ - "naked", - "loli", - "teen", - "squat", - "big nipples", - "hairy pussy", - "pee", - "beautiful eyes", - # 'dress', 'wind', 'fingers', 'hands', - # random.choice(['Sinon', 'saber', ]), - # random.choice(['white dress', 'red dress', 'blonde dress', 'black dress', 'green dress', ]), - # random.choice(['white bra', 'red bra', 'black bra',]), - "lovely", - "details", - # random.choice(['white hair', 'red hair', 'blonde hair', 'black hair', 'green hair', ]), - random.choice(["white hair"]), - random.choice(["blue eyes", "red eyes", "black eyes"]), - random.choice(["flower meadow", "garden"]), -] -prompt = ",".join(prompt_keys) -model_id = "darkstorm2150/Protogen_x3.4_Official_Release" -pipe = StableDiffusionPipeline.from_pretrained( - model_id, torch_dtype=torch.float16, safety_checker=None -) -pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) -pipe = pipe.to("cuda") - -guidance = 7.5 -width = 768 -height = 512 -image = pipe( - prompt, - num_inference_steps=25, - guidance_scale=guidance, - width=width, - height=height, -).images[0] - -image.save("./result.jpg") diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/ngram_repeat_block.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/ngram_repeat_block.py deleted file mode 100644 index 854125149448a2d37ad2773cd1e6d614e73e0e79..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/ngram_repeat_block.py +++ /dev/null @@ -1,150 +0,0 @@ -# Originally from Microsoft Corporation. -# Licensed under the MIT License. - -""" Wrapper for ngram_repeat_block cuda extension """ -import torch -from torch import nn - -import math -from typing import Dict, List, Optional -import warnings - -try: - from fairseq import ngram_repeat_block_cuda - - EXTENSION_BUILT = True -except ImportError: - EXTENSION_BUILT = False - - -def is_cuda_extension_usable() -> bool: - """Check whether ngram_repeat_block_cuda is built properly""" - if not EXTENSION_BUILT or not torch.cuda.is_available(): - return False - bsz = 2 - tokens = torch.tensor([[4, 4, 3, 2], [1, 2, 3, 4]], dtype=torch.long, device="cuda") - lprobs = torch.rand((8, 12), device="cuda") - try: - outputs = ngram_repeat_block_cuda.forward(tokens, lprobs, bsz, 3, 4, 3) - outputs = outputs + 4 # This line breaks if the extension is built incorrectly. 
- return True - except RuntimeError: - warnings.warn( - "NGramRepeatBlock extension must be rebuilt." - 'Run TORCH_CUDA_ARCH_LIST="6.0;6.1;7.0" python setup.py build_ext --inplace' - ) - return False - - -class NGramRepeatBlock(nn.Module): - """ Wrapper class for calling ngram_repeat_block cuda extension """ - - def __init__(self, no_repeat_ngram_size: int, use_extension: bool = True): - super().__init__() - self.use_extension = is_cuda_extension_usable() if use_extension else False - self.no_repeat_ngram_size = no_repeat_ngram_size - - def reset_parameters(self): - pass - - @torch.jit.unused - def call_cuda_extension( - self, - tokens, - lprobs, - bsz: int, - beam_size: int, - step: int, - ): - return ngram_repeat_block_cuda.forward( - tokens, lprobs, bsz, step, beam_size, self.no_repeat_ngram_size - ) - - def forward( - self, - tokens, - lprobs, - bsz: int, - beam_size: int, - step: int, - ): - """ - Args: - tokens(Tensor): Input tokens(Bsz*beam, seq_len) - lprobs(Tensor): likelihood probability, - Expected to be updated in place.(Bsz*beam, vocab_size) - bsz(int): batch size - step(int): current step - beam_size(int): beam size - no_repeat_ngram_size(int): Ngram size - """ - msg = f"expected {bsz *beam_size} got" - assert tokens.size(0) == bsz * beam_size, f"{msg} {tokens.size(0)}" - assert lprobs.size(0) == bsz * beam_size, f"{msg} {lprobs.size(0)}" - if self.use_extension: - return self.call_cuda_extension(tokens, lprobs, bsz, beam_size, step) - - else: - return self._no_repeat_ngram( - tokens, - lprobs, - bsz, - beam_size, - step, - ) - - def _no_repeat_ngram(self, tokens, lprobs, bsz: int, beam_size: int, step: int): - """For each hypothesis generate a list of previous ngrams and set associated lprobs to -inf""" - gen_ngrams: List[Dict[str, List[int]]] = [ - torch.jit.annotate(Dict[str, List[int]], {}) - for bbsz_idx in range(bsz * beam_size) - ] - cpu_tokens = tokens.cpu() - for bbsz_idx in range(bsz * beam_size): - gen_tokens: List[int] = cpu_tokens[bbsz_idx].tolist() - for ngram in self.transpose_list( - [gen_tokens[i:] for i in range(self.no_repeat_ngram_size)] - ): - key = ",".join([str(x) for x in ngram[:-1]]) - gen_ngrams[bbsz_idx][key] = gen_ngrams[bbsz_idx].get( - key, torch.jit.annotate(List[int], []) - ) + [ngram[-1]] - if step + 2 - self.no_repeat_ngram_size >= 0: - # no banned tokens if we haven't generated no_repeat_ngram_size tokens yet - banned_tokens = [ - self.calculate_banned_tokens( - tokens, step, gen_ngrams, self.no_repeat_ngram_size, bbsz_idx - ) - for bbsz_idx in range(bsz * beam_size) - ] - else: - banned_tokens = [ - torch.jit.annotate(List[int], []) for bbsz_idx in range(bsz * beam_size) - ] - for bbsz_idx in range(bsz * beam_size): - lprobs[bbsz_idx][ - torch.tensor(banned_tokens[bbsz_idx], dtype=torch.int64) - ] = torch.tensor(-math.inf).to(lprobs) - return lprobs - - @staticmethod - def calculate_banned_tokens( - tokens, - step: int, - gen_ngrams: List[Dict[str, List[int]]], - no_repeat_ngram_size: int, - bbsz_idx: int, - ): - tokens_list: List[int] = tokens[ - bbsz_idx, step + 2 - no_repeat_ngram_size : step + 1 - ].tolist() - # before decoding the next token, prevent decoding of ngrams that have already appeared - ngram_index = ",".join([str(x) for x in tokens_list]) - return gen_ngrams[bbsz_idx].get(ngram_index, torch.jit.annotate(List[int], [])) - - @staticmethod - def transpose_list(l: List[List[int]]): - # GeneratorExp aren't supported in TS so ignoring the lint - min_len = min([len(x) for x in l]) # noqa - l2 = [[row[i] for row in l] for i in 
range(min_len)] - return l2 diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/new/decoders/base_decoder.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/new/decoders/base_decoder.py deleted file mode 100644 index a097969b3c0650cf8ea2ab5f8e96bbc68ea9b97f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/new/decoders/base_decoder.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import itertools as it -from typing import Any, Dict, List - -import torch -from fairseq.data.dictionary import Dictionary -from fairseq.models.fairseq_model import FairseqModel - - -class BaseDecoder: - def __init__(self, tgt_dict: Dictionary) -> None: - self.tgt_dict = tgt_dict - self.vocab_size = len(tgt_dict) - - self.blank = ( - tgt_dict.index("") - if "" in tgt_dict.indices - else tgt_dict.bos() - ) - if "" in tgt_dict.indices: - self.silence = tgt_dict.index("") - elif "|" in tgt_dict.indices: - self.silence = tgt_dict.index("|") - else: - self.silence = tgt_dict.eos() - - def generate( - self, models: List[FairseqModel], sample: Dict[str, Any], **unused - ) -> List[List[Dict[str, torch.LongTensor]]]: - encoder_input = { - k: v for k, v in sample["net_input"].items() if k != "prev_output_tokens" - } - emissions = self.get_emissions(models, encoder_input) - return self.decode(emissions) - - def get_emissions( - self, - models: List[FairseqModel], - encoder_input: Dict[str, Any], - ) -> torch.FloatTensor: - model = models[0] - encoder_out = model(**encoder_input) - if hasattr(model, "get_logits"): - emissions = model.get_logits(encoder_out) - else: - emissions = model.get_normalized_probs(encoder_out, log_probs=True) - return emissions.transpose(0, 1).float().cpu().contiguous() - - def get_tokens(self, idxs: torch.IntTensor) -> torch.LongTensor: - idxs = (g[0] for g in it.groupby(idxs)) - idxs = filter(lambda x: x != self.blank, idxs) - return torch.LongTensor(list(idxs)) - - def decode( - self, - emissions: torch.FloatTensor, - ) -> List[List[Dict[str, torch.LongTensor]]]: - raise NotImplementedError diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/copy_labels.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/copy_labels.py deleted file mode 100644 index 989868388eefccc37c82d7602f709632035c7aa1..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/scripts/copy_labels.py +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys - -for idx, line in enumerate(sys.stdin): - print(f"utt{idx:010d} {line}", end="") diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/quantization/pq/modules/qemb.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/quantization/pq/modules/qemb.py deleted file mode 100644 index 3a74ad3c4c7c9d3203d26e7885864ba578951bfe..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/quantization/pq/modules/qemb.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class PQEmbedding(nn.Module): - """ - Quantized counterpart of nn.Embedding module. Stores the centroids and - the assignments. The full weight is re-instantiated at each forward - pass. - - Args: - - centroids: centroids of size n_centroids x block_size - - assignments: assignments of the centroids to the subvectors - of size self.out_features x n_blocks - - bias: the non-quantized bias - - Remarks: - - We refer the reader to the official documentation of the nn.Embedding module - for the other arguments and the behavior of the module - - Performance tests on GPU show that this implementation is 10% slower than - the non-quantized nn.Embedding module for a standard training loop. - """ - - def __init__( - self, - centroids, - assignments, - num_embeddings, - embedding_dim, - padding_idx=None, - max_norm=None, - norm_type=2.0, - scale_grad_by_freq=False, - sparse=False, - _weight=None, - ): - super(PQEmbedding, self).__init__() - self.block_size = centroids.size(1) - self.n_centroids = centroids.size(0) - self.num_embeddings = num_embeddings - self.embedding_dim = embedding_dim - if padding_idx is not None: - if padding_idx > 0: - assert ( - padding_idx < self.num_embeddings - ), "Padding_idx must be within num_embeddings" - elif padding_idx < 0: - assert ( - padding_idx >= -self.num_embeddings - ), "Padding_idx must be within num_embeddings" - padding_idx = self.num_embeddings + padding_idx - self.padding_idx = padding_idx - self.max_norm = max_norm - self.norm_type = norm_type - self.scale_grad_by_freq = scale_grad_by_freq - self.sparse = sparse - # check compatibility - if self.embedding_dim % self.block_size != 0: - raise ValueError("Wrong PQ sizes") - if len(assignments) % self.num_embeddings != 0: - raise ValueError("Wrong PQ sizes") - # define parameters - self.centroids = nn.Parameter(centroids, requires_grad=True) - self.register_buffer("assignments", assignments) - self.register_buffer("counts", torch.bincount(assignments).type_as(centroids)) - - @property - def weight(self): - return ( - self.centroids[self.assignments] - .reshape(-1, self.num_embeddings, self.block_size) - .permute(1, 0, 2) - .flatten(1, 2) - ) - - def forward(self, input): - return F.embedding( - input, - self.weight, - self.padding_idx, - self.max_norm, - self.norm_type, - self.scale_grad_by_freq, - self.sparse, - ) - - def extra_repr(self): - s = "{num_embeddings}, {embedding_dim}" - if self.padding_idx is not None: - s += ", padding_idx={padding_idx}" - if self.max_norm is not None: - s += ", max_norm={max_norm}" - if self.norm_type != 2: - s += ", norm_type={norm_type}" - if self.scale_grad_by_freq is not False: - s += ", scale_grad_by_freq={scale_grad_by_freq}" - if self.sparse is not False: - s += ", sparse=True" - s += ", n_centroids={n_centroids}, block_size={block_size}" - - return s.format(**self.__dict__) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/text_to_speech.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/text_to_speech.py deleted file mode 100644 index 5646e41d39f6e39d4b046ee34ff69b998dab160d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/text_to_speech.py +++ /dev/null @@ -1,467 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import os.path as op - -import torch -import torch.nn.functional as F -import numpy as np - -from fairseq.data.audio.text_to_speech_dataset import TextToSpeechDatasetCreator -from fairseq.tasks import register_task -from fairseq.tasks.speech_to_text import SpeechToTextTask -from fairseq.speech_generator import ( - AutoRegressiveSpeechGenerator, NonAutoregressiveSpeechGenerator, - TeacherForcingAutoRegressiveSpeechGenerator -) - -logging.basicConfig( - format='%(asctime)s | %(levelname)s | %(name)s | %(message)s', - datefmt='%Y-%m-%d %H:%M:%S', level=logging.INFO -) -logger = logging.getLogger(__name__) - - -try: - from tensorboardX import SummaryWriter -except ImportError: - logger.info("Please install tensorboardX: pip install tensorboardX") - SummaryWriter = None - - -@register_task('text_to_speech') -class TextToSpeechTask(SpeechToTextTask): - @staticmethod - def add_args(parser): - parser.add_argument('data', help='manifest root path') - parser.add_argument( - '--config-yaml', type=str, default='config.yaml', - help='Configuration YAML filename (under manifest root)' - ) - parser.add_argument('--max-source-positions', default=1024, type=int, - metavar='N', - help='max number of tokens in the source sequence') - parser.add_argument('--max-target-positions', default=1200, type=int, - metavar='N', - help='max number of tokens in the target sequence') - parser.add_argument("--n-frames-per-step", type=int, default=1) - parser.add_argument("--eos-prob-threshold", type=float, default=0.5) - parser.add_argument("--eval-inference", action="store_true") - parser.add_argument("--eval-tb-nsample", type=int, default=8) - parser.add_argument("--vocoder", type=str, default="griffin_lim") - parser.add_argument("--spec-bwd-max-iter", type=int, default=8) - - def __init__(self, args, src_dict): - super().__init__(args, src_dict) - self.src_dict = src_dict - self.sr = self.data_cfg.config.get("features").get("sample_rate") - - self.tensorboard_writer = None - self.tensorboard_dir = "" - if args.tensorboard_logdir and SummaryWriter is not None: - self.tensorboard_dir = os.path.join(args.tensorboard_logdir, - "valid_extra") - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - is_train_split = split.startswith('train') - pre_tokenizer = self.build_tokenizer(self.args) - bpe_tokenizer = self.build_bpe(self.args) - self.datasets[split] = TextToSpeechDatasetCreator.from_tsv( - self.args.data, self.data_cfg, split, self.src_dict, - pre_tokenizer, bpe_tokenizer, is_train_split=is_train_split, - epoch=epoch, seed=self.args.seed, - n_frames_per_step=self.args.n_frames_per_step, - speaker_to_id=self.speaker_to_id - ) - - @property - def target_dictionary(self): - return None - - @property - def source_dictionary(self): - return self.src_dict - - def get_speaker_embeddings_path(self): - speaker_emb_path = None - if self.data_cfg.config.get("speaker_emb_filename") is not None: - speaker_emb_path = op.join( - self.args.data, self.data_cfg.config.get("speaker_emb_filename") - ) - return speaker_emb_path - - @classmethod - def get_speaker_embeddings(cls, args): - embed_speaker = None - if args.speaker_to_id is not None: - if args.speaker_emb_path is None: - embed_speaker = torch.nn.Embedding( - len(args.speaker_to_id), args.speaker_embed_dim - ) - else: - speaker_emb_mat = np.load(args.speaker_emb_path) - assert speaker_emb_mat.shape[1] == 
args.speaker_embed_dim - embed_speaker = torch.nn.Embedding.from_pretrained( - torch.from_numpy(speaker_emb_mat), freeze=True, - ) - logger.info( - f"load speaker embeddings from {args.speaker_emb_path}. " - f"train embedding? {embed_speaker.weight.requires_grad}\n" - f"embeddings:\n{speaker_emb_mat}" - ) - return embed_speaker - - def build_model(self, cfg): - cfg.pitch_min = self.data_cfg.config["features"].get("pitch_min", None) - cfg.pitch_max = self.data_cfg.config["features"].get("pitch_max", None) - cfg.energy_min = self.data_cfg.config["features"].get("energy_min", None) - cfg.energy_max = self.data_cfg.config["features"].get("energy_max", None) - cfg.speaker_emb_path = self.get_speaker_embeddings_path() - model = super().build_model(cfg) - self.generator = None - if getattr(cfg, "eval_inference", False): - self.generator = self.build_generator([model], cfg) - return model - - def build_generator(self, models, cfg, vocoder=None, **unused): - if vocoder is None: - vocoder = self.build_default_vocoder() - model = models[0] - if getattr(model, "NON_AUTOREGRESSIVE", False): - return NonAutoregressiveSpeechGenerator( - model, vocoder, self.data_cfg - ) - else: - generator = AutoRegressiveSpeechGenerator - if getattr(cfg, "teacher_forcing", False): - generator = TeacherForcingAutoRegressiveSpeechGenerator - logger.info("Teacher forcing mode for generation") - return generator( - model, vocoder, self.data_cfg, - max_iter=self.args.max_target_positions, - eos_prob_threshold=self.args.eos_prob_threshold - ) - - def build_default_vocoder(self): - from fairseq.models.text_to_speech.vocoder import get_vocoder - vocoder = get_vocoder(self.args, self.data_cfg) - if torch.cuda.is_available() and not self.args.cpu: - vocoder = vocoder.cuda() - else: - vocoder = vocoder.cpu() - return vocoder - - def valid_step(self, sample, model, criterion): - loss, sample_size, logging_output = super().valid_step( - sample, model, criterion - ) - - if getattr(self.args, "eval_inference", False): - hypos, inference_losses = self.valid_step_with_inference( - sample, model, self.generator - ) - for k, v in inference_losses.items(): - assert(k not in logging_output) - logging_output[k] = v - - picked_id = 0 - if self.tensorboard_dir and (sample["id"] == picked_id).any(): - self.log_tensorboard( - sample, - hypos[:self.args.eval_tb_nsample], - model._num_updates, - is_na_model=getattr(model, "NON_AUTOREGRESSIVE", False) - ) - return loss, sample_size, logging_output - - def valid_step_with_inference(self, sample, model, generator): - hypos = generator.generate(model, sample, has_targ=True) - - losses = { - "mcd_loss": 0., - "targ_frames": 0., - "pred_frames": 0., - "nins": 0., - "ndel": 0., - } - rets = batch_mel_cepstral_distortion( - [hypo["targ_waveform"] for hypo in hypos], - [hypo["waveform"] for hypo in hypos], - self.sr, - normalize_type=None - ) - for d, extra in rets: - pathmap = extra[-1] - losses["mcd_loss"] += d.item() - losses["targ_frames"] += pathmap.size(0) - losses["pred_frames"] += pathmap.size(1) - losses["nins"] += (pathmap.sum(dim=1) - 1).sum().item() - losses["ndel"] += (pathmap.sum(dim=0) - 1).sum().item() - - return hypos, losses - - def log_tensorboard(self, sample, hypos, num_updates, is_na_model=False): - if self.tensorboard_writer is None: - self.tensorboard_writer = SummaryWriter(self.tensorboard_dir) - tb_writer = self.tensorboard_writer - for b in range(len(hypos)): - idx = sample["id"][b] - text = sample["src_texts"][b] - targ = hypos[b]["targ_feature"] - pred = hypos[b]["feature"] - 
attn = hypos[b]["attn"] - - if is_na_model: - data = plot_tts_output( - [targ.transpose(0, 1), pred.transpose(0, 1)], - [f"target (idx={idx})", "output"], attn, - "alignment", ret_np=True, suptitle=text, - ) - else: - eos_prob = hypos[b]["eos_prob"] - data = plot_tts_output( - [targ.transpose(0, 1), pred.transpose(0, 1), attn], - [f"target (idx={idx})", "output", "alignment"], eos_prob, - "eos prob", ret_np=True, suptitle=text, - ) - - tb_writer.add_image( - f"inference_sample_{b}", data, num_updates, - dataformats="HWC" - ) - - if hypos[b]["waveform"] is not None: - targ_wave = hypos[b]["targ_waveform"].detach().cpu().float() - pred_wave = hypos[b]["waveform"].detach().cpu().float() - tb_writer.add_audio( - f"inference_targ_{b}", - targ_wave, - num_updates, - sample_rate=self.sr - ) - tb_writer.add_audio( - f"inference_pred_{b}", - pred_wave, - num_updates, - sample_rate=self.sr - ) - - -def save_figure_to_numpy(fig): - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - return data - - -DEFAULT_V_MIN = np.log(1e-5) - - -def plot_tts_output( - data_2d, title_2d, data_1d, title_1d, figsize=(24, 4), - v_min=DEFAULT_V_MIN, v_max=3, ret_np=False, suptitle="" -): - try: - import matplotlib.pyplot as plt - from mpl_toolkits.axes_grid1 import make_axes_locatable - except ImportError: - raise ImportError("Please install Matplotlib: pip install matplotlib") - - data_2d = [ - x.detach().cpu().float().numpy() - if isinstance(x, torch.Tensor) else x for x in data_2d - ] - fig, axes = plt.subplots(1, len(data_2d) + 1, figsize=figsize) - if suptitle: - fig.suptitle(suptitle[:400]) # capped at 400 chars - axes = [axes] if len(data_2d) == 0 else axes - for ax, x, name in zip(axes, data_2d, title_2d): - ax.set_title(name) - divider = make_axes_locatable(ax) - cax = divider.append_axes('right', size='5%', pad=0.05) - im = ax.imshow( - x, origin="lower", aspect="auto", vmin=max(x.min(), v_min), - vmax=min(x.max(), v_max) - ) - fig.colorbar(im, cax=cax, orientation='vertical') - - if isinstance(data_1d, torch.Tensor): - data_1d = data_1d.detach().cpu().numpy() - axes[-1].plot(data_1d) - axes[-1].set_title(title_1d) - plt.tight_layout() - - if ret_np: - fig.canvas.draw() - data = save_figure_to_numpy(fig) - plt.close(fig) - return data - - -def antidiag_indices(offset, min_i=0, max_i=None, min_j=0, max_j=None): - """ - for a (3, 4) matrix with min_i=1, max_i=3, min_j=1, max_j=4, outputs - - offset=2 (1, 1), - offset=3 (2, 1), (1, 2) - offset=4 (2, 2), (1, 3) - offset=5 (2, 3) - - constraints: - i + j = offset - min_j <= j < max_j - min_i <= offset - j < max_i - """ - if max_i is None: - max_i = offset + 1 - if max_j is None: - max_j = offset + 1 - min_j = max(min_j, offset - max_i + 1, 0) - max_j = min(max_j, offset - min_i + 1, offset + 1) - j = torch.arange(min_j, max_j) - i = offset - j - return torch.stack([i, j]) - - -def batch_dynamic_time_warping(distance, shapes=None): - """full batched DTW without any constraints - - distance: (batchsize, max_M, max_N) matrix - shapes: (batchsize,) vector specifying (M, N) for each entry - """ - # ptr: 0=left, 1=up-left, 2=up - ptr2dij = {0: (0, -1), 1: (-1, -1), 2: (-1, 0)} - - bsz, m, n = distance.size() - cumdist = torch.zeros_like(distance) - backptr = torch.zeros_like(distance).type(torch.int32) - 1 - - # initialize - cumdist[:, 0, :] = distance[:, 0, :].cumsum(dim=-1) - cumdist[:, :, 0] = distance[:, :, 0].cumsum(dim=-1) - backptr[:, 0, :] = 0 - backptr[:, :, 0] = 2 - - # DP 
with optimized anti-diagonal parallelization, O(M+N) steps - for offset in range(2, m + n - 1): - ind = antidiag_indices(offset, 1, m, 1, n) - c = torch.stack( - [cumdist[:, ind[0], ind[1] - 1], cumdist[:, ind[0] - 1, ind[1] - 1], - cumdist[:, ind[0] - 1, ind[1]], ], - dim=2 - ) - v, b = c.min(axis=-1) - backptr[:, ind[0], ind[1]] = b.int() - cumdist[:, ind[0], ind[1]] = v + distance[:, ind[0], ind[1]] - - # backtrace - pathmap = torch.zeros_like(backptr) - for b in range(bsz): - i = m - 1 if shapes is None else (shapes[b][0] - 1).item() - j = n - 1 if shapes is None else (shapes[b][1] - 1).item() - dtwpath = [(i, j)] - while (i != 0 or j != 0) and len(dtwpath) < 10000: - assert (i >= 0 and j >= 0) - di, dj = ptr2dij[backptr[b, i, j].item()] - i, j = i + di, j + dj - dtwpath.append((i, j)) - dtwpath = dtwpath[::-1] - indices = torch.from_numpy(np.array(dtwpath)) - pathmap[b, indices[:, 0], indices[:, 1]] = 1 - - return cumdist, backptr, pathmap - - -def compute_l2_dist(x1, x2): - """compute an (m, n) L2 distance matrix from (m, d) and (n, d) matrices""" - return torch.cdist(x1.unsqueeze(0), x2.unsqueeze(0), p=2).squeeze(0).pow(2) - - -def compute_rms_dist(x1, x2): - l2_dist = compute_l2_dist(x1, x2) - return (l2_dist / x1.size(1)).pow(0.5) - - -def get_divisor(pathmap, normalize_type): - if normalize_type is None: - return 1 - elif normalize_type == "len1": - return pathmap.size(0) - elif normalize_type == "len2": - return pathmap.size(1) - elif normalize_type == "path": - return pathmap.sum().item() - else: - raise ValueError(f"normalize_type {normalize_type} not supported") - - -def batch_compute_distortion(y1, y2, sr, feat_fn, dist_fn, normalize_type): - d, s, x1, x2 = [], [], [], [] - for cur_y1, cur_y2 in zip(y1, y2): - assert (cur_y1.ndim == 1 and cur_y2.ndim == 1) - cur_x1 = feat_fn(cur_y1) - cur_x2 = feat_fn(cur_y2) - x1.append(cur_x1) - x2.append(cur_x2) - - cur_d = dist_fn(cur_x1, cur_x2) - d.append(cur_d) - s.append(d[-1].size()) - max_m = max(ss[0] for ss in s) - max_n = max(ss[1] for ss in s) - d = torch.stack( - [F.pad(dd, (0, max_n - dd.size(1), 0, max_m - dd.size(0))) for dd in d] - ) - s = torch.LongTensor(s).to(d.device) - cumdists, backptrs, pathmaps = batch_dynamic_time_warping(d, s) - - rets = [] - itr = zip(s, x1, x2, d, cumdists, backptrs, pathmaps) - for (m, n), cur_x1, cur_x2, dist, cumdist, backptr, pathmap in itr: - cumdist = cumdist[:m, :n] - backptr = backptr[:m, :n] - pathmap = pathmap[:m, :n] - divisor = get_divisor(pathmap, normalize_type) - - distortion = cumdist[-1, -1] / divisor - ret = distortion, (cur_x1, cur_x2, dist, cumdist, backptr, pathmap) - rets.append(ret) - return rets - - -def batch_mel_cepstral_distortion( - y1, y2, sr, normalize_type="path", mfcc_fn=None -): - """ - https://arxiv.org/pdf/2011.03568.pdf - - The root mean squared error computed on 13-dimensional MFCC using DTW for - alignment. MFCC features are computed from an 80-channel log-mel - spectrogram using a 50ms Hann window and hop of 12.5ms. 
- - y1: list of waveforms - y2: list of waveforms - sr: sampling rate - """ - - try: - import torchaudio - except ImportError: - raise ImportError("Please install torchaudio: pip install torchaudio") - - if mfcc_fn is None or mfcc_fn.sample_rate != sr: - melkwargs = { - "n_fft": int(0.05 * sr), "win_length": int(0.05 * sr), - "hop_length": int(0.0125 * sr), "f_min": 20, - "n_mels": 80, "window_fn": torch.hann_window - } - mfcc_fn = torchaudio.transforms.MFCC( - sr, n_mfcc=13, log_mels=True, melkwargs=melkwargs - ).to(y1[0].device) - return batch_compute_distortion( - y1, y2, sr, lambda y: mfcc_fn(y).transpose(-1, -2), compute_rms_dist, - normalize_type - ) diff --git a/spaces/PAIR/PAIR-Diffusion/README.md b/spaces/PAIR/PAIR-Diffusion/README.md deleted file mode 100644 index 4368be22b5a39ef23354993d53a668aa1e607415..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: PAIR Diffusion -emoji: 📚 -colorFrom: purple -colorTo: gray -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/utils.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/utils.py deleted file mode 100644 index c5befb8e56ece50b5fecfd007b26f8a29124c0bd..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/utils.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import random -import sys -import time -import warnings -from getpass import getuser -from socket import gethostname - -import numpy as np -import torch - -import annotator.uniformer.mmcv as mmcv - - -def get_host_info(): - """Get hostname and username. - - Return empty string if exception raised, e.g. ``getpass.getuser()`` will - lead to error in docker container - """ - host = '' - try: - host = f'{getuser()}@{gethostname()}' - except Exception as e: - warnings.warn(f'Host or user not found: {str(e)}') - finally: - return host - - -def get_time_str(): - return time.strftime('%Y%m%d_%H%M%S', time.localtime()) - - -def obj_from_dict(info, parent=None, default_args=None): - """Initialize an object from dict. - - The dict must contain the key "type", which indicates the object type, it - can be either a string or type, such as "list" or ``list``. Remaining - fields are treated as the arguments for constructing the object. - - Args: - info (dict): Object types and arguments. - parent (:class:`module`): Module which may containing expected object - classes. - default_args (dict, optional): Default arguments for initializing the - object. - - Returns: - any type: Object built from the dict. - """ - assert isinstance(info, dict) and 'type' in info - assert isinstance(default_args, dict) or default_args is None - args = info.copy() - obj_type = args.pop('type') - if mmcv.is_str(obj_type): - if parent is not None: - obj_type = getattr(parent, obj_type) - else: - obj_type = sys.modules[obj_type] - elif not isinstance(obj_type, type): - raise TypeError('type must be a str or valid type, but ' - f'got {type(obj_type)}') - if default_args is not None: - for name, value in default_args.items(): - args.setdefault(name, value) - return obj_type(**args) - - -def set_random_seed(seed, deterministic=False, use_rank_shift=False): - """Set random seed. - - Args: - seed (int): Seed to be used. 
- deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. - rank_shift (bool): Whether to add rank number to the random seed to - have different random seed in different threads. Default: False. - """ - if use_rank_shift: - rank, _ = mmcv.runner.get_dist_info() - seed += rank - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - os.environ['PYTHONHASHSEED'] = str(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/utils/parrots_wrapper.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/utils/parrots_wrapper.py deleted file mode 100644 index 93c97640d4b9ed088ca82cfe03e6efebfcfa9dbf..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/utils/parrots_wrapper.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from functools import partial - -import torch - -TORCH_VERSION = torch.__version__ - - -def is_rocm_pytorch() -> bool: - is_rocm = False - if TORCH_VERSION != 'parrots': - try: - from torch.utils.cpp_extension import ROCM_HOME - is_rocm = True if ((torch.version.hip is not None) and - (ROCM_HOME is not None)) else False - except ImportError: - pass - return is_rocm - - -def _get_cuda_home(): - if TORCH_VERSION == 'parrots': - from parrots.utils.build_extension import CUDA_HOME - else: - if is_rocm_pytorch(): - from torch.utils.cpp_extension import ROCM_HOME - CUDA_HOME = ROCM_HOME - else: - from torch.utils.cpp_extension import CUDA_HOME - return CUDA_HOME - - -def get_build_config(): - if TORCH_VERSION == 'parrots': - from parrots.config import get_build_info - return get_build_info() - else: - return torch.__config__.show() - - -def _get_conv(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.conv import _ConvNd, _ConvTransposeMixin - else: - from torch.nn.modules.conv import _ConvNd, _ConvTransposeMixin - return _ConvNd, _ConvTransposeMixin - - -def _get_dataloader(): - if TORCH_VERSION == 'parrots': - from torch.utils.data import DataLoader, PoolDataLoader - else: - from torch.utils.data import DataLoader - PoolDataLoader = DataLoader - return DataLoader, PoolDataLoader - - -def _get_extension(): - if TORCH_VERSION == 'parrots': - from parrots.utils.build_extension import BuildExtension, Extension - CppExtension = partial(Extension, cuda=False) - CUDAExtension = partial(Extension, cuda=True) - else: - from torch.utils.cpp_extension import (BuildExtension, CppExtension, - CUDAExtension) - return BuildExtension, CppExtension, CUDAExtension - - -def _get_pool(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.pool import (_AdaptiveAvgPoolNd, - _AdaptiveMaxPoolNd, _AvgPoolNd, - _MaxPoolNd) - else: - from torch.nn.modules.pooling import (_AdaptiveAvgPoolNd, - _AdaptiveMaxPoolNd, _AvgPoolNd, - _MaxPoolNd) - return _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd - - -def _get_norm(): - if TORCH_VERSION == 'parrots': - from parrots.nn.modules.batchnorm import _BatchNorm, _InstanceNorm - SyncBatchNorm_ = torch.nn.SyncBatchNorm2d - else: - from torch.nn.modules.instancenorm import _InstanceNorm - from torch.nn.modules.batchnorm import _BatchNorm - SyncBatchNorm_ = torch.nn.SyncBatchNorm - return _BatchNorm, _InstanceNorm, 
SyncBatchNorm_ - - -_ConvNd, _ConvTransposeMixin = _get_conv() -DataLoader, PoolDataLoader = _get_dataloader() -BuildExtension, CppExtension, CUDAExtension = _get_extension() -_BatchNorm, _InstanceNorm, SyncBatchNorm_ = _get_norm() -_AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd = _get_pool() - - -class SyncBatchNorm(SyncBatchNorm_): - - def _check_input_dim(self, input): - if TORCH_VERSION == 'parrots': - if input.dim() < 2: - raise ValueError( - f'expected at least 2D input (got {input.dim()}D input)') - else: - super()._check_input_dim(input) diff --git a/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/diffusionmodules/model.py b/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/diffusionmodules/model.py deleted file mode 100644 index 533e589a2024f1d7c52093d8c472c3b1b6617e26..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/diffusionmodules/model.py +++ /dev/null @@ -1,835 +0,0 @@ -# pytorch_diffusion + derived encoder decoder -import math -import torch -import torch.nn as nn -import numpy as np -from einops import rearrange - -from ldm.util import instantiate_from_config -from ldm.modules.attention import LinearAttention - - -def get_timestep_embedding(timesteps, embedding_dim): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: - From Fairseq. - Build sinusoidal embeddings. - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". - """ - assert len(timesteps.shape) == 1 - - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb) - emb = emb.to(device=timesteps.device) - emb = timesteps.float()[:, None] * emb[None, :] - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) - if embedding_dim % 2 == 1: # zero pad - emb = torch.nn.functional.pad(emb, (0,1,0,0)) - return emb - - -def nonlinearity(x): - # swish - return x*torch.sigmoid(x) - - -def Normalize(in_channels, num_groups=32): - return torch.nn.GroupNorm(num_groups=num_groups, num_channels=in_channels, eps=1e-6, affine=True) - - -class Upsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=2, - padding=0) - - def forward(self, x): - if self.with_conv: - pad = (0,1,0,1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2) - return x - - -class ResnetBlock(nn.Module): - def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False, - dropout, temb_channels=512): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - - self.norm1 = Normalize(in_channels) - self.conv1 = torch.nn.Conv2d(in_channels, - 
out_channels, - kernel_size=3, - stride=1, - padding=1) - if temb_channels > 0: - self.temb_proj = torch.nn.Linear(temb_channels, - out_channels) - self.norm2 = Normalize(out_channels) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d(out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - self.conv_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - else: - self.nin_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x, temb): - h = x - h = self.norm1(h) - h = nonlinearity(h) - h = self.conv1(h) - - if temb is not None: - h = h + self.temb_proj(nonlinearity(temb))[:,:,None,None] - - h = self.norm2(h) - h = nonlinearity(h) - h = self.dropout(h) - h = self.conv2(h) - - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - x = self.conv_shortcut(x) - else: - x = self.nin_shortcut(x) - - return x+h - - -class LinAttnBlock(LinearAttention): - """to match AttnBlock usage""" - def __init__(self, in_channels): - super().__init__(dim=in_channels, heads=1, dim_head=in_channels) - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = q.reshape(b,c,h*w) - q = q.permute(0,2,1) # b,hw,c - k = k.reshape(b,c,h*w) # b,c,hw - w_ = torch.bmm(q,k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j] - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b,c,h*w) - w_ = w_.permute(0,2,1) # b,hw,hw (first hw of k, second of q) - h_ = torch.bmm(v,w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j] - h_ = h_.reshape(b,c,h,w) - - h_ = self.proj_out(h_) - - return x+h_ - - -def make_attn(in_channels, attn_type="vanilla"): - assert attn_type in ["vanilla", "linear", "none"], f'attn_type {attn_type} unknown' - print(f"making attention of type '{attn_type}' with {in_channels} in_channels") - if attn_type == "vanilla": - return AttnBlock(in_channels) - elif attn_type == "none": - return nn.Identity(in_channels) - else: - return LinAttnBlock(in_channels) - - -class Model(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, use_timestep=True, use_linear_attn=False, attn_type="vanilla"): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = self.ch*4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList([ - torch.nn.Linear(self.ch, - self.temb_ch), - torch.nn.Linear(self.temb_ch, - 
self.temb_ch), - ]) - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - skip_in = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - if i_block == self.num_res_blocks: - skip_in = ch*in_ch_mult[i_level] - block.append(ResnetBlock(in_channels=block_in+skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x, t=None, context=None): - #assert x.shape[2] == x.shape[3] == self.resolution - if context is not None: - # assume aligned context, cat along channel axis - x = torch.cat((x, context), dim=1) - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - def 
get_last_layer(self): - return self.conv_out.weight - - -class Encoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, double_z=True, use_linear_attn=False, attn_type="vanilla", - **ignore_kwargs): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.in_ch_mult = in_ch_mult - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - 2*z_channels if double_z else z_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # timestep embedding - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Decoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, give_pre_end=False, tanh_out=False, use_linear_attn=False, - attn_type="vanilla", **ignorekwargs): - super().__init__() - if use_linear_attn: attn_type = "linear" - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.give_pre_end = give_pre_end - self.tanh_out = tanh_out - - # compute in_ch_mult, block_in and curr_res at lowest res - in_ch_mult = (1,)+tuple(ch_mult) - block_in = ch*ch_mult[self.num_resolutions-1] - curr_res = resolution // 2**(self.num_resolutions-1) - self.z_shape = (1,z_channels,curr_res,curr_res) - print("Working with z of shape {} = {} 
dimensions.".format( - self.z_shape, np.prod(self.z_shape))) - - # z to block_in - self.conv_in = torch.nn.Conv2d(z_channels, - block_in, - kernel_size=3, - stride=1, - padding=1) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = make_attn(block_in, attn_type=attn_type) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(make_attn(block_in, attn_type=attn_type)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, z): - #assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block](h, temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - if self.give_pre_end: - return h - - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - if self.tanh_out: - h = torch.tanh(h) - return h - - -class SimpleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, *args, **kwargs): - super().__init__() - self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1), - ResnetBlock(in_channels=in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=2 * in_channels, - out_channels=4 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=4 * in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - nn.Conv2d(2*in_channels, in_channels, 1), - Upsample(in_channels, with_conv=True)]) - # end - self.norm_out = Normalize(in_channels) - self.conv_out = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - for i, layer in enumerate(self.model): - if i in [1,2,3]: - x = layer(x, None) - else: - x = layer(x) - - h = self.norm_out(x) - h = nonlinearity(h) - x = self.conv_out(h) - return x - - -class UpsampleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution, - ch_mult=(2,2), dropout=0.0): - super().__init__() - # upsampling - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - block_in = in_channels - curr_res = resolution // 2 ** (self.num_resolutions - 1) - self.res_blocks = nn.ModuleList() - self.upsample_blocks = nn.ModuleList() - for i_level in 
range(self.num_resolutions): - res_block = [] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - res_block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - self.res_blocks.append(nn.ModuleList(res_block)) - if i_level != self.num_resolutions - 1: - self.upsample_blocks.append(Upsample(block_in, True)) - curr_res = curr_res * 2 - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # upsampling - h = x - for k, i_level in enumerate(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.res_blocks[i_level][i_block](h, None) - if i_level != self.num_resolutions - 1: - h = self.upsample_blocks[k](h) - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class LatentRescaler(nn.Module): - def __init__(self, factor, in_channels, mid_channels, out_channels, depth=2): - super().__init__() - # residual block, interpolate, residual block - self.factor = factor - self.conv_in = nn.Conv2d(in_channels, - mid_channels, - kernel_size=3, - stride=1, - padding=1) - self.res_block1 = nn.ModuleList([ResnetBlock(in_channels=mid_channels, - out_channels=mid_channels, - temb_channels=0, - dropout=0.0) for _ in range(depth)]) - self.attn = AttnBlock(mid_channels) - self.res_block2 = nn.ModuleList([ResnetBlock(in_channels=mid_channels, - out_channels=mid_channels, - temb_channels=0, - dropout=0.0) for _ in range(depth)]) - - self.conv_out = nn.Conv2d(mid_channels, - out_channels, - kernel_size=1, - ) - - def forward(self, x): - x = self.conv_in(x) - for block in self.res_block1: - x = block(x, None) - x = torch.nn.functional.interpolate(x, size=(int(round(x.shape[2]*self.factor)), int(round(x.shape[3]*self.factor)))) - x = self.attn(x) - for block in self.res_block2: - x = block(x, None) - x = self.conv_out(x) - return x - - -class MergedRescaleEncoder(nn.Module): - def __init__(self, in_channels, ch, resolution, out_ch, num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, - ch_mult=(1,2,4,8), rescale_factor=1.0, rescale_module_depth=1): - super().__init__() - intermediate_chn = ch * ch_mult[-1] - self.encoder = Encoder(in_channels=in_channels, num_res_blocks=num_res_blocks, ch=ch, ch_mult=ch_mult, - z_channels=intermediate_chn, double_z=False, resolution=resolution, - attn_resolutions=attn_resolutions, dropout=dropout, resamp_with_conv=resamp_with_conv, - out_ch=None) - self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=intermediate_chn, - mid_channels=intermediate_chn, out_channels=out_ch, depth=rescale_module_depth) - - def forward(self, x): - x = self.encoder(x) - x = self.rescaler(x) - return x - - -class MergedRescaleDecoder(nn.Module): - def __init__(self, z_channels, out_ch, resolution, num_res_blocks, attn_resolutions, ch, ch_mult=(1,2,4,8), - dropout=0.0, resamp_with_conv=True, rescale_factor=1.0, rescale_module_depth=1): - super().__init__() - tmp_chn = z_channels*ch_mult[-1] - self.decoder = Decoder(out_ch=out_ch, z_channels=tmp_chn, attn_resolutions=attn_resolutions, dropout=dropout, - resamp_with_conv=resamp_with_conv, in_channels=None, num_res_blocks=num_res_blocks, - ch_mult=ch_mult, resolution=resolution, ch=ch) - self.rescaler = LatentRescaler(factor=rescale_factor, in_channels=z_channels, mid_channels=tmp_chn, - out_channels=tmp_chn, 
depth=rescale_module_depth) - - def forward(self, x): - x = self.rescaler(x) - x = self.decoder(x) - return x - - -class Upsampler(nn.Module): - def __init__(self, in_size, out_size, in_channels, out_channels, ch_mult=2): - super().__init__() - assert out_size >= in_size - num_blocks = int(np.log2(out_size//in_size))+1 - factor_up = 1.+ (out_size % in_size) - print(f"Building {self.__class__.__name__} with in_size: {in_size} --> out_size {out_size} and factor {factor_up}") - self.rescaler = LatentRescaler(factor=factor_up, in_channels=in_channels, mid_channels=2*in_channels, - out_channels=in_channels) - self.decoder = Decoder(out_ch=out_channels, resolution=out_size, z_channels=in_channels, num_res_blocks=2, - attn_resolutions=[], in_channels=None, ch=in_channels, - ch_mult=[ch_mult for _ in range(num_blocks)]) - - def forward(self, x): - x = self.rescaler(x) - x = self.decoder(x) - return x - - -class Resize(nn.Module): - def __init__(self, in_channels=None, learned=False, mode="bilinear"): - super().__init__() - self.with_conv = learned - self.mode = mode - if self.with_conv: - print(f"Note: {self.__class__.__name} uses learned downsampling and will ignore the fixed {mode} mode") - raise NotImplementedError() - assert in_channels is not None - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=4, - stride=2, - padding=1) - - def forward(self, x, scale_factor=1.0): - if scale_factor==1.0: - return x - else: - x = torch.nn.functional.interpolate(x, mode=self.mode, align_corners=False, scale_factor=scale_factor) - return x - -class FirstStagePostProcessor(nn.Module): - - def __init__(self, ch_mult:list, in_channels, - pretrained_model:nn.Module=None, - reshape=False, - n_channels=None, - dropout=0., - pretrained_config=None): - super().__init__() - if pretrained_config is None: - assert pretrained_model is not None, 'Either "pretrained_model" or "pretrained_config" must not be None' - self.pretrained_model = pretrained_model - else: - assert pretrained_config is not None, 'Either "pretrained_model" or "pretrained_config" must not be None' - self.instantiate_pretrained(pretrained_config) - - self.do_reshape = reshape - - if n_channels is None: - n_channels = self.pretrained_model.encoder.ch - - self.proj_norm = Normalize(in_channels,num_groups=in_channels//2) - self.proj = nn.Conv2d(in_channels,n_channels,kernel_size=3, - stride=1,padding=1) - - blocks = [] - downs = [] - ch_in = n_channels - for m in ch_mult: - blocks.append(ResnetBlock(in_channels=ch_in,out_channels=m*n_channels,dropout=dropout)) - ch_in = m * n_channels - downs.append(Downsample(ch_in, with_conv=False)) - - self.model = nn.ModuleList(blocks) - self.downsampler = nn.ModuleList(downs) - - - def instantiate_pretrained(self, config): - model = instantiate_from_config(config) - self.pretrained_model = model.eval() - # self.pretrained_model.train = False - for param in self.pretrained_model.parameters(): - param.requires_grad = False - - - @torch.no_grad() - def encode_with_pretrained(self,x): - c = self.pretrained_model.encode(x) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - return c - - def forward(self,x): - z_fs = self.encode_with_pretrained(x) - z = self.proj_norm(z_fs) - z = self.proj(z) - z = nonlinearity(z) - - for submodel, downmodel in zip(self.model,self.downsampler): - z = submodel(z,temb=None) - z = downmodel(z) - - if self.do_reshape: - z = rearrange(z,'b c h w -> b (h w) c') - return z - diff --git 
a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/simplify.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/simplify.go deleted file mode 100644 index 19bec774f13fe9a17e5596cb06eef3fceed0e504..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/simplify.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/match.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/match.go deleted file mode 100644 index 25832f4ebeeb7dd6076de6fa94eb8f3f7c935bb9..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/match.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/stabilityai-stablecode-instruct-alpha-3b/app.py b/spaces/PeepDaSlan9/stabilityai-stablecode-instruct-alpha-3b/app.py deleted file mode 100644 index 5be5a08bcfd1d6c532d409da09c75cf47e0e5415..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/stabilityai-stablecode-instruct-alpha-3b/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stablecode-instruct-alpha-3b").launch() \ No newline at end of file diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/utils/make_divisible.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/utils/make_divisible.py deleted file mode 100644 index 75ad756052529f52fe83bb95dd1f0ecfc9a13078..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/utils/make_divisible.py +++ /dev/null @@ -1,27 +0,0 @@ -def make_divisible(value, divisor, min_value=None, min_ratio=0.9): - """Make divisible function. - - This function rounds the channel number to the nearest value that can be - divisible by the divisor. It is taken from the original tf repo. It ensures - that all layers have a channel number that is divisible by divisor. It can - be seen here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py # noqa - - Args: - value (int): The original channel number. - divisor (int): The divisor to fully divide the channel number. - min_value (int): The minimum value of the output channel. - Default: None, means that the minimum value equal to the divisor. - min_ratio (float): The minimum ratio of the rounded channel number to - the original channel number. Default: 0.9. - - Returns: - int: The modified output channel number. - """ - - if min_value is None: - min_value = divisor - new_value = max(min_value, int(value + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than (1-min_ratio). 
- if new_value < min_ratio * value: - new_value += divisor - return new_value diff --git a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/scripts/extract_segmentation.py b/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/scripts/extract_segmentation.py deleted file mode 100644 index 235b3c4b4575981b7533ce18bceaff97e05b55f9..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/scripts/extract_segmentation.py +++ /dev/null @@ -1,130 +0,0 @@ -import sys, os -import numpy as np -import scipy -import torch -import torch.nn as nn -from scipy import ndimage -from tqdm import tqdm, trange -from PIL import Image -import torch.hub -import torchvision -import torch.nn.functional as F - -# download deeplabv2_resnet101_msc-cocostuff164k-100000.pth from -# https://github.com/kazuto1011/deeplab-pytorch/releases/download/v1.0/deeplabv2_resnet101_msc-cocostuff164k-100000.pth -# and put the path here -CKPT_PATH = "TODO" - -rescale = lambda x: (x + 1.) / 2. - -def rescale_bgr(x): - x = (x+1)*127.5 - x = torch.flip(x, dims=[0]) - return x - - -class COCOStuffSegmenter(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.n_labels = 182 - model = torch.hub.load("kazuto1011/deeplab-pytorch", "deeplabv2_resnet101", n_classes=self.n_labels) - ckpt_path = CKPT_PATH - model.load_state_dict(torch.load(ckpt_path)) - self.model = model - - normalize = torchvision.transforms.Normalize(mean=self.mean, std=self.std) - self.image_transform = torchvision.transforms.Compose([ - torchvision.transforms.Lambda(lambda image: torch.stack( - [normalize(rescale_bgr(x)) for x in image])) - ]) - - def forward(self, x, upsample=None): - x = self._pre_process(x) - x = self.model(x) - if upsample is not None: - x = torch.nn.functional.upsample_bilinear(x, size=upsample) - return x - - def _pre_process(self, x): - x = self.image_transform(x) - return x - - @property - def mean(self): - # bgr - return [104.008, 116.669, 122.675] - - @property - def std(self): - return [1.0, 1.0, 1.0] - - @property - def input_size(self): - return [3, 224, 224] - - -def run_model(img, model): - model = model.eval() - with torch.no_grad(): - segmentation = model(img, upsample=(img.shape[2], img.shape[3])) - segmentation = torch.argmax(segmentation, dim=1, keepdim=True) - return segmentation.detach().cpu() - - -def get_input(batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format) - return x.float() - - -def save_segmentation(segmentation, path): - # --> class label to uint8, save as png - os.makedirs(os.path.dirname(path), exist_ok=True) - assert len(segmentation.shape)==4 - assert segmentation.shape[0]==1 - for seg in segmentation: - seg = seg.permute(1,2,0).numpy().squeeze().astype(np.uint8) - seg = Image.fromarray(seg) - seg.save(path) - - -def iterate_dataset(dataloader, destpath, model): - os.makedirs(destpath, exist_ok=True) - num_processed = 0 - for i, batch in tqdm(enumerate(dataloader), desc="Data"): - try: - img = get_input(batch, "image") - img = img.cuda() - seg = run_model(img, model) - - path = batch["relative_file_path_"][0] - path = os.path.splitext(path)[0] - - path = os.path.join(destpath, path + ".png") - save_segmentation(seg, path) - num_processed += 1 - except Exception as e: - print(e) - print("but anyhow..") - - print("Processed {} files. 
Bye.".format(num_processed)) - - -from taming.data.sflckr import Examples -from torch.utils.data import DataLoader - -if __name__ == "__main__": - dest = sys.argv[1] - batchsize = 1 - print("Running with batch-size {}, saving to {}...".format(batchsize, dest)) - - model = COCOStuffSegmenter({}).cuda() - print("Instantiated model.") - - dataset = Examples() - dloader = DataLoader(dataset, batch_size=batchsize) - iterate_dataset(dataloader=dloader, destpath=dest, model=model) - print("done.") diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_win32_console.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_win32_console.py deleted file mode 100644 index 81b1082905338a74b72b9de432ece50a456687bc..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_win32_console.py +++ /dev/null @@ -1,662 +0,0 @@ -"""Light wrapper around the Win32 Console API - this module should only be imported on Windows - -The API that this module wraps is documented at https://docs.microsoft.com/en-us/windows/console/console-functions -""" -import ctypes -import sys -from typing import Any - -windll: Any = None -if sys.platform == "win32": - windll = ctypes.LibraryLoader(ctypes.WinDLL) -else: - raise ImportError(f"{__name__} can only be imported on Windows") - -import time -from ctypes import Structure, byref, wintypes -from typing import IO, NamedTuple, Type, cast - -from pip._vendor.rich.color import ColorSystem -from pip._vendor.rich.style import Style - -STDOUT = -11 -ENABLE_VIRTUAL_TERMINAL_PROCESSING = 4 - -COORD = wintypes._COORD - - -class LegacyWindowsError(Exception): - pass - - -class WindowsCoordinates(NamedTuple): - """Coordinates in the Windows Console API are (y, x), not (x, y). - This class is intended to prevent that confusion. - Rows and columns are indexed from 0. - This class can be used in place of wintypes._COORD in arguments and argtypes. - """ - - row: int - col: int - - @classmethod - def from_param(cls, value: "WindowsCoordinates") -> COORD: - """Converts a WindowsCoordinates into a wintypes _COORD structure. - This classmethod is internally called by ctypes to perform the conversion. - - Args: - value (WindowsCoordinates): The input coordinates to convert. - - Returns: - wintypes._COORD: The converted coordinates struct. - """ - return COORD(value.col, value.row) - - -class CONSOLE_SCREEN_BUFFER_INFO(Structure): - _fields_ = [ - ("dwSize", COORD), - ("dwCursorPosition", COORD), - ("wAttributes", wintypes.WORD), - ("srWindow", wintypes.SMALL_RECT), - ("dwMaximumWindowSize", COORD), - ] - - -class CONSOLE_CURSOR_INFO(ctypes.Structure): - _fields_ = [("dwSize", wintypes.DWORD), ("bVisible", wintypes.BOOL)] - - -_GetStdHandle = windll.kernel32.GetStdHandle -_GetStdHandle.argtypes = [ - wintypes.DWORD, -] -_GetStdHandle.restype = wintypes.HANDLE - - -def GetStdHandle(handle: int = STDOUT) -> wintypes.HANDLE: - """Retrieves a handle to the specified standard device (standard input, standard output, or standard error). - - Args: - handle (int): Integer identifier for the handle. Defaults to -11 (stdout). 
- - Returns: - wintypes.HANDLE: The handle - """ - return cast(wintypes.HANDLE, _GetStdHandle(handle)) - - -_GetConsoleMode = windll.kernel32.GetConsoleMode -_GetConsoleMode.argtypes = [wintypes.HANDLE, wintypes.LPDWORD] -_GetConsoleMode.restype = wintypes.BOOL - - -def GetConsoleMode(std_handle: wintypes.HANDLE) -> int: - """Retrieves the current input mode of a console's input buffer - or the current output mode of a console screen buffer. - - Args: - std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer. - - Raises: - LegacyWindowsError: If any error occurs while calling the Windows console API. - - Returns: - int: Value representing the current console mode as documented at - https://docs.microsoft.com/en-us/windows/console/getconsolemode#parameters - """ - - console_mode = wintypes.DWORD() - success = bool(_GetConsoleMode(std_handle, console_mode)) - if not success: - raise LegacyWindowsError("Unable to get legacy Windows Console Mode") - return console_mode.value - - -_FillConsoleOutputCharacterW = windll.kernel32.FillConsoleOutputCharacterW -_FillConsoleOutputCharacterW.argtypes = [ - wintypes.HANDLE, - ctypes.c_char, - wintypes.DWORD, - cast(Type[COORD], WindowsCoordinates), - ctypes.POINTER(wintypes.DWORD), -] -_FillConsoleOutputCharacterW.restype = wintypes.BOOL - - -def FillConsoleOutputCharacter( - std_handle: wintypes.HANDLE, - char: str, - length: int, - start: WindowsCoordinates, -) -> int: - """Writes a character to the console screen buffer a specified number of times, beginning at the specified coordinates. - - Args: - std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer. - char (str): The character to write. Must be a string of length 1. - length (int): The number of times to write the character. - start (WindowsCoordinates): The coordinates to start writing at. - - Returns: - int: The number of characters written. - """ - character = ctypes.c_char(char.encode()) - num_characters = wintypes.DWORD(length) - num_written = wintypes.DWORD(0) - _FillConsoleOutputCharacterW( - std_handle, - character, - num_characters, - start, - byref(num_written), - ) - return num_written.value - - -_FillConsoleOutputAttribute = windll.kernel32.FillConsoleOutputAttribute -_FillConsoleOutputAttribute.argtypes = [ - wintypes.HANDLE, - wintypes.WORD, - wintypes.DWORD, - cast(Type[COORD], WindowsCoordinates), - ctypes.POINTER(wintypes.DWORD), -] -_FillConsoleOutputAttribute.restype = wintypes.BOOL - - -def FillConsoleOutputAttribute( - std_handle: wintypes.HANDLE, - attributes: int, - length: int, - start: WindowsCoordinates, -) -> int: - """Sets the character attributes for a specified number of character cells, - beginning at the specified coordinates in a screen buffer. - - Args: - std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer. - attributes (int): Integer value representing the foreground and background colours of the cells. - length (int): The number of cells to set the output attribute of. - start (WindowsCoordinates): The coordinates of the first cell whose attributes are to be set. - - Returns: - int: The number of cells whose attributes were actually set. 
- """ - num_cells = wintypes.DWORD(length) - style_attrs = wintypes.WORD(attributes) - num_written = wintypes.DWORD(0) - _FillConsoleOutputAttribute( - std_handle, style_attrs, num_cells, start, byref(num_written) - ) - return num_written.value - - -_SetConsoleTextAttribute = windll.kernel32.SetConsoleTextAttribute -_SetConsoleTextAttribute.argtypes = [ - wintypes.HANDLE, - wintypes.WORD, -] -_SetConsoleTextAttribute.restype = wintypes.BOOL - - -def SetConsoleTextAttribute( - std_handle: wintypes.HANDLE, attributes: wintypes.WORD -) -> bool: - """Set the colour attributes for all text written after this function is called. - - Args: - std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer. - attributes (int): Integer value representing the foreground and background colours. - - - Returns: - bool: True if the attribute was set successfully, otherwise False. - """ - return bool(_SetConsoleTextAttribute(std_handle, attributes)) - - -_GetConsoleScreenBufferInfo = windll.kernel32.GetConsoleScreenBufferInfo -_GetConsoleScreenBufferInfo.argtypes = [ - wintypes.HANDLE, - ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO), -] -_GetConsoleScreenBufferInfo.restype = wintypes.BOOL - - -def GetConsoleScreenBufferInfo( - std_handle: wintypes.HANDLE, -) -> CONSOLE_SCREEN_BUFFER_INFO: - """Retrieves information about the specified console screen buffer. - - Args: - std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer. - - Returns: - CONSOLE_SCREEN_BUFFER_INFO: A CONSOLE_SCREEN_BUFFER_INFO ctype struct contain information about - screen size, cursor position, colour attributes, and more.""" - console_screen_buffer_info = CONSOLE_SCREEN_BUFFER_INFO() - _GetConsoleScreenBufferInfo(std_handle, byref(console_screen_buffer_info)) - return console_screen_buffer_info - - -_SetConsoleCursorPosition = windll.kernel32.SetConsoleCursorPosition -_SetConsoleCursorPosition.argtypes = [ - wintypes.HANDLE, - cast(Type[COORD], WindowsCoordinates), -] -_SetConsoleCursorPosition.restype = wintypes.BOOL - - -def SetConsoleCursorPosition( - std_handle: wintypes.HANDLE, coords: WindowsCoordinates -) -> bool: - """Set the position of the cursor in the console screen - - Args: - std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer. - coords (WindowsCoordinates): The coordinates to move the cursor to. - - Returns: - bool: True if the function succeeds, otherwise False. - """ - return bool(_SetConsoleCursorPosition(std_handle, coords)) - - -_GetConsoleCursorInfo = windll.kernel32.GetConsoleCursorInfo -_GetConsoleCursorInfo.argtypes = [ - wintypes.HANDLE, - ctypes.POINTER(CONSOLE_CURSOR_INFO), -] -_GetConsoleCursorInfo.restype = wintypes.BOOL - - -def GetConsoleCursorInfo( - std_handle: wintypes.HANDLE, cursor_info: CONSOLE_CURSOR_INFO -) -> bool: - """Get the cursor info - used to get cursor visibility and width - - Args: - std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer. - cursor_info (CONSOLE_CURSOR_INFO): CONSOLE_CURSOR_INFO ctype struct that receives information - about the console's cursor. - - Returns: - bool: True if the function succeeds, otherwise False. 
- """ - return bool(_GetConsoleCursorInfo(std_handle, byref(cursor_info))) - - -_SetConsoleCursorInfo = windll.kernel32.SetConsoleCursorInfo -_SetConsoleCursorInfo.argtypes = [ - wintypes.HANDLE, - ctypes.POINTER(CONSOLE_CURSOR_INFO), -] -_SetConsoleCursorInfo.restype = wintypes.BOOL - - -def SetConsoleCursorInfo( - std_handle: wintypes.HANDLE, cursor_info: CONSOLE_CURSOR_INFO -) -> bool: - """Set the cursor info - used for adjusting cursor visibility and width - - Args: - std_handle (wintypes.HANDLE): A handle to the console input buffer or the console screen buffer. - cursor_info (CONSOLE_CURSOR_INFO): CONSOLE_CURSOR_INFO ctype struct containing the new cursor info. - - Returns: - bool: True if the function succeeds, otherwise False. - """ - return bool(_SetConsoleCursorInfo(std_handle, byref(cursor_info))) - - -_SetConsoleTitle = windll.kernel32.SetConsoleTitleW -_SetConsoleTitle.argtypes = [wintypes.LPCWSTR] -_SetConsoleTitle.restype = wintypes.BOOL - - -def SetConsoleTitle(title: str) -> bool: - """Sets the title of the current console window - - Args: - title (str): The new title of the console window. - - Returns: - bool: True if the function succeeds, otherwise False. - """ - return bool(_SetConsoleTitle(title)) - - -class LegacyWindowsTerm: - """This class allows interaction with the legacy Windows Console API. It should only be used in the context - of environments where virtual terminal processing is not available. However, if it is used in a Windows environment, - the entire API should work. - - Args: - file (IO[str]): The file which the Windows Console API HANDLE is retrieved from, defaults to sys.stdout. - """ - - BRIGHT_BIT = 8 - - # Indices are ANSI color numbers, values are the corresponding Windows Console API color numbers - ANSI_TO_WINDOWS = [ - 0, # black The Windows colours are defined in wincon.h as follows: - 4, # red define FOREGROUND_BLUE 0x0001 -- 0000 0001 - 2, # green define FOREGROUND_GREEN 0x0002 -- 0000 0010 - 6, # yellow define FOREGROUND_RED 0x0004 -- 0000 0100 - 1, # blue define FOREGROUND_INTENSITY 0x0008 -- 0000 1000 - 5, # magenta define BACKGROUND_BLUE 0x0010 -- 0001 0000 - 3, # cyan define BACKGROUND_GREEN 0x0020 -- 0010 0000 - 7, # white define BACKGROUND_RED 0x0040 -- 0100 0000 - 8, # bright black (grey) define BACKGROUND_INTENSITY 0x0080 -- 1000 0000 - 12, # bright red - 10, # bright green - 14, # bright yellow - 9, # bright blue - 13, # bright magenta - 11, # bright cyan - 15, # bright white - ] - - def __init__(self, file: "IO[str]") -> None: - handle = GetStdHandle(STDOUT) - self._handle = handle - default_text = GetConsoleScreenBufferInfo(handle).wAttributes - self._default_text = default_text - - self._default_fore = default_text & 7 - self._default_back = (default_text >> 4) & 7 - self._default_attrs = self._default_fore | (self._default_back << 4) - - self._file = file - self.write = file.write - self.flush = file.flush - - @property - def cursor_position(self) -> WindowsCoordinates: - """Returns the current position of the cursor (0-based) - - Returns: - WindowsCoordinates: The current cursor position. - """ - coord: COORD = GetConsoleScreenBufferInfo(self._handle).dwCursorPosition - return WindowsCoordinates(row=cast(int, coord.Y), col=cast(int, coord.X)) - - @property - def screen_size(self) -> WindowsCoordinates: - """Returns the current size of the console screen buffer, in character columns and rows - - Returns: - WindowsCoordinates: The width and height of the screen as WindowsCoordinates. 
- """ - screen_size: COORD = GetConsoleScreenBufferInfo(self._handle).dwSize - return WindowsCoordinates( - row=cast(int, screen_size.Y), col=cast(int, screen_size.X) - ) - - def write_text(self, text: str) -> None: - """Write text directly to the terminal without any modification of styles - - Args: - text (str): The text to write to the console - """ - self.write(text) - self.flush() - - def write_styled(self, text: str, style: Style) -> None: - """Write styled text to the terminal. - - Args: - text (str): The text to write - style (Style): The style of the text - """ - color = style.color - bgcolor = style.bgcolor - if style.reverse: - color, bgcolor = bgcolor, color - - if color: - fore = color.downgrade(ColorSystem.WINDOWS).number - fore = fore if fore is not None else 7 # Default to ANSI 7: White - if style.bold: - fore = fore | self.BRIGHT_BIT - if style.dim: - fore = fore & ~self.BRIGHT_BIT - fore = self.ANSI_TO_WINDOWS[fore] - else: - fore = self._default_fore - - if bgcolor: - back = bgcolor.downgrade(ColorSystem.WINDOWS).number - back = back if back is not None else 0 # Default to ANSI 0: Black - back = self.ANSI_TO_WINDOWS[back] - else: - back = self._default_back - - assert fore is not None - assert back is not None - - SetConsoleTextAttribute( - self._handle, attributes=ctypes.c_ushort(fore | (back << 4)) - ) - self.write_text(text) - SetConsoleTextAttribute(self._handle, attributes=self._default_text) - - def move_cursor_to(self, new_position: WindowsCoordinates) -> None: - """Set the position of the cursor - - Args: - new_position (WindowsCoordinates): The WindowsCoordinates representing the new position of the cursor. - """ - if new_position.col < 0 or new_position.row < 0: - return - SetConsoleCursorPosition(self._handle, coords=new_position) - - def erase_line(self) -> None: - """Erase all content on the line the cursor is currently located at""" - screen_size = self.screen_size - cursor_position = self.cursor_position - cells_to_erase = screen_size.col - start_coordinates = WindowsCoordinates(row=cursor_position.row, col=0) - FillConsoleOutputCharacter( - self._handle, " ", length=cells_to_erase, start=start_coordinates - ) - FillConsoleOutputAttribute( - self._handle, - self._default_attrs, - length=cells_to_erase, - start=start_coordinates, - ) - - def erase_end_of_line(self) -> None: - """Erase all content from the cursor position to the end of that line""" - cursor_position = self.cursor_position - cells_to_erase = self.screen_size.col - cursor_position.col - FillConsoleOutputCharacter( - self._handle, " ", length=cells_to_erase, start=cursor_position - ) - FillConsoleOutputAttribute( - self._handle, - self._default_attrs, - length=cells_to_erase, - start=cursor_position, - ) - - def erase_start_of_line(self) -> None: - """Erase all content from the cursor position to the start of that line""" - row, col = self.cursor_position - start = WindowsCoordinates(row, 0) - FillConsoleOutputCharacter(self._handle, " ", length=col, start=start) - FillConsoleOutputAttribute( - self._handle, self._default_attrs, length=col, start=start - ) - - def move_cursor_up(self) -> None: - """Move the cursor up a single cell""" - cursor_position = self.cursor_position - SetConsoleCursorPosition( - self._handle, - coords=WindowsCoordinates( - row=cursor_position.row - 1, col=cursor_position.col - ), - ) - - def move_cursor_down(self) -> None: - """Move the cursor down a single cell""" - cursor_position = self.cursor_position - SetConsoleCursorPosition( - self._handle, - 
coords=WindowsCoordinates( - row=cursor_position.row + 1, - col=cursor_position.col, - ), - ) - - def move_cursor_forward(self) -> None: - """Move the cursor forward a single cell. Wrap to the next line if required.""" - row, col = self.cursor_position - if col == self.screen_size.col - 1: - row += 1 - col = 0 - else: - col += 1 - SetConsoleCursorPosition( - self._handle, coords=WindowsCoordinates(row=row, col=col) - ) - - def move_cursor_to_column(self, column: int) -> None: - """Move cursor to the column specified by the zero-based column index, staying on the same row - - Args: - column (int): The zero-based column index to move the cursor to. - """ - row, _ = self.cursor_position - SetConsoleCursorPosition(self._handle, coords=WindowsCoordinates(row, column)) - - def move_cursor_backward(self) -> None: - """Move the cursor backward a single cell. Wrap to the previous line if required.""" - row, col = self.cursor_position - if col == 0: - row -= 1 - col = self.screen_size.col - 1 - else: - col -= 1 - SetConsoleCursorPosition( - self._handle, coords=WindowsCoordinates(row=row, col=col) - ) - - def hide_cursor(self) -> None: - """Hide the cursor""" - current_cursor_size = self._get_cursor_size() - invisible_cursor = CONSOLE_CURSOR_INFO(dwSize=current_cursor_size, bVisible=0) - SetConsoleCursorInfo(self._handle, cursor_info=invisible_cursor) - - def show_cursor(self) -> None: - """Show the cursor""" - current_cursor_size = self._get_cursor_size() - visible_cursor = CONSOLE_CURSOR_INFO(dwSize=current_cursor_size, bVisible=1) - SetConsoleCursorInfo(self._handle, cursor_info=visible_cursor) - - def set_title(self, title: str) -> None: - """Set the title of the terminal window - - Args: - title (str): The new title of the console window - """ - assert len(title) < 255, "Console title must be less than 255 characters" - SetConsoleTitle(title) - - def _get_cursor_size(self) -> int: - """Get the percentage of the character cell that is filled by the cursor""" - cursor_info = CONSOLE_CURSOR_INFO() - GetConsoleCursorInfo(self._handle, cursor_info=cursor_info) - return int(cursor_info.dwSize) - - -if __name__ == "__main__": - handle = GetStdHandle() - - from pip._vendor.rich.console import Console - - console = Console() - - term = LegacyWindowsTerm(sys.stdout) - term.set_title("Win32 Console Examples") - - style = Style(color="black", bgcolor="red") - - heading = Style.parse("black on green") - - # Check colour output - console.rule("Checking colour output") - console.print("[on red]on red!") - console.print("[blue]blue!") - console.print("[yellow]yellow!") - console.print("[bold yellow]bold yellow!") - console.print("[bright_yellow]bright_yellow!") - console.print("[dim bright_yellow]dim bright_yellow!") - console.print("[italic cyan]italic cyan!") - console.print("[bold white on blue]bold white on blue!") - console.print("[reverse bold white on blue]reverse bold white on blue!") - console.print("[bold black on cyan]bold black on cyan!") - console.print("[black on green]black on green!") - console.print("[blue on green]blue on green!") - console.print("[white on black]white on black!") - console.print("[black on white]black on white!") - console.print("[#1BB152 on #DA812D]#1BB152 on #DA812D!") - - # Check cursor movement - console.rule("Checking cursor movement") - console.print() - term.move_cursor_backward() - term.move_cursor_backward() - term.write_text("went back and wrapped to prev line") - time.sleep(1) - term.move_cursor_up() - term.write_text("we go up") - time.sleep(1) - 
term.move_cursor_down() - term.write_text("and down") - time.sleep(1) - term.move_cursor_up() - term.move_cursor_backward() - term.move_cursor_backward() - term.write_text("we went up and back 2") - time.sleep(1) - term.move_cursor_down() - term.move_cursor_backward() - term.move_cursor_backward() - term.write_text("we went down and back 2") - time.sleep(1) - - # Check erasing of lines - term.hide_cursor() - console.print() - console.rule("Checking line erasing") - console.print("\n...Deleting to the start of the line...") - term.write_text("The red arrow shows the cursor location, and direction of erase") - time.sleep(1) - term.move_cursor_to_column(16) - term.write_styled("<", Style.parse("black on red")) - term.move_cursor_backward() - time.sleep(1) - term.erase_start_of_line() - time.sleep(1) - - console.print("\n\n...And to the end of the line...") - term.write_text("The red arrow shows the cursor location, and direction of erase") - time.sleep(1) - - term.move_cursor_to_column(16) - term.write_styled(">", Style.parse("black on red")) - time.sleep(1) - term.erase_end_of_line() - time.sleep(1) - - console.print("\n\n...Now the whole line will be erased...") - term.write_styled("I'm going to disappear!", style=Style.parse("black on cyan")) - time.sleep(1) - term.erase_line() - - term.show_cursor() - print("\n") diff --git a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/scripts/reproduce_train/outdoor.sh b/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/scripts/reproduce_train/outdoor.sh deleted file mode 100644 index c447e8feaa5c7ef7ff74da3b622151c7018447a6..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/scripts/reproduce_train/outdoor.sh +++ /dev/null @@ -1,34 +0,0 @@ -#!/bin/bash -l - -SCRIPTPATH=$(dirname $(readlink -f "$0")) -PROJECT_DIR="${SCRIPTPATH}/../../" - -# conda activate loftr -export PYTHONPATH=$PROJECT_DIR:$PYTHONPATH -cd $PROJECT_DIR - -TRAIN_IMG_SIZE=832 -data_cfg_path="configs/data/megadepth_trainval_${TRAIN_IMG_SIZE}.py" -main_cfg_path="configs/aspan/outdoor/aspan_train.py" - -n_nodes=1 -n_gpus_per_node=8 -torch_num_workers=8 -batch_size=1 -pin_memory=true -exp_name="outdoor-ds-aspan-${TRAIN_IMG_SIZE}-bs=$(($n_gpus_per_node * $n_nodes * $batch_size))" - -CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' python -u ./train.py \ - ${data_cfg_path} \ - ${main_cfg_path} \ - --exp_name=${exp_name} \ - --gpus=${n_gpus_per_node} --num_nodes=${n_nodes} --accelerator="ddp" \ - --batch_size=${batch_size} --num_workers=${torch_num_workers} --pin_memory=${pin_memory} \ - --check_val_every_n_epoch=1 \ - --log_every_n_steps=100 \ - --flush_logs_every_n_steps=100 \ - --limit_val_batches=1. 
\ - --num_sanity_val_steps=10 \ - --benchmark=True \ - --max_epochs=30 \ - --mode integrated diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/scripts/reproduce_train/outdoor.sh b/spaces/Realcat/image-matching-webui/third_party/TopicFM/scripts/reproduce_train/outdoor.sh deleted file mode 100644 index d30320f04e0b560f4b4de9ee68305a4e698b538b..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/scripts/reproduce_train/outdoor.sh +++ /dev/null @@ -1,32 +0,0 @@ -#!/bin/bash -l - -SCRIPTPATH=$(dirname $(readlink -f "$0")) -PROJECT_DIR="${SCRIPTPATH}/../../" - -# conda activate loftr -export PYTHONPATH=$PROJECT_DIR:$PYTHONPATH -cd $PROJECT_DIR - -data_cfg_path="configs/data/megadepth_trainval.py" -main_cfg_path="configs/model/outdoor/model_ds.py" - -n_nodes=1 -n_gpus_per_node=4 -torch_num_workers=4 -batch_size=1 -pin_memory=true -exp_name="outdoor-bs=$(($n_gpus_per_node * $n_nodes * $batch_size))" - -python -u ./train.py \ - ${data_cfg_path} \ - ${main_cfg_path} \ - --exp_name=${exp_name} \ - --gpus=${n_gpus_per_node} --num_nodes=${n_nodes} --accelerator="ddp" \ - --batch_size=${batch_size} --num_workers=${torch_num_workers} --pin_memory=${pin_memory} \ - --check_val_every_n_epoch=1 \ - --log_every_n_steps=30000 \ - --flush_logs_every_n_steps=30000 \ - --limit_val_batches=1. \ - --num_sanity_val_steps=10 \ - --benchmark=True \ - --max_epochs=40 # --ckpt_path="pretrained_epoch22.ckpt" diff --git a/spaces/Robert001/UniControl-Demo/annotator/midas/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/midas/__init__.py deleted file mode 100644 index 426c7b2328f9cb475c344e80aeb828c866559aba..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/midas/__init__.py +++ /dev/null @@ -1,52 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. 
- * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala -''' - -# Midas Depth Estimation -# From https://github.com/isl-org/MiDaS -# MIT LICENSE - -import cv2 -import numpy as np -import torch - -from einops import rearrange -from .api import MiDaSInference - - -class MidasDetector: - def __init__(self): - self.model = MiDaSInference(model_type="dpt_hybrid").cuda() - - def __call__(self, input_image, a=np.pi * 2.0, bg_th=0.1): - assert input_image.ndim == 3 - image_depth = input_image - with torch.no_grad(): - image_depth = torch.from_numpy(image_depth).float().cuda() - image_depth = image_depth / 127.5 - 1.0 - image_depth = rearrange(image_depth, 'h w c -> 1 c h w') - depth = self.model(image_depth)[0] - - depth_pt = depth.clone() - depth_pt -= torch.min(depth_pt) - depth_pt /= torch.max(depth_pt) - depth_pt = depth_pt.cpu().numpy() - depth_image = (depth_pt * 255.0).clip(0, 255).astype(np.uint8) - - depth_np = depth.cpu().numpy() - x = cv2.Sobel(depth_np, cv2.CV_32F, 1, 0, ksize=3) - y = cv2.Sobel(depth_np, cv2.CV_32F, 0, 1, ksize=3) - z = np.ones_like(x) * a - x[depth_pt < bg_th] = 0 - y[depth_pt < bg_th] = 0 - normal = np.stack([x, y, z], axis=2) - normal /= np.sum(normal ** 2.0, axis=2, keepdims=True) ** 0.5 - normal_image = (normal * 127.5 + 127.5).clip(0, 255).astype(np.uint8) - - return depth_image, normal_image diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/fileio/handlers/yaml_handler.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/fileio/handlers/yaml_handler.py deleted file mode 100644 index c5aa2eea1e8c76f8baf753d1c8c959dee665e543..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/fileio/handlers/yaml_handler.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import yaml - -try: - from yaml import CLoader as Loader, CDumper as Dumper -except ImportError: - from yaml import Loader, Dumper - -from .base import BaseFileHandler # isort:skip - - -class YamlHandler(BaseFileHandler): - - def load_from_fileobj(self, file, **kwargs): - kwargs.setdefault('Loader', Loader) - return yaml.load(file, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('Dumper', Dumper) - yaml.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('Dumper', Dumper) - return yaml.dump(obj, **kwargs) diff --git a/spaces/Rongjiehuang/ProDiff/modules/ProDiff/task/ProDiff_task.py b/spaces/Rongjiehuang/ProDiff/modules/ProDiff/task/ProDiff_task.py deleted file mode 100644 index 795752e4414a8d28dd12ecf020465f9f299b6d0f..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/ProDiff/modules/ProDiff/task/ProDiff_task.py +++ /dev/null @@ -1,137 +0,0 @@ -import torch -from torch import nn -import utils -from functools import partial -from utils.hparams import hparams -from modules.ProDiff.model.ProDiff import GaussianDiffusion -from usr.diff.net import DiffNet -from tasks.tts.fs2 import FastSpeech2Task -from vocoders.base_vocoder import get_vocoder_cls, BaseVocoder -from utils.pitch_utils import denorm_f0 -from tasks.tts.fs2_utils import FastSpeechDataset -DIFF_DECODERS = { - 'wavenet': lambda hp: DiffNet(hp['audio_num_mel_bins']), -} - - -class ProDiff_Task(FastSpeech2Task): - def __init__(self): - super(ProDiff_Task, self).__init__() - self.dataset_cls = FastSpeechDataset - self.vocoder: BaseVocoder = get_vocoder_cls(hparams)() - - def build_model(self): - self.build_tts_model() - if hparams['load_ckpt'] != '': - self.load_ckpt(hparams['load_ckpt'], strict=False) - utils.num_params(self.model, print_out=True, model_name="Generator: student") - utils.num_params(self.teacher, print_out=True, model_name="Generator: teacher") - if not hasattr(self, 'gen_params'): - self.gen_params = list(self.model.parameters()) - return self.model - - def build_tts_model(self): - mel_bins = hparams['audio_num_mel_bins'] - checkpoint = torch.load(hparams['teacher_ckpt'], map_location='cpu')["state_dict"]['model'] - teacher_timesteps = int(checkpoint['timesteps'].item()) - teacher_timescales = int(checkpoint['timescale'].item()) - student_timesteps = teacher_timesteps // 2 - student_timescales = teacher_timescales * 2 - - self.teacher = GaussianDiffusion( - phone_encoder=self.phone_encoder, - out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams), - loss_type=hparams['diff_loss_type'], - timesteps=teacher_timesteps, time_scale=teacher_timescales, - spec_min=hparams['spec_min'], spec_max=hparams['spec_max'], - ) - self.model = GaussianDiffusion( - phone_encoder=self.phone_encoder, - out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams), - timesteps=student_timesteps, time_scale=student_timescales, - loss_type=hparams['diff_loss_type'], - spec_min=hparams['spec_min'], spec_max=hparams['spec_max'], - ) - - utils.load_ckpt(self.teacher, hparams['teacher_ckpt'], 'model', strict=False) - utils.load_ckpt(self.model, hparams['teacher_ckpt'], 'model', strict=False) - to_torch = partial(torch.tensor, dtype=torch.float32) - self.model.num_timesteps = student_timesteps - self.model.time_scale = student_timescales - self.model.register_buffer('timesteps', to_torch(student_timesteps)) # beta - self.model.register_buffer('timescale', to_torch(student_timescales)) # beta - - for k, v in 
self.model.fs2.named_parameters(): - if not 'denoise_fn' in k: - v.requires_grad = False - - for param in self.teacher.parameters(): - param.requires_grad = False - - - def run_model(self, model, sample, return_output=False, infer=False): - txt_tokens = sample['txt_tokens'] # [B, T_t] - target = sample['mels'] # [B, T_s, 80] - # mel2ph = sample['mel2ph'] if hparams['use_gt_dur'] else None # [B, T_s] - mel2ph = sample['mel2ph'] - f0 = sample['f0'] - uv = sample['uv'] - energy = sample['energy'] - spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids') - output = model(txt_tokens, self.teacher, mel2ph=mel2ph, spk_embed=spk_embed, - ref_mels=target, f0=f0, uv=uv, energy=energy, infer=infer) - - losses = {} - losses['l1'] = output['mel_out'] - self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses) - if hparams['use_pitch_embed']: - self.add_pitch_loss(output, sample, losses) - if hparams['use_energy_embed']: - self.add_energy_loss(output['energy_pred'], energy, losses) - if not return_output: - return losses - else: - return losses, output - - def validation_step(self, sample, batch_idx): - outputs = {} - txt_tokens = sample['txt_tokens'] # [B, T_t] - - energy = sample['energy'] - spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids') - mel2ph = sample['mel2ph'] - f0 = sample['f0'] - uv = sample['uv'] - - outputs['losses'] = {} - outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False) - - outputs['total_loss'] = sum(outputs['losses'].values()) - outputs['nsamples'] = sample['nsamples'] - outputs = utils.tensors_to_scalars(outputs) - if batch_idx < hparams['num_valid_plots']: - # model_out = self.model( - # txt_tokens, spk_embed=spk_embed, mel2ph=None, f0=None, uv=None, energy=None, ref_mels=None, inference=True) - # self.plot_mel(batch_idx, model_out['mel_out'], model_out['fs2_mel'], name=f'diffspeech_vs_fs2_{batch_idx}') - model_out = self.model( - txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, energy=energy, ref_mels=None, infer=True) - gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams) - self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=model_out.get('f0_denorm')) - self.plot_mel(batch_idx, sample['mels'], model_out['mel_out']) - return outputs - - - ############ - # validation plots - ############ - def plot_wav(self, batch_idx, gt_wav, wav_out, is_mel=False, gt_f0=None, f0=None, name=None): - gt_wav = gt_wav[0].cpu().numpy() - wav_out = wav_out[0].cpu().numpy() - gt_f0 = gt_f0[0].cpu().numpy() - f0 = f0[0].cpu().numpy() - if is_mel: - gt_wav = self.vocoder.spec2wav(gt_wav, f0=gt_f0) - wav_out = self.vocoder.spec2wav(wav_out, f0=f0) - self.logger.add_audio(f'gt_{batch_idx}', gt_wav, sample_rate=hparams['audio_sample_rate'], global_step=self.global_step) - self.logger.add_audio(f'wav_{batch_idx}', wav_out, sample_rate=hparams['audio_sample_rate'], global_step=self.global_step) - diff --git a/spaces/Rutakate21/anything-v3.0/app.py b/spaces/Rutakate21/anything-v3.0/app.py deleted file mode 100644 index 16e8131a0bbf7b06956e69e2b7758fa01e4eb51f..0000000000000000000000000000000000000000 --- a/spaces/Rutakate21/anything-v3.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Linaqruf/anything-v3.0").launch() \ No newline at end of file diff --git a/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/models/common.py b/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/models/common.py 
deleted file mode 100644 index 950288a0018315541aefd2f113de20c4eaa49c51..0000000000000000000000000000000000000000 --- a/spaces/Sa-m/Vehicles-Detection-Custom-YoloV7/models/common.py +++ /dev/null @@ -1,2047 +0,0 @@ -import math -from copy import copy -from pathlib import Path - -import numpy as np -import pandas as pd -import requests -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchvision.ops import DeformConv2d -from PIL import Image -from torch.cuda import amp - -from utils.datasets import letterbox -from utils.general import non_max_suppression, make_divisible, scale_coords, increment_path, xyxy2xywh -from utils.plots import color_list, plot_one_box -from utils.torch_utils import time_synchronized - - -##### basic #### - -def autopad(k, p=None): # kernel, padding - # Pad to 'same' - if p is None: - p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad - return p - - -class MP(nn.Module): - def __init__(self, k=2): - super(MP, self).__init__() - self.m = nn.MaxPool2d(kernel_size=k, stride=k) - - def forward(self, x): - return self.m(x) - - -class SP(nn.Module): - def __init__(self, k=3, s=1): - super(SP, self).__init__() - self.m = nn.MaxPool2d(kernel_size=k, stride=s, padding=k // 2) - - def forward(self, x): - return self.m(x) - - -class ReOrg(nn.Module): - def __init__(self): - super(ReOrg, self).__init__() - - def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1) - - -class Merge(nn.Module): - def __init__(self,ch=()): - super(Merge, self).__init__() - - def forward(self, x): - - return [x[0],x[1],x[2]] - - -class Refine(nn.Module): - - def __init__(self, c2, k, s, ch): # ch_in, ch_out, kernel, stride, padding, groups - super(Refine, self).__init__() - self.refine = nn.ModuleList() - for c in ch: - self.refine.append(Conv(c, c2, k, s)) - - def forward(self, x): - for i, f in enumerate(x): - if i == 0: - r = self.refine[i](f) - else: - r_p = self.refine[i](f) - r_p = F.interpolate(r_p, r.size()[2:], mode="bilinear", align_corners=False) - r = r + r_p - return r - - -class Concat(nn.Module): - def __init__(self, dimension=1): - super(Concat, self).__init__() - self.d = dimension - - def forward(self, x): - return torch.cat(x, self.d) - - -class Chuncat(nn.Module): - def __init__(self, dimension=1): - super(Chuncat, self).__init__() - self.d = dimension - - def forward(self, x): - x1 = [] - x2 = [] - for xi in x: - xi1, xi2 = xi.chunk(2, self.d) - x1.append(xi1) - x2.append(xi2) - return torch.cat(x1+x2, self.d) - - -class Shortcut(nn.Module): - def __init__(self, dimension=0): - super(Shortcut, self).__init__() - self.d = dimension - - def forward(self, x): - return x[0]+x[1] - - -class Foldcut(nn.Module): - def __init__(self, dimension=0): - super(Foldcut, self).__init__() - self.d = dimension - - def forward(self, x): - x1, x2 = x.chunk(2, self.d) - return x1+x2 - - -class Conv(nn.Module): - # Standard convolution - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super(Conv, self).__init__() - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity()) - - def forward(self, x): - return self.act(self.bn(self.conv(x))) - - def fuseforward(self, x): - return self.act(self.conv(x)) - - -class RobustConv(nn.Module): - # Robust convolution 
(use high kernel size 7-11 for: downsampling and other layers). Train for 300 - 450 epochs. - def __init__(self, c1, c2, k=7, s=1, p=None, g=1, act=True, layer_scale_init_value=1e-6): # ch_in, ch_out, kernel, stride, padding, groups - super(RobustConv, self).__init__() - self.conv_dw = Conv(c1, c1, k=k, s=s, p=p, g=c1, act=act) - self.conv1x1 = nn.Conv2d(c1, c2, 1, 1, 0, groups=1, bias=True) - self.gamma = nn.Parameter(layer_scale_init_value * torch.ones(c2)) if layer_scale_init_value > 0 else None - - def forward(self, x): - x = x.to(memory_format=torch.channels_last) - x = self.conv1x1(self.conv_dw(x)) - if self.gamma is not None: - x = x.mul(self.gamma.reshape(1, -1, 1, 1)) - return x - - -class RobustConv2(nn.Module): - # Robust convolution 2 (use [32, 5, 2] or [32, 7, 4] or [32, 11, 8] for one of the paths in CSP). - def __init__(self, c1, c2, k=7, s=4, p=None, g=1, act=True, layer_scale_init_value=1e-6): # ch_in, ch_out, kernel, stride, padding, groups - super(RobustConv2, self).__init__() - self.conv_strided = Conv(c1, c1, k=k, s=s, p=p, g=c1, act=act) - self.conv_deconv = nn.ConvTranspose2d(in_channels=c1, out_channels=c2, kernel_size=s, stride=s, - padding=0, bias=True, dilation=1, groups=1 - ) - self.gamma = nn.Parameter(layer_scale_init_value * torch.ones(c2)) if layer_scale_init_value > 0 else None - - def forward(self, x): - x = self.conv_deconv(self.conv_strided(x)) - if self.gamma is not None: - x = x.mul(self.gamma.reshape(1, -1, 1, 1)) - return x - - -def DWConv(c1, c2, k=1, s=1, act=True): - # Depthwise convolution - return Conv(c1, c2, k, s, g=math.gcd(c1, c2), act=act) - - -class GhostConv(nn.Module): - # Ghost Convolution https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups - super(GhostConv, self).__init__() - c_ = c2 // 2 # hidden channels - self.cv1 = Conv(c1, c_, k, s, None, g, act) - self.cv2 = Conv(c_, c_, 5, 1, None, c_, act) - - def forward(self, x): - y = self.cv1(x) - return torch.cat([y, self.cv2(y)], 1) - - -class Stem(nn.Module): - # Stem - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super(Stem, self).__init__() - c_ = int(c2/2) # hidden channels - self.cv1 = Conv(c1, c_, 3, 2) - self.cv2 = Conv(c_, c_, 1, 1) - self.cv3 = Conv(c_, c_, 3, 2) - self.pool = torch.nn.MaxPool2d(2, stride=2) - self.cv4 = Conv(2 * c_, c2, 1, 1) - - def forward(self, x): - x = self.cv1(x) - return self.cv4(torch.cat((self.cv3(self.cv2(x)), self.pool(x)), dim=1)) - - -class DownC(nn.Module): - # Spatial pyramid pooling layer used in YOLOv3-SPP - def __init__(self, c1, c2, n=1, k=2): - super(DownC, self).__init__() - c_ = int(c1) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2//2, 3, k) - self.cv3 = Conv(c1, c2//2, 1, 1) - self.mp = nn.MaxPool2d(kernel_size=k, stride=k) - - def forward(self, x): - return torch.cat((self.cv2(self.cv1(x)), self.cv3(self.mp(x))), dim=1) - - -class SPP(nn.Module): - # Spatial pyramid pooling layer used in YOLOv3-SPP - def __init__(self, c1, c2, k=(5, 9, 13)): - super(SPP, self).__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1) - self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) - - def forward(self, x): - x = self.cv1(x) - return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1)) - - -class Bottleneck(nn.Module): - # Darknet bottleneck - def __init__(self, c1, c2, 
shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super(Bottleneck, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2, 3, 1, g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class Res(nn.Module): - # ResNet bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super(Res, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c_, 3, 1, g=g) - self.cv3 = Conv(c_, c2, 1, 1) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv3(self.cv2(self.cv1(x))) if self.add else self.cv3(self.cv2(self.cv1(x))) - - -class ResX(Res): - # ResNet bottleneck - def __init__(self, c1, c2, shortcut=True, g=32, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__(c1, c2, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - - -class Ghost(nn.Module): - # Ghost Bottleneck https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride - super(Ghost, self).__init__() - c_ = c2 // 2 - self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw - DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw - GhostConv(c_, c2, 1, 1, act=False)) # pw-linear - self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), - Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity() - - def forward(self, x): - return self.conv(x) + self.shortcut(x) - -##### end of basic ##### - - -##### cspnet ##### - -class SPPCSPC(nn.Module): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)): - super(SPPCSPC, self).__init__() - c_ = int(2 * c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(c_, c_, 3, 1) - self.cv4 = Conv(c_, c_, 1, 1) - self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) - self.cv5 = Conv(4 * c_, c_, 1, 1) - self.cv6 = Conv(c_, c_, 3, 1) - self.cv7 = Conv(2 * c_, c2, 1, 1) - - def forward(self, x): - x1 = self.cv4(self.cv3(self.cv1(x))) - y1 = self.cv6(self.cv5(torch.cat([x1] + [m(x1) for m in self.m], 1))) - y2 = self.cv2(x) - return self.cv7(torch.cat((y1, y2), dim=1)) - -class GhostSPPCSPC(SPPCSPC): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)): - super().__init__(c1, c2, n, shortcut, g, e, k) - c_ = int(2 * c2 * e) # hidden channels - self.cv1 = GhostConv(c1, c_, 1, 1) - self.cv2 = GhostConv(c1, c_, 1, 1) - self.cv3 = GhostConv(c_, c_, 3, 1) - self.cv4 = GhostConv(c_, c_, 1, 1) - self.cv5 = GhostConv(4 * c_, c_, 1, 1) - self.cv6 = GhostConv(c_, c_, 3, 1) - self.cv7 = GhostConv(2 * c_, c2, 1, 1) - - -class GhostStem(Stem): - # Stem - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__(c1, c2, k, s, p, g, act) - c_ = int(c2/2) # hidden channels - self.cv1 = GhostConv(c1, c_, 3, 2) - self.cv2 = GhostConv(c_, c_, 1, 1) - self.cv3 = GhostConv(c_, c_, 3, 2) - self.cv4 = GhostConv(2 * c_, c2, 1, 1) - - -class BottleneckCSPA(nn.Module): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, 
shortcut, groups, expansion - super(BottleneckCSPA, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.m(self.cv1(x)) - y2 = self.cv2(x) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class BottleneckCSPB(nn.Module): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(BottleneckCSPB, self).__init__() - c_ = int(c2) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - x1 = self.cv1(x) - y1 = self.m(x1) - y2 = self.cv2(x1) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class BottleneckCSPC(nn.Module): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(BottleneckCSPC, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(c_, c_, 1, 1) - self.cv4 = Conv(2 * c_, c2, 1, 1) - self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(torch.cat((y1, y2), dim=1)) - - -class ResCSPA(BottleneckCSPA): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class ResCSPB(BottleneckCSPB): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class ResCSPC(BottleneckCSPC): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class ResXCSPA(ResCSPA): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class ResXCSPB(ResCSPB): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class ResXCSPC(ResCSPC): 
- # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class GhostCSPA(BottleneckCSPA): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Ghost(c_, c_) for _ in range(n)]) - - -class GhostCSPB(BottleneckCSPB): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[Ghost(c_, c_) for _ in range(n)]) - - -class GhostCSPC(BottleneckCSPC): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Ghost(c_, c_) for _ in range(n)]) - -##### end of cspnet ##### - - -##### yolor ##### - -class ImplicitA(nn.Module): - def __init__(self, channel, mean=0., std=.02): - super(ImplicitA, self).__init__() - self.channel = channel - self.mean = mean - self.std = std - self.implicit = nn.Parameter(torch.zeros(1, channel, 1, 1)) - nn.init.normal_(self.implicit, mean=self.mean, std=self.std) - - def forward(self, x): - return self.implicit + x - - -class ImplicitM(nn.Module): - def __init__(self, channel, mean=0., std=.02): - super(ImplicitM, self).__init__() - self.channel = channel - self.mean = mean - self.std = std - self.implicit = nn.Parameter(torch.ones(1, channel, 1, 1)) - nn.init.normal_(self.implicit, mean=self.mean, std=self.std) - - def forward(self, x): - return self.implicit * x - -##### end of yolor ##### - - -##### repvgg ##### - -class RepConv(nn.Module): - # Represented convolution - # https://arxiv.org/abs/2101.03697 - - def __init__(self, c1, c2, k=3, s=1, p=None, g=1, act=True, deploy=False): - super(RepConv, self).__init__() - - self.deploy = deploy - self.groups = g - self.in_channels = c1 - self.out_channels = c2 - - assert k == 3 - assert autopad(k, p) == 1 - - padding_11 = autopad(k, p) - k // 2 - - self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity()) - - if deploy: - self.rbr_reparam = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=True) - - else: - self.rbr_identity = (nn.BatchNorm2d(num_features=c1) if c2 == c1 and s == 1 else None) - - self.rbr_dense = nn.Sequential( - nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False), - nn.BatchNorm2d(num_features=c2), - ) - - self.rbr_1x1 = nn.Sequential( - nn.Conv2d( c1, c2, 1, s, padding_11, groups=g, bias=False), - nn.BatchNorm2d(num_features=c2), - ) - - def forward(self, inputs): - if hasattr(self, "rbr_reparam"): - return self.act(self.rbr_reparam(inputs)) - - if self.rbr_identity is None: - id_out = 0 - else: - id_out = self.rbr_identity(inputs) - - return self.act(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out) - - def get_equivalent_kernel_bias(self): - kernel3x3, bias3x3 = 
self._fuse_bn_tensor(self.rbr_dense) - kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1) - kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity) - return ( - kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, - bias3x3 + bias1x1 + biasid, - ) - - def _pad_1x1_to_3x3_tensor(self, kernel1x1): - if kernel1x1 is None: - return 0 - else: - return nn.functional.pad(kernel1x1, [1, 1, 1, 1]) - - def _fuse_bn_tensor(self, branch): - if branch is None: - return 0, 0 - if isinstance(branch, nn.Sequential): - kernel = branch[0].weight - running_mean = branch[1].running_mean - running_var = branch[1].running_var - gamma = branch[1].weight - beta = branch[1].bias - eps = branch[1].eps - else: - assert isinstance(branch, nn.BatchNorm2d) - if not hasattr(self, "id_tensor"): - input_dim = self.in_channels // self.groups - kernel_value = np.zeros( - (self.in_channels, input_dim, 3, 3), dtype=np.float32 - ) - for i in range(self.in_channels): - kernel_value[i, i % input_dim, 1, 1] = 1 - self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device) - kernel = self.id_tensor - running_mean = branch.running_mean - running_var = branch.running_var - gamma = branch.weight - beta = branch.bias - eps = branch.eps - std = (running_var + eps).sqrt() - t = (gamma / std).reshape(-1, 1, 1, 1) - return kernel * t, beta - running_mean * gamma / std - - def repvgg_convert(self): - kernel, bias = self.get_equivalent_kernel_bias() - return ( - kernel.detach().cpu().numpy(), - bias.detach().cpu().numpy(), - ) - - def fuse_conv_bn(self, conv, bn): - - std = (bn.running_var + bn.eps).sqrt() - bias = bn.bias - bn.running_mean * bn.weight / std - - t = (bn.weight / std).reshape(-1, 1, 1, 1) - weights = conv.weight * t - - bn = nn.Identity() - conv = nn.Conv2d(in_channels = conv.in_channels, - out_channels = conv.out_channels, - kernel_size = conv.kernel_size, - stride=conv.stride, - padding = conv.padding, - dilation = conv.dilation, - groups = conv.groups, - bias = True, - padding_mode = conv.padding_mode) - - conv.weight = torch.nn.Parameter(weights) - conv.bias = torch.nn.Parameter(bias) - return conv - - def fuse_repvgg_block(self): - if self.deploy: - return - print(f"RepConv.fuse_repvgg_block") - - self.rbr_dense = self.fuse_conv_bn(self.rbr_dense[0], self.rbr_dense[1]) - - self.rbr_1x1 = self.fuse_conv_bn(self.rbr_1x1[0], self.rbr_1x1[1]) - rbr_1x1_bias = self.rbr_1x1.bias - weight_1x1_expanded = torch.nn.functional.pad(self.rbr_1x1.weight, [1, 1, 1, 1]) - - # Fuse self.rbr_identity - if (isinstance(self.rbr_identity, nn.BatchNorm2d) or isinstance(self.rbr_identity, nn.modules.batchnorm.SyncBatchNorm)): - # print(f"fuse: rbr_identity == BatchNorm2d or SyncBatchNorm") - identity_conv_1x1 = nn.Conv2d( - in_channels=self.in_channels, - out_channels=self.out_channels, - kernel_size=1, - stride=1, - padding=0, - groups=self.groups, - bias=False) - identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.to(self.rbr_1x1.weight.data.device) - identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.squeeze().squeeze() - # print(f" identity_conv_1x1.weight = {identity_conv_1x1.weight.shape}") - identity_conv_1x1.weight.data.fill_(0.0) - identity_conv_1x1.weight.data.fill_diagonal_(1.0) - identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.unsqueeze(2).unsqueeze(3) - # print(f" identity_conv_1x1.weight = {identity_conv_1x1.weight.shape}") - - identity_conv_1x1 = self.fuse_conv_bn(identity_conv_1x1, self.rbr_identity) - bias_identity_expanded = identity_conv_1x1.bias - 
weight_identity_expanded = torch.nn.functional.pad(identity_conv_1x1.weight, [1, 1, 1, 1]) - else: - # print(f"fuse: rbr_identity != BatchNorm2d, rbr_identity = {self.rbr_identity}") - bias_identity_expanded = torch.nn.Parameter( torch.zeros_like(rbr_1x1_bias) ) - weight_identity_expanded = torch.nn.Parameter( torch.zeros_like(weight_1x1_expanded) ) - - - #print(f"self.rbr_1x1.weight = {self.rbr_1x1.weight.shape}, ") - #print(f"weight_1x1_expanded = {weight_1x1_expanded.shape}, ") - #print(f"self.rbr_dense.weight = {self.rbr_dense.weight.shape}, ") - - self.rbr_dense.weight = torch.nn.Parameter(self.rbr_dense.weight + weight_1x1_expanded + weight_identity_expanded) - self.rbr_dense.bias = torch.nn.Parameter(self.rbr_dense.bias + rbr_1x1_bias + bias_identity_expanded) - - self.rbr_reparam = self.rbr_dense - self.deploy = True - - if self.rbr_identity is not None: - del self.rbr_identity - self.rbr_identity = None - - if self.rbr_1x1 is not None: - del self.rbr_1x1 - self.rbr_1x1 = None - - if self.rbr_dense is not None: - del self.rbr_dense - self.rbr_dense = None - - -class RepBottleneck(Bottleneck): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__(c1, c2, shortcut=True, g=1, e=0.5) - c_ = int(c2 * e) # hidden channels - self.cv2 = RepConv(c_, c2, 3, 1, g=g) - - -class RepBottleneckCSPA(BottleneckCSPA): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class RepBottleneckCSPB(BottleneckCSPB): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class RepBottleneckCSPC(BottleneckCSPC): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class RepRes(Res): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__(c1, c2, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.cv2 = RepConv(c_, c_, 3, 1, g=g) - - -class RepResCSPA(ResCSPA): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepRes(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResCSPB(ResCSPB): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels 
- self.m = nn.Sequential(*[RepRes(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResCSPC(ResCSPC): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepRes(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResX(ResX): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=32, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__(c1, c2, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.cv2 = RepConv(c_, c_, 3, 1, g=g) - - -class RepResXCSPA(ResXCSPA): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepResX(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResXCSPB(ResXCSPB): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[RepResX(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResXCSPC(ResXCSPC): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepResX(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - -##### end of repvgg ##### - - -##### transformer ##### - -class TransformerLayer(nn.Module): - # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance) - def __init__(self, c, num_heads): - super().__init__() - self.q = nn.Linear(c, c, bias=False) - self.k = nn.Linear(c, c, bias=False) - self.v = nn.Linear(c, c, bias=False) - self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads) - self.fc1 = nn.Linear(c, c, bias=False) - self.fc2 = nn.Linear(c, c, bias=False) - - def forward(self, x): - x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x - x = self.fc2(self.fc1(x)) + x - return x - - -class TransformerBlock(nn.Module): - # Vision Transformer https://arxiv.org/abs/2010.11929 - def __init__(self, c1, c2, num_heads, num_layers): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - self.linear = nn.Linear(c2, c2) # learnable position embedding - self.tr = nn.Sequential(*[TransformerLayer(c2, num_heads) for _ in range(num_layers)]) - self.c2 = c2 - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - b, _, w, h = x.shape - p = x.flatten(2) - p = p.unsqueeze(0) - p = p.transpose(0, 3) - p = p.squeeze(3) - e = self.linear(p) - x = p + e - - x = self.tr(x) - x = x.unsqueeze(3) - x = x.transpose(0, 3) - x = x.reshape(b, self.c2, w, h) - return x - -##### end of transformer ##### - - -##### yolov5 ##### - -class Focus(nn.Module): - # Focus wh information into c-space - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super(Focus, self).__init__() - 
self.conv = Conv(c1 * 4, c2, k, s, p, g, act) - # self.contract = Contract(gain=2) - - def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1)) - # return self.conv(self.contract(x)) - - -class SPPF(nn.Module): - # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher - def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13)) - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * 4, c2, 1, 1) - self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) - - def forward(self, x): - x = self.cv1(x) - y1 = self.m(x) - y2 = self.m(y1) - return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1)) - - -class Contract(nn.Module): - # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - N, C, H, W = x.size() # assert (H / s == 0) and (W / s == 0), 'Indivisible gain' - s = self.gain - x = x.view(N, C, H // s, s, W // s, s) # x(1,64,40,2,40,2) - x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40) - return x.view(N, C * s * s, H // s, W // s) # x(1,256,40,40) - - -class Expand(nn.Module): - # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - N, C, H, W = x.size() # assert C / s ** 2 == 0, 'Indivisible gain' - s = self.gain - x = x.view(N, s, s, C // s ** 2, H, W) # x(1,2,2,16,80,80) - x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2) - return x.view(N, C // s ** 2, H * s, W * s) # x(1,16,160,160) - - -class NMS(nn.Module): - # Non-Maximum Suppression (NMS) module - conf = 0.25 # confidence threshold - iou = 0.45 # IoU threshold - classes = None # (optional list) filter by class - - def __init__(self): - super(NMS, self).__init__() - - def forward(self, x): - return non_max_suppression(x[0], conf_thres=self.conf, iou_thres=self.iou, classes=self.classes) - - -class autoShape(nn.Module): - # input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS - conf = 0.25 # NMS confidence threshold - iou = 0.45 # NMS IoU threshold - classes = None # (optional list) filter by class - - def __init__(self, model): - super(autoShape, self).__init__() - self.model = model.eval() - - def autoshape(self): - print('autoShape already enabled, skipping... ') # model already converted to model.autoshape() - return self - - @torch.no_grad() - def forward(self, imgs, size=640, augment=False, profile=False): - # Inference from various sources. For height=640, width=1280, RGB images example inputs are: - # filename: imgs = 'data/samples/zidane.jpg' - # URI: = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/zidane.jpg' - # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3) - # PIL: = Image.open('image.jpg') # HWC x(640,1280,3) - # numpy: = np.zeros((640,1280,3)) # HWC - # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values) - # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] 
# list of images - - t = [time_synchronized()] - p = next(self.model.parameters()) # for device and type - if isinstance(imgs, torch.Tensor): # torch - with amp.autocast(enabled=p.device.type != 'cpu'): - return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference - - # Pre-process - n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs]) # number of images, list of images - shape0, shape1, files = [], [], [] # image and inference shapes, filenames - for i, im in enumerate(imgs): - f = f'image{i}' # filename - if isinstance(im, str): # filename or uri - im, f = np.asarray(Image.open(requests.get(im, stream=True).raw if im.startswith('http') else im)), im - elif isinstance(im, Image.Image): # PIL Image - im, f = np.asarray(im), getattr(im, 'filename', f) or f - files.append(Path(f).with_suffix('.jpg').name) - if im.shape[0] < 5: # image in CHW - im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1) - im = im[:, :, :3] if im.ndim == 3 else np.tile(im[:, :, None], 3) # enforce 3ch input - s = im.shape[:2] # HWC - shape0.append(s) # image shape - g = (size / max(s)) # gain - shape1.append([y * g for y in s]) - imgs[i] = im # update - shape1 = [make_divisible(x, int(self.stride.max())) for x in np.stack(shape1, 0).max(0)] # inference shape - x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs] # pad - x = np.stack(x, 0) if n > 1 else x[0][None] # stack - x = np.ascontiguousarray(x.transpose((0, 3, 1, 2))) # BHWC to BCHW - x = torch.from_numpy(x).to(p.device).type_as(p) / 255. # uint8 to fp16/32 - t.append(time_synchronized()) - - with amp.autocast(enabled=p.device.type != 'cpu'): - # Inference - y = self.model(x, augment, profile)[0] # forward - t.append(time_synchronized()) - - # Post-process - y = non_max_suppression(y, conf_thres=self.conf, iou_thres=self.iou, classes=self.classes) # NMS - for i in range(n): - scale_coords(shape1, y[i][:, :4], shape0[i]) - - t.append(time_synchronized()) - return Detections(imgs, y, files, t, self.names, x.shape) - - -class Detections: - # detections class for YOLOv5 inference results - def __init__(self, imgs, pred, files, times=None, names=None, shape=None): - super(Detections, self).__init__() - d = pred[0].device # device - gn = [torch.tensor([*[im.shape[i] for i in [1, 0, 1, 0]], 1., 1.], device=d) for im in imgs] # normalizations - self.imgs = imgs # list of images as numpy arrays - self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls) - self.names = names # class names - self.files = files # image filenames - self.xyxy = pred # xyxy pixels - self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels - self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized - self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized - self.n = len(self.pred) # number of images (batch size) - self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3)) # timestamps (ms) - self.s = shape # inference BCHW shape - - def display(self, pprint=False, show=False, save=False, render=False, save_dir=''): - colors = color_list() - for i, (img, pred) in enumerate(zip(self.imgs, self.pred)): - str = f'image {i + 1}/{len(self.pred)}: {img.shape[0]}x{img.shape[1]} ' - if pred is not None: - for c in pred[:, -1].unique(): - n = (pred[:, -1] == c).sum() # detections per class - str += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string - if show or save or render: - for *box, conf, cls in pred: # xyxy, confidence, class - label = f'{self.names[int(cls)]} {conf:.2f}' - 
plot_one_box(box, img, label=label, color=colors[int(cls) % 10]) - img = Image.fromarray(img.astype(np.uint8)) if isinstance(img, np.ndarray) else img # from np - if pprint: - print(str.rstrip(', ')) - if show: - img.show(self.files[i]) # show - if save: - f = self.files[i] - img.save(Path(save_dir) / f) # save - print(f"{'Saved' * (i == 0)} {f}", end=',' if i < self.n - 1 else f' to {save_dir}\n') - if render: - self.imgs[i] = np.asarray(img) - - def print(self): - self.display(pprint=True) # print results - print(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' % self.t) - - def show(self): - self.display(show=True) # show results - - def save(self, save_dir='runs/hub/exp'): - save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/hub/exp') # increment save_dir - Path(save_dir).mkdir(parents=True, exist_ok=True) - self.display(save=True, save_dir=save_dir) # save results - - def render(self): - self.display(render=True) # render results - return self.imgs - - def pandas(self): - # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0]) - new = copy(self) # return copy - ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns - cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns - for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]): - a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update - setattr(new, k, [pd.DataFrame(x, columns=c) for x in a]) - return new - - def tolist(self): - # return a list of Detections objects, i.e. 'for result in results.tolist():' - x = [Detections([self.imgs[i]], [self.pred[i]], self.names, self.s) for i in range(self.n)] - for d in x: - for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']: - setattr(d, k, getattr(d, k)[0]) # pop out of list - return x - - def __len__(self): - return self.n - - -class Classify(nn.Module): - # Classification head, i.e. 
x(b,c1,20,20) to x(b,c2) - def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups - super(Classify, self).__init__() - self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1) - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g) # to x(b,c2,1,1) - self.flat = nn.Flatten() - - def forward(self, x): - z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list - return self.flat(self.conv(z)) # flatten to x(b,c2) - -##### end of yolov5 ###### - - -##### orepa ##### - -def transI_fusebn(kernel, bn): - gamma = bn.weight - std = (bn.running_var + bn.eps).sqrt() - return kernel * ((gamma / std).reshape(-1, 1, 1, 1)), bn.bias - bn.running_mean * gamma / std - - -class ConvBN(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, - stride=1, padding=0, dilation=1, groups=1, deploy=False, nonlinear=None): - super().__init__() - if nonlinear is None: - self.nonlinear = nn.Identity() - else: - self.nonlinear = nonlinear - if deploy: - self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, - stride=stride, padding=padding, dilation=dilation, groups=groups, bias=True) - else: - self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, - stride=stride, padding=padding, dilation=dilation, groups=groups, bias=False) - self.bn = nn.BatchNorm2d(num_features=out_channels) - - def forward(self, x): - if hasattr(self, 'bn'): - return self.nonlinear(self.bn(self.conv(x))) - else: - return self.nonlinear(self.conv(x)) - - def switch_to_deploy(self): - kernel, bias = transI_fusebn(self.conv.weight, self.bn) - conv = nn.Conv2d(in_channels=self.conv.in_channels, out_channels=self.conv.out_channels, kernel_size=self.conv.kernel_size, - stride=self.conv.stride, padding=self.conv.padding, dilation=self.conv.dilation, groups=self.conv.groups, bias=True) - conv.weight.data = kernel - conv.bias.data = bias - for para in self.parameters(): - para.detach_() - self.__delattr__('conv') - self.__delattr__('bn') - self.conv = conv - -class OREPA_3x3_RepConv(nn.Module): - - def __init__(self, in_channels, out_channels, kernel_size, - stride=1, padding=0, dilation=1, groups=1, - internal_channels_1x1_3x3=None, - deploy=False, nonlinear=None, single_init=False): - super(OREPA_3x3_RepConv, self).__init__() - self.deploy = deploy - - if nonlinear is None: - self.nonlinear = nn.Identity() - else: - self.nonlinear = nonlinear - - self.kernel_size = kernel_size - self.in_channels = in_channels - self.out_channels = out_channels - self.groups = groups - assert padding == kernel_size // 2 - - self.stride = stride - self.padding = padding - self.dilation = dilation - - self.branch_counter = 0 - - self.weight_rbr_origin = nn.Parameter(torch.Tensor(out_channels, int(in_channels/self.groups), kernel_size, kernel_size)) - nn.init.kaiming_uniform_(self.weight_rbr_origin, a=math.sqrt(1.0)) - self.branch_counter += 1 - - - if groups < out_channels: - self.weight_rbr_avg_conv = nn.Parameter(torch.Tensor(out_channels, int(in_channels/self.groups), 1, 1)) - self.weight_rbr_pfir_conv = nn.Parameter(torch.Tensor(out_channels, int(in_channels/self.groups), 1, 1)) - nn.init.kaiming_uniform_(self.weight_rbr_avg_conv, a=1.0) - nn.init.kaiming_uniform_(self.weight_rbr_pfir_conv, a=1.0) - self.weight_rbr_avg_conv.data - self.weight_rbr_pfir_conv.data - self.register_buffer('weight_rbr_avg_avg', torch.ones(kernel_size, kernel_size).mul(1.0/kernel_size/kernel_size)) - 
self.branch_counter += 1 - - else: - raise NotImplementedError - self.branch_counter += 1 - - if internal_channels_1x1_3x3 is None: - internal_channels_1x1_3x3 = in_channels if groups < out_channels else 2 * in_channels # For mobilenet, it is better to have 2X internal channels - - if internal_channels_1x1_3x3 == in_channels: - self.weight_rbr_1x1_kxk_idconv1 = nn.Parameter(torch.zeros(in_channels, int(in_channels/self.groups), 1, 1)) - id_value = np.zeros((in_channels, int(in_channels/self.groups), 1, 1)) - for i in range(in_channels): - id_value[i, i % int(in_channels/self.groups), 0, 0] = 1 - id_tensor = torch.from_numpy(id_value).type_as(self.weight_rbr_1x1_kxk_idconv1) - self.register_buffer('id_tensor', id_tensor) - - else: - self.weight_rbr_1x1_kxk_conv1 = nn.Parameter(torch.Tensor(internal_channels_1x1_3x3, int(in_channels/self.groups), 1, 1)) - nn.init.kaiming_uniform_(self.weight_rbr_1x1_kxk_conv1, a=math.sqrt(1.0)) - self.weight_rbr_1x1_kxk_conv2 = nn.Parameter(torch.Tensor(out_channels, int(internal_channels_1x1_3x3/self.groups), kernel_size, kernel_size)) - nn.init.kaiming_uniform_(self.weight_rbr_1x1_kxk_conv2, a=math.sqrt(1.0)) - self.branch_counter += 1 - - expand_ratio = 8 - self.weight_rbr_gconv_dw = nn.Parameter(torch.Tensor(in_channels*expand_ratio, 1, kernel_size, kernel_size)) - self.weight_rbr_gconv_pw = nn.Parameter(torch.Tensor(out_channels, in_channels*expand_ratio, 1, 1)) - nn.init.kaiming_uniform_(self.weight_rbr_gconv_dw, a=math.sqrt(1.0)) - nn.init.kaiming_uniform_(self.weight_rbr_gconv_pw, a=math.sqrt(1.0)) - self.branch_counter += 1 - - if out_channels == in_channels and stride == 1: - self.branch_counter += 1 - - self.vector = nn.Parameter(torch.Tensor(self.branch_counter, self.out_channels)) - self.bn = nn.BatchNorm2d(out_channels) - - self.fre_init() - - nn.init.constant_(self.vector[0, :], 0.25) #origin - nn.init.constant_(self.vector[1, :], 0.25) #avg - nn.init.constant_(self.vector[2, :], 0.0) #prior - nn.init.constant_(self.vector[3, :], 0.5) #1x1_kxk - nn.init.constant_(self.vector[4, :], 0.5) #dws_conv - - - def fre_init(self): - prior_tensor = torch.Tensor(self.out_channels, self.kernel_size, self.kernel_size) - half_fg = self.out_channels/2 - for i in range(self.out_channels): - for h in range(3): - for w in range(3): - if i < half_fg: - prior_tensor[i, h, w] = math.cos(math.pi*(h+0.5)*(i+1)/3) - else: - prior_tensor[i, h, w] = math.cos(math.pi*(w+0.5)*(i+1-half_fg)/3) - - self.register_buffer('weight_rbr_prior', prior_tensor) - - def weight_gen(self): - - weight_rbr_origin = torch.einsum('oihw,o->oihw', self.weight_rbr_origin, self.vector[0, :]) - - weight_rbr_avg = torch.einsum('oihw,o->oihw', torch.einsum('oihw,hw->oihw', self.weight_rbr_avg_conv, self.weight_rbr_avg_avg), self.vector[1, :]) - - weight_rbr_pfir = torch.einsum('oihw,o->oihw', torch.einsum('oihw,ohw->oihw', self.weight_rbr_pfir_conv, self.weight_rbr_prior), self.vector[2, :]) - - weight_rbr_1x1_kxk_conv1 = None - if hasattr(self, 'weight_rbr_1x1_kxk_idconv1'): - weight_rbr_1x1_kxk_conv1 = (self.weight_rbr_1x1_kxk_idconv1 + self.id_tensor).squeeze() - elif hasattr(self, 'weight_rbr_1x1_kxk_conv1'): - weight_rbr_1x1_kxk_conv1 = self.weight_rbr_1x1_kxk_conv1.squeeze() - else: - raise NotImplementedError - weight_rbr_1x1_kxk_conv2 = self.weight_rbr_1x1_kxk_conv2 - - if self.groups > 1: - g = self.groups - t, ig = weight_rbr_1x1_kxk_conv1.size() - o, tg, h, w = weight_rbr_1x1_kxk_conv2.size() - weight_rbr_1x1_kxk_conv1 = weight_rbr_1x1_kxk_conv1.view(g, int(t/g), ig) - 
weight_rbr_1x1_kxk_conv2 = weight_rbr_1x1_kxk_conv2.view(g, int(o/g), tg, h, w) - weight_rbr_1x1_kxk = torch.einsum('gti,gothw->goihw', weight_rbr_1x1_kxk_conv1, weight_rbr_1x1_kxk_conv2).view(o, ig, h, w) - else: - weight_rbr_1x1_kxk = torch.einsum('ti,othw->oihw', weight_rbr_1x1_kxk_conv1, weight_rbr_1x1_kxk_conv2) - - weight_rbr_1x1_kxk = torch.einsum('oihw,o->oihw', weight_rbr_1x1_kxk, self.vector[3, :]) - - weight_rbr_gconv = self.dwsc2full(self.weight_rbr_gconv_dw, self.weight_rbr_gconv_pw, self.in_channels) - weight_rbr_gconv = torch.einsum('oihw,o->oihw', weight_rbr_gconv, self.vector[4, :]) - - weight = weight_rbr_origin + weight_rbr_avg + weight_rbr_1x1_kxk + weight_rbr_pfir + weight_rbr_gconv - - return weight - - def dwsc2full(self, weight_dw, weight_pw, groups): - - t, ig, h, w = weight_dw.size() - o, _, _, _ = weight_pw.size() - tg = int(t/groups) - i = int(ig*groups) - weight_dw = weight_dw.view(groups, tg, ig, h, w) - weight_pw = weight_pw.squeeze().view(o, groups, tg) - - weight_dsc = torch.einsum('gtihw,ogt->ogihw', weight_dw, weight_pw) - return weight_dsc.view(o, i, h, w) - - def forward(self, inputs): - weight = self.weight_gen() - out = F.conv2d(inputs, weight, bias=None, stride=self.stride, padding=self.padding, dilation=self.dilation, groups=self.groups) - - return self.nonlinear(self.bn(out)) - -class RepConv_OREPA(nn.Module): - - def __init__(self, c1, c2, k=3, s=1, padding=1, dilation=1, groups=1, padding_mode='zeros', deploy=False, use_se=False, nonlinear=nn.SiLU()): - super(RepConv_OREPA, self).__init__() - self.deploy = deploy - self.groups = groups - self.in_channels = c1 - self.out_channels = c2 - - self.padding = padding - self.dilation = dilation - self.groups = groups - - assert k == 3 - assert padding == 1 - - padding_11 = padding - k // 2 - - if nonlinear is None: - self.nonlinearity = nn.Identity() - else: - self.nonlinearity = nonlinear - - if use_se: - self.se = SEBlock(self.out_channels, internal_neurons=self.out_channels // 16) - else: - self.se = nn.Identity() - - if deploy: - self.rbr_reparam = nn.Conv2d(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=k, stride=s, - padding=padding, dilation=dilation, groups=groups, bias=True, padding_mode=padding_mode) - - else: - self.rbr_identity = nn.BatchNorm2d(num_features=self.in_channels) if self.out_channels == self.in_channels and s == 1 else None - self.rbr_dense = OREPA_3x3_RepConv(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=k, stride=s, padding=padding, groups=groups, dilation=1) - self.rbr_1x1 = ConvBN(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=1, stride=s, padding=padding_11, groups=groups, dilation=1) - print('RepVGG Block, identity = ', self.rbr_identity) - - - def forward(self, inputs): - if hasattr(self, 'rbr_reparam'): - return self.nonlinearity(self.se(self.rbr_reparam(inputs))) - - if self.rbr_identity is None: - id_out = 0 - else: - id_out = self.rbr_identity(inputs) - - out1 = self.rbr_dense(inputs) - out2 = self.rbr_1x1(inputs) - out3 = id_out - out = out1 + out2 + out3 - - return self.nonlinearity(self.se(out)) - - - # Optional. This improves the accuracy and facilitates quantization. - # 1. Cancel the original weight decay on rbr_dense.conv.weight and rbr_1x1.conv.weight. - # 2. Use like this. - # loss = criterion(....) 
- # for every RepVGGBlock blk: - # loss += weight_decay_coefficient * 0.5 * blk.get_cust_L2() - # optimizer.zero_grad() - # loss.backward() - - # Not used for OREPA - def get_custom_L2(self): - K3 = self.rbr_dense.weight_gen() - K1 = self.rbr_1x1.conv.weight - t3 = (self.rbr_dense.bn.weight / ((self.rbr_dense.bn.running_var + self.rbr_dense.bn.eps).sqrt())).reshape(-1, 1, 1, 1).detach() - t1 = (self.rbr_1x1.bn.weight / ((self.rbr_1x1.bn.running_var + self.rbr_1x1.bn.eps).sqrt())).reshape(-1, 1, 1, 1).detach() - - l2_loss_circle = (K3 ** 2).sum() - (K3[:, :, 1:2, 1:2] ** 2).sum() # The L2 loss of the "circle" of weights in 3x3 kernel. Use regular L2 on them. - eq_kernel = K3[:, :, 1:2, 1:2] * t3 + K1 * t1 # The equivalent resultant central point of 3x3 kernel. - l2_loss_eq_kernel = (eq_kernel ** 2 / (t3 ** 2 + t1 ** 2)).sum() # Normalize for an L2 coefficient comparable to regular L2. - return l2_loss_eq_kernel + l2_loss_circle - - def get_equivalent_kernel_bias(self): - kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense) - kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1) - kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity) - return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid - - def _pad_1x1_to_3x3_tensor(self, kernel1x1): - if kernel1x1 is None: - return 0 - else: - return torch.nn.functional.pad(kernel1x1, [1,1,1,1]) - - def _fuse_bn_tensor(self, branch): - if branch is None: - return 0, 0 - if not isinstance(branch, nn.BatchNorm2d): - if isinstance(branch, OREPA_3x3_RepConv): - kernel = branch.weight_gen() - elif isinstance(branch, ConvBN): - kernel = branch.conv.weight - else: - raise NotImplementedError - running_mean = branch.bn.running_mean - running_var = branch.bn.running_var - gamma = branch.bn.weight - beta = branch.bn.bias - eps = branch.bn.eps - else: - if not hasattr(self, 'id_tensor'): - input_dim = self.in_channels // self.groups - kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32) - for i in range(self.in_channels): - kernel_value[i, i % input_dim, 1, 1] = 1 - self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device) - kernel = self.id_tensor - running_mean = branch.running_mean - running_var = branch.running_var - gamma = branch.weight - beta = branch.bias - eps = branch.eps - std = (running_var + eps).sqrt() - t = (gamma / std).reshape(-1, 1, 1, 1) - return kernel * t, beta - running_mean * gamma / std - - def switch_to_deploy(self): - if hasattr(self, 'rbr_reparam'): - return - print(f"RepConv_OREPA.switch_to_deploy") - kernel, bias = self.get_equivalent_kernel_bias() - self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.in_channels, out_channels=self.rbr_dense.out_channels, - kernel_size=self.rbr_dense.kernel_size, stride=self.rbr_dense.stride, - padding=self.rbr_dense.padding, dilation=self.rbr_dense.dilation, groups=self.rbr_dense.groups, bias=True) - self.rbr_reparam.weight.data = kernel - self.rbr_reparam.bias.data = bias - for para in self.parameters(): - para.detach_() - self.__delattr__('rbr_dense') - self.__delattr__('rbr_1x1') - if hasattr(self, 'rbr_identity'): - self.__delattr__('rbr_identity') - -##### end of orepa ##### - - -##### swin transformer ##### - -class WindowAttention(nn.Module): - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - 
self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - nn.init.normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - # print(attn.dtype, v.dtype) - try: - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - except: - #print(attn.dtype, v.dtype) - x = (attn.half() @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - -class Mlp(nn.Module): - - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.SiLU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - -def window_partition(x, window_size): - - B, H, W, C = x.shape - assert H % window_size == 0, 'feature map h and w can not divide by window size' - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - -def window_reverse(windows, window_size, H, W): - - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // 
window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class SwinTransformerLayer(nn.Module): - - def __init__(self, dim, num_heads, window_size=8, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.SiLU, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - # if min(self.input_resolution) <= self.window_size: - # # if window size is larger than input resolution, we don't partition windows - # self.shift_size = 0 - # self.window_size = min(self.input_resolution) - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=(self.window_size, self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def create_mask(self, H, W): - # calculate attention mask for SW-MSA - img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - return attn_mask - - def forward(self, x): - # reshape x[b c h w] to x[b l c] - _, _, H_, W_ = x.shape - - Padding = False - if min(H_, W_) < self.window_size or H_ % self.window_size!=0 or W_ % self.window_size!=0: - Padding = True - # print(f'img_size {min(H_, W_)} is less than (or not divided by) window_size {self.window_size}, Padding.') - pad_r = (self.window_size - W_ % self.window_size) % self.window_size - pad_b = (self.window_size - H_ % self.window_size) % self.window_size - x = F.pad(x, (0, pad_r, 0, pad_b)) - - # print('2', x.shape) - B, C, H, W = x.shape - L = H * W - x = x.permute(0, 2, 3, 1).contiguous().view(B, L, C) # b, L, c - - # create mask from init to forward - if self.shift_size > 0: - attn_mask = self.create_mask(H, W).to(x.device) - else: - attn_mask = None - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - else: - shifted_x = x - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) 
- shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - x = x.permute(0, 2, 1).contiguous().view(-1, C, H, W) # b c h w - - if Padding: - x = x[:, :, :H_, :W_] # reverse padding - - return x - - -class SwinTransformerBlock(nn.Module): - def __init__(self, c1, c2, num_heads, num_layers, window_size=8): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - - # remove input_resolution - self.blocks = nn.Sequential(*[SwinTransformerLayer(dim=c2, num_heads=num_heads, window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2) for i in range(num_layers)]) - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - x = self.blocks(x) - return x - - -class STCSPA(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(STCSPA, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformerBlock(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.m(self.cv1(x)) - y2 = self.cv2(x) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class STCSPB(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(STCSPB, self).__init__() - c_ = int(c2) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformerBlock(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - x1 = self.cv1(x) - y1 = self.m(x1) - y2 = self.cv2(x1) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class STCSPC(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(STCSPC, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(c_, c_, 1, 1) - self.cv4 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformerBlock(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(torch.cat((y1, y2), dim=1)) - -##### end of swin transformer ##### - - -##### swin transformer v2 ##### - -class WindowAttention_v2(nn.Module): - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, attn_drop=0., proj_drop=0., - pretrained_window_size=[0, 0]): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.pretrained_window_size = pretrained_window_size - self.num_heads = num_heads - - self.logit_scale = nn.Parameter(torch.log(10 * 
torch.ones((num_heads, 1, 1))), requires_grad=True) - - # mlp to generate continuous relative position bias - self.cpb_mlp = nn.Sequential(nn.Linear(2, 512, bias=True), - nn.ReLU(inplace=True), - nn.Linear(512, num_heads, bias=False)) - - # get relative_coords_table - relative_coords_h = torch.arange(-(self.window_size[0] - 1), self.window_size[0], dtype=torch.float32) - relative_coords_w = torch.arange(-(self.window_size[1] - 1), self.window_size[1], dtype=torch.float32) - relative_coords_table = torch.stack( - torch.meshgrid([relative_coords_h, - relative_coords_w])).permute(1, 2, 0).contiguous().unsqueeze(0) # 1, 2*Wh-1, 2*Ww-1, 2 - if pretrained_window_size[0] > 0: - relative_coords_table[:, :, :, 0] /= (pretrained_window_size[0] - 1) - relative_coords_table[:, :, :, 1] /= (pretrained_window_size[1] - 1) - else: - relative_coords_table[:, :, :, 0] /= (self.window_size[0] - 1) - relative_coords_table[:, :, :, 1] /= (self.window_size[1] - 1) - relative_coords_table *= 8 # normalize to -8, 8 - relative_coords_table = torch.sign(relative_coords_table) * torch.log2( - torch.abs(relative_coords_table) + 1.0) / np.log2(8) - - self.register_buffer("relative_coords_table", relative_coords_table) - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=False) - if qkv_bias: - self.q_bias = nn.Parameter(torch.zeros(dim)) - self.v_bias = nn.Parameter(torch.zeros(dim)) - else: - self.q_bias = None - self.v_bias = None - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - - B_, N, C = x.shape - qkv_bias = None - if self.q_bias is not None: - qkv_bias = torch.cat((self.q_bias, torch.zeros_like(self.v_bias, requires_grad=False), self.v_bias)) - qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias) - qkv = qkv.reshape(B_, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - # cosine attention - attn = (F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)) - logit_scale = torch.clamp(self.logit_scale, max=torch.log(torch.tensor(1. 
/ 0.01))).exp() - attn = attn * logit_scale - - relative_position_bias_table = self.cpb_mlp(self.relative_coords_table).view(-1, self.num_heads) - relative_position_bias = relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - relative_position_bias = 16 * torch.sigmoid(relative_position_bias) - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - try: - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - except: - x = (attn.half() @ v).transpose(1, 2).reshape(B_, N, C) - - x = self.proj(x) - x = self.proj_drop(x) - return x - - def extra_repr(self) -> str: - return f'dim={self.dim}, window_size={self.window_size}, ' \ - f'pretrained_window_size={self.pretrained_window_size}, num_heads={self.num_heads}' - - def flops(self, N): - # calculate flops for 1 window with token length of N - flops = 0 - # qkv = self.qkv(x) - flops += N * self.dim * 3 * self.dim - # attn = (q @ k.transpose(-2, -1)) - flops += self.num_heads * N * (self.dim // self.num_heads) * N - # x = (attn @ v) - flops += self.num_heads * N * N * (self.dim // self.num_heads) - # x = self.proj(x) - flops += N * self.dim * self.dim - return flops - -class Mlp_v2(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.SiLU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition_v2(x, window_size): - - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse_v2(windows, window_size, H, W): - - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class SwinTransformerLayer_v2(nn.Module): - - def __init__(self, dim, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.SiLU, norm_layer=nn.LayerNorm, pretrained_window_size=0): - super().__init__() - self.dim = dim - #self.input_resolution = input_resolution - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - #if min(self.input_resolution) <= self.window_size: - # # if window size is larger than input resolution, we don't partition windows - # self.shift_size = 0 - # self.window_size = min(self.input_resolution) - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention_v2( - dim, 
window_size=(self.window_size, self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=drop, - pretrained_window_size=(pretrained_window_size, pretrained_window_size)) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp_v2(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def create_mask(self, H, W): - # calculate attention mask for SW-MSA - img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - return attn_mask - - def forward(self, x): - # reshape x[b c h w] to x[b l c] - _, _, H_, W_ = x.shape - - Padding = False - if min(H_, W_) < self.window_size or H_ % self.window_size!=0 or W_ % self.window_size!=0: - Padding = True - # print(f'img_size {min(H_, W_)} is less than (or not divided by) window_size {self.window_size}, Padding.') - pad_r = (self.window_size - W_ % self.window_size) % self.window_size - pad_b = (self.window_size - H_ % self.window_size) % self.window_size - x = F.pad(x, (0, pad_r, 0, pad_b)) - - # print('2', x.shape) - B, C, H, W = x.shape - L = H * W - x = x.permute(0, 2, 3, 1).contiguous().view(B, L, C) # b, L, c - - # create mask from init to forward - if self.shift_size > 0: - attn_mask = self.create_mask(H, W).to(x.device) - else: - attn_mask = None - - shortcut = x - x = x.view(B, H, W, C) - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - else: - shifted_x = x - - # partition windows - x_windows = window_partition_v2(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse_v2(attn_windows, self.window_size, H, W) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - x = x.view(B, H * W, C) - x = shortcut + self.drop_path(self.norm1(x)) - - # FFN - x = x + self.drop_path(self.norm2(self.mlp(x))) - x = x.permute(0, 2, 1).contiguous().view(-1, C, H, W) # b c h w - - if Padding: - x = x[:, :, :H_, :W_] # reverse padding - - return x - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \ - f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}" - - def flops(self): - flops = 0 - H, W = self.input_resolution - # norm1 - flops += self.dim * H * W - # W-MSA/SW-MSA - 
nW = H * W / self.window_size / self.window_size - flops += nW * self.attn.flops(self.window_size * self.window_size) - # mlp - flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio - # norm2 - flops += self.dim * H * W - return flops - - -class SwinTransformer2Block(nn.Module): - def __init__(self, c1, c2, num_heads, num_layers, window_size=7): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - - # remove input_resolution - self.blocks = nn.Sequential(*[SwinTransformerLayer_v2(dim=c2, num_heads=num_heads, window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2) for i in range(num_layers)]) - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - x = self.blocks(x) - return x - - -class ST2CSPA(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(ST2CSPA, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformer2Block(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.m(self.cv1(x)) - y2 = self.cv2(x) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class ST2CSPB(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(ST2CSPB, self).__init__() - c_ = int(c2) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformer2Block(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - x1 = self.cv1(x) - y1 = self.m(x1) - y2 = self.cv2(x1) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class ST2CSPC(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(ST2CSPC, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(c_, c_, 1, 1) - self.cv4 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformer2Block(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(torch.cat((y1, y2), dim=1)) - -##### end of swin transformer v2 ##### diff --git a/spaces/Sakil/english_audio_transcriptor/README.md b/spaces/Sakil/english_audio_transcriptor/README.md deleted file mode 100644 index 5110bac49ef53b55f3e9197a8058b6db7fa56a26..0000000000000000000000000000000000000000 --- a/spaces/Sakil/english_audio_transcriptor/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: English_audio_transcriptor -emoji: 🐨 -colorFrom: purple -colorTo: pink -sdk: gradio -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/mel_processing.py 
b/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/mel_processing.py deleted file mode 100644 index 817f03756f64caf8cc54329a9325024c8fb9e0c3..0000000000000000000000000000000000000000 --- a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), 
int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/SanchezVFX/dis/app.py b/spaces/SanchezVFX/dis/app.py deleted file mode 100644 index b7e31635d2ef54e0efe486050f55dc919d7be12a..0000000000000000000000000000000000000000 --- a/spaces/SanchezVFX/dis/app.py +++ /dev/null @@ -1,155 +0,0 @@ -import cv2 -import gradio as gr -import os -from PIL import Image -import numpy as np -import torch -from torch.autograd import Variable -from torchvision import transforms -import torch.nn.functional as F -import gdown -import matplotlib.pyplot as plt -import warnings -warnings.filterwarnings("ignore") - -os.system("git clone https://github.com/xuebinqin/DIS") -os.system("mv DIS/IS-Net/* .") - -# project imports -from data_loader_cache import normalize, im_reader, im_preprocess -from models import * - -#Helpers -device = 'cuda' if torch.cuda.is_available() else 'cpu' - -# Download official dis weights -if not os.path.exists("saved_models"): - os.mkdir("saved_models") - MODEL_PATH_URL = "https://drive.google.com/uc?id=1nV57qKuy--d5u1yvkng9aXW1KS4sOpOi" - gdown.download(MODEL_PATH_URL, "saved_models/isnet-general-use.pth", use_cookies=False) - -class GOSNormalize(object): - ''' - Normalize the Image using torch.transforms - ''' - def __init__(self, mean=[0.485,0.456,0.406], std=[0.229,0.224,0.225]): - self.mean = mean - self.std = std - - def __call__(self,image): - image = normalize(image,self.mean,self.std) - return image - - -transform = transforms.Compose([GOSNormalize([0.5,0.5,0.5],[1.0,1.0,1.0])]) - -def load_image(im_path, hypar): - im = im_reader(im_path) - im, im_shp = im_preprocess(im, hypar["cache_size"]) - im = torch.divide(im,255.0) - shape = torch.from_numpy(np.array(im_shp)) - return transform(im).unsqueeze(0), shape.unsqueeze(0) # make a batch of image, shape - - -def build_model(hypar,device): - net = hypar["model"]#GOSNETINC(3,1) - - # convert to half precision - if(hypar["model_digit"]=="half"): - net.half() - for layer in net.modules(): - if isinstance(layer, nn.BatchNorm2d): - layer.float() - - net.to(device) - - if(hypar["restore_model"]!=""): - net.load_state_dict(torch.load(hypar["model_path"]+"/"+hypar["restore_model"], map_location=device)) - net.to(device) - net.eval() - return net - - -def predict(net, inputs_val, shapes_val, hypar, device): - ''' - Given an Image, predict the mask - ''' - net.eval() - - if(hypar["model_digit"]=="full"): - inputs_val = inputs_val.type(torch.FloatTensor) - else: - inputs_val = inputs_val.type(torch.HalfTensor) - - - inputs_val_v = Variable(inputs_val, requires_grad=False).to(device) # wrap inputs in Variable - - ds_val = net(inputs_val_v)[0] # list of 6 results - - pred_val = ds_val[0][0,:,:,:] # B x 1 x H x W # we want the first one which is the most accurate prediction - - ## recover the prediction spatial size to the orignal image size - pred_val = torch.squeeze(F.upsample(torch.unsqueeze(pred_val,0),(shapes_val[0][0],shapes_val[0][1]),mode='bilinear')) - - ma = torch.max(pred_val) - mi = torch.min(pred_val) - pred_val = (pred_val-mi)/(ma-mi) # max = 1 - - if device == 'cuda': torch.cuda.empty_cache() - return (pred_val.detach().cpu().numpy()*255).astype(np.uint8) # it is 
the mask we need - -# Set Parameters -hypar = {} # paramters for inferencing - - -hypar["model_path"] ="./saved_models" ## load trained weights from this path -hypar["restore_model"] = "isnet-general-use.pth" ## name of the to-be-loaded weights -hypar["interm_sup"] = False ## indicate if activate intermediate feature supervision - -## choose floating point accuracy -- -hypar["model_digit"] = "full" ## indicates "half" or "full" accuracy of float number -hypar["seed"] = 0 - -hypar["cache_size"] = [1024, 1024] ## cached input spatial resolution, can be configured into different size - -## data augmentation parameters --- -hypar["input_size"] = [1024, 1024] ## mdoel input spatial size, usually use the same value hypar["cache_size"], which means we don't further resize the images -hypar["crop_size"] = [1024, 1024] ## random crop size from the input, it is usually set as smaller than hypar["cache_size"], e.g., [920,920] for data augmentation - -hypar["model"] = ISNetDIS() - - # Build Model -net = build_model(hypar, device) - - -def inference(image: Image): - image_path = image - - image_tensor, orig_size = load_image(image_path, hypar) - mask = predict(net, image_tensor, orig_size, hypar, device) - - pil_mask = Image.fromarray(mask).convert('L') - im_rgb = Image.open(image).convert("RGB") - - im_rgba = im_rgb.copy() - im_rgba.putalpha(pil_mask) - - return [im_rgba, pil_mask] - - -title = "DIS Background Removal" -description = "This is an unofficial demo for DIS, a model that can remove the background from a given image. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below.
    GitHub: https://github.com/xuebinqin/DIS" -article = "" - -interface = gr.Interface( - fn=inference, - inputs=gr.Image(type='filepath'), - outputs=["image", "image"], - examples=[['girl.jpg'], ['ship.jpg'], ['bike.jpg']], - title=title, - description=description, - article=article, - allow_flagging='never', - theme="default", - cache_examples=False, - ).launch(enable_queue=True, debug=True) diff --git a/spaces/SankarSrin/image-matting-app/ppmatting/models/human_matting.py b/spaces/SankarSrin/image-matting-app/ppmatting/models/human_matting.py deleted file mode 100644 index cf315edfa563fe231a119dd15b749c41157c988c..0000000000000000000000000000000000000000 --- a/spaces/SankarSrin/image-matting-app/ppmatting/models/human_matting.py +++ /dev/null @@ -1,454 +0,0 @@ -# copyright (c) 2022 PaddlePaddle Authors. All Rights Reserve. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from collections import defaultdict -import time - -import paddle -import paddle.nn as nn -import paddle.nn.functional as F -import paddleseg -from paddleseg.models import layers -from paddleseg import utils -from paddleseg.cvlibs import manager - -from ppmatting.models.losses import MRSD - - -def conv_up_psp(in_channels, out_channels, up_sample): - return nn.Sequential( - layers.ConvBNReLU( - in_channels, out_channels, 3, padding=1), - nn.Upsample( - scale_factor=up_sample, mode='bilinear', align_corners=False)) - - -@manager.MODELS.add_component -class HumanMatting(nn.Layer): - """A model for """ - - def __init__(self, - backbone, - pretrained=None, - backbone_scale=0.25, - refine_kernel_size=3, - if_refine=True): - super().__init__() - if if_refine: - if backbone_scale > 0.5: - raise ValueError( - 'Backbone_scale should not be greater than 1/2, but it is {}' - .format(backbone_scale)) - else: - backbone_scale = 1 - - self.backbone = backbone - self.backbone_scale = backbone_scale - self.pretrained = pretrained - self.if_refine = if_refine - if if_refine: - self.refiner = Refiner(kernel_size=refine_kernel_size) - self.loss_func_dict = None - - self.backbone_channels = backbone.feat_channels - ###################### - ### Decoder part - Glance - ###################### - self.psp_module = layers.PPModule( - self.backbone_channels[-1], - 512, - bin_sizes=(1, 3, 5), - dim_reduction=False, - align_corners=False) - self.psp4 = conv_up_psp(512, 256, 2) - self.psp3 = conv_up_psp(512, 128, 4) - self.psp2 = conv_up_psp(512, 64, 8) - self.psp1 = conv_up_psp(512, 64, 16) - # stage 5g - self.decoder5_g = nn.Sequential( - layers.ConvBNReLU( - 512 + self.backbone_channels[-1], 512, 3, padding=1), - layers.ConvBNReLU( - 512, 512, 3, padding=2, dilation=2), - layers.ConvBNReLU( - 512, 256, 3, padding=2, dilation=2), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 4g - self.decoder4_g = nn.Sequential( - layers.ConvBNReLU( - 512, 256, 3, padding=1), - layers.ConvBNReLU( - 256, 256, 3, padding=1), - layers.ConvBNReLU( - 256, 128, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', 
align_corners=False)) - # stage 3g - self.decoder3_g = nn.Sequential( - layers.ConvBNReLU( - 256, 128, 3, padding=1), - layers.ConvBNReLU( - 128, 128, 3, padding=1), - layers.ConvBNReLU( - 128, 64, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 2g - self.decoder2_g = nn.Sequential( - layers.ConvBNReLU( - 128, 128, 3, padding=1), - layers.ConvBNReLU( - 128, 128, 3, padding=1), - layers.ConvBNReLU( - 128, 64, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 1g - self.decoder1_g = nn.Sequential( - layers.ConvBNReLU( - 128, 64, 3, padding=1), - layers.ConvBNReLU( - 64, 64, 3, padding=1), - layers.ConvBNReLU( - 64, 64, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 0g - self.decoder0_g = nn.Sequential( - layers.ConvBNReLU( - 64, 64, 3, padding=1), - layers.ConvBNReLU( - 64, 64, 3, padding=1), - nn.Conv2D( - 64, 3, 3, padding=1)) - - ########################## - ### Decoder part - FOCUS - ########################## - self.bridge_block = nn.Sequential( - layers.ConvBNReLU( - self.backbone_channels[-1], 512, 3, dilation=2, padding=2), - layers.ConvBNReLU( - 512, 512, 3, dilation=2, padding=2), - layers.ConvBNReLU( - 512, 512, 3, dilation=2, padding=2)) - # stage 5f - self.decoder5_f = nn.Sequential( - layers.ConvBNReLU( - 512 + self.backbone_channels[-1], 512, 3, padding=1), - layers.ConvBNReLU( - 512, 512, 3, padding=2, dilation=2), - layers.ConvBNReLU( - 512, 256, 3, padding=2, dilation=2), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 4f - self.decoder4_f = nn.Sequential( - layers.ConvBNReLU( - 256 + self.backbone_channels[-2], 256, 3, padding=1), - layers.ConvBNReLU( - 256, 256, 3, padding=1), - layers.ConvBNReLU( - 256, 128, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 3f - self.decoder3_f = nn.Sequential( - layers.ConvBNReLU( - 128 + self.backbone_channels[-3], 128, 3, padding=1), - layers.ConvBNReLU( - 128, 128, 3, padding=1), - layers.ConvBNReLU( - 128, 64, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 2f - self.decoder2_f = nn.Sequential( - layers.ConvBNReLU( - 64 + self.backbone_channels[-4], 128, 3, padding=1), - layers.ConvBNReLU( - 128, 128, 3, padding=1), - layers.ConvBNReLU( - 128, 64, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 1f - self.decoder1_f = nn.Sequential( - layers.ConvBNReLU( - 64 + self.backbone_channels[-5], 64, 3, padding=1), - layers.ConvBNReLU( - 64, 64, 3, padding=1), - layers.ConvBNReLU( - 64, 64, 3, padding=1), - nn.Upsample( - scale_factor=2, mode='bilinear', align_corners=False)) - # stage 0f - self.decoder0_f = nn.Sequential( - layers.ConvBNReLU( - 64, 64, 3, padding=1), - layers.ConvBNReLU( - 64, 64, 3, padding=1), - nn.Conv2D( - 64, 1 + 1 + 32, 3, padding=1)) - self.init_weight() - - def forward(self, data): - src = data['img'] - src_h, src_w = paddle.shape(src)[2:] - if self.if_refine: - # It is not need when exporting. 
- if isinstance(src_h, paddle.Tensor): - if (src_h % 4 != 0) or (src_w % 4) != 0: - raise ValueError( - 'The input image must have width and height that are divisible by 4' - ) - - # Downsample src for backbone - src_sm = F.interpolate( - src, - scale_factor=self.backbone_scale, - mode='bilinear', - align_corners=False) - - # Base - fea_list = self.backbone(src_sm) - ########################## - ### Decoder part - GLANCE - ########################## - #psp: N, 512, H/32, W/32 - psp = self.psp_module(fea_list[-1]) - #d6_g: N, 512, H/16, W/16 - d5_g = self.decoder5_g(paddle.concat((psp, fea_list[-1]), 1)) - #d5_g: N, 512, H/8, W/8 - d4_g = self.decoder4_g(paddle.concat((self.psp4(psp), d5_g), 1)) - #d4_g: N, 256, H/4, W/4 - d3_g = self.decoder3_g(paddle.concat((self.psp3(psp), d4_g), 1)) - #d4_g: N, 128, H/2, W/2 - d2_g = self.decoder2_g(paddle.concat((self.psp2(psp), d3_g), 1)) - #d2_g: N, 64, H, W - d1_g = self.decoder1_g(paddle.concat((self.psp1(psp), d2_g), 1)) - #d0_g: N, 3, H, W - d0_g = self.decoder0_g(d1_g) - # The 1st channel is foreground. The 2nd is transition region. The 3rd is background. - # glance_sigmoid = F.sigmoid(d0_g) - glance_sigmoid = F.softmax(d0_g, axis=1) - - ########################## - ### Decoder part - FOCUS - ########################## - bb = self.bridge_block(fea_list[-1]) - #bg: N, 512, H/32, W/32 - d5_f = self.decoder5_f(paddle.concat((bb, fea_list[-1]), 1)) - #d5_f: N, 256, H/16, W/16 - d4_f = self.decoder4_f(paddle.concat((d5_f, fea_list[-2]), 1)) - #d4_f: N, 128, H/8, W/8 - d3_f = self.decoder3_f(paddle.concat((d4_f, fea_list[-3]), 1)) - #d3_f: N, 64, H/4, W/4 - d2_f = self.decoder2_f(paddle.concat((d3_f, fea_list[-4]), 1)) - #d2_f: N, 64, H/2, W/2 - d1_f = self.decoder1_f(paddle.concat((d2_f, fea_list[-5]), 1)) - #d1_f: N, 64, H, W - d0_f = self.decoder0_f(d1_f) - #d0_f: N, 1, H, W - focus_sigmoid = F.sigmoid(d0_f[:, 0:1, :, :]) - pha_sm = self.fusion(glance_sigmoid, focus_sigmoid) - err_sm = d0_f[:, 1:2, :, :] - err_sm = paddle.clip(err_sm, 0., 1.) - hid_sm = F.relu(d0_f[:, 2:, :, :]) - - # Refiner - if self.if_refine: - pha = self.refiner( - src=src, pha=pha_sm, err=err_sm, hid=hid_sm, tri=glance_sigmoid) - # Clamp outputs - pha = paddle.clip(pha, 0., 1.) 
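# At this point: glance_sigmoid is the 3-way trimap prediction (foreground / transition / background),
# focus_sigmoid is the detail alpha predicted for the transition region, pha_sm is their fusion at the
# coarse (downsampled) resolution, and pha, when if_refine is set, is the refined full-resolution alpha.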
- - if self.training: - logit_dict = { - 'glance': glance_sigmoid, - 'focus': focus_sigmoid, - 'fusion': pha_sm, - 'error': err_sm - } - if self.if_refine: - logit_dict['refine'] = pha - loss_dict = self.loss(logit_dict, data) - return logit_dict, loss_dict - else: - return pha if self.if_refine else pha_sm - - def loss(self, logit_dict, label_dict, loss_func_dict=None): - if loss_func_dict is None: - if self.loss_func_dict is None: - self.loss_func_dict = defaultdict(list) - self.loss_func_dict['glance'].append(nn.NLLLoss()) - self.loss_func_dict['focus'].append(MRSD()) - self.loss_func_dict['cm'].append(MRSD()) - self.loss_func_dict['err'].append(paddleseg.models.MSELoss()) - self.loss_func_dict['refine'].append(paddleseg.models.L1Loss()) - else: - self.loss_func_dict = loss_func_dict - - loss = {} - - # glance loss computation - # get glance label - glance_label = F.interpolate( - label_dict['trimap'], - logit_dict['glance'].shape[2:], - mode='nearest', - align_corners=False) - glance_label_trans = (glance_label == 128).astype('int64') - glance_label_bg = (glance_label == 0).astype('int64') - glance_label = glance_label_trans + glance_label_bg * 2 - loss_glance = self.loss_func_dict['glance'][0]( - paddle.log(logit_dict['glance'] + 1e-6), glance_label.squeeze(1)) - loss['glance'] = loss_glance - - # focus loss computation - focus_label = F.interpolate( - label_dict['alpha'], - logit_dict['focus'].shape[2:], - mode='bilinear', - align_corners=False) - loss_focus = self.loss_func_dict['focus'][0]( - logit_dict['focus'], focus_label, glance_label_trans) - loss['focus'] = loss_focus - - # collaborative matting loss - loss_cm_func = self.loss_func_dict['cm'] - # fusion_sigmoid loss - loss_cm = loss_cm_func[0](logit_dict['fusion'], focus_label) - loss['cm'] = loss_cm - - # error loss - err = F.interpolate( - logit_dict['error'], - label_dict['alpha'].shape[2:], - mode='bilinear', - align_corners=False) - err_label = (F.interpolate( - logit_dict['fusion'], - label_dict['alpha'].shape[2:], - mode='bilinear', - align_corners=False) - label_dict['alpha']).abs() - loss_err = self.loss_func_dict['err'][0](err, err_label) - loss['err'] = loss_err - - loss_all = 0.25 * loss_glance + 0.25 * loss_focus + 0.25 * loss_cm + loss_err - - # refine loss - if self.if_refine: - loss_refine = self.loss_func_dict['refine'][0](logit_dict['refine'], - label_dict['alpha']) - loss['refine'] = loss_refine - loss_all = loss_all + loss_refine - - loss['all'] = loss_all - return loss - - def fusion(self, glance_sigmoid, focus_sigmoid): - # glance_sigmoid [N, 3, H, W]. - # In index, 0 is foreground, 1 is transition, 2 is backbone. - # After fusion, the foreground is 1, the background is 0, and the transion is between (0, 1). - index = paddle.argmax(glance_sigmoid, axis=1, keepdim=True) - transition_mask = (index == 1).astype('float32') - fg = (index == 0).astype('float32') - fusion_sigmoid = focus_sigmoid * transition_mask + fg - return fusion_sigmoid - - def init_weight(self): - if self.pretrained is not None: - utils.load_entire_model(self, self.pretrained) - - -class Refiner(nn.Layer): - ''' - Refiner refines the coarse output to full resolution. - - Args: - kernel_size: The convolution kernel_size. Options: [1, 3]. Default: 3. 
- ''' - - def __init__(self, kernel_size=3): - super().__init__() - if kernel_size not in [1, 3]: - raise ValueError("kernel_size must be in [1, 3]") - - self.kernel_size = kernel_size - - channels = [32, 24, 16, 12, 1] - self.conv1 = layers.ConvBNReLU( - channels[0] + 4 + 3, - channels[1], - kernel_size, - padding=0, - bias_attr=False) - self.conv2 = layers.ConvBNReLU( - channels[1], channels[2], kernel_size, padding=0, bias_attr=False) - self.conv3 = layers.ConvBNReLU( - channels[2] + 3, - channels[3], - kernel_size, - padding=0, - bias_attr=False) - self.conv4 = nn.Conv2D( - channels[3], channels[4], kernel_size, padding=0, bias_attr=True) - - def forward(self, src, pha, err, hid, tri): - ''' - Args: - src: (B, 3, H, W) full resolution source image. - pha: (B, 1, Hc, Wc) coarse alpha prediction. - err: (B, 1, Hc, Hc) coarse error prediction. - hid: (B, 32, Hc, Hc) coarse hidden encoding. - tri: (B, 1, Hc, Hc) trimap prediction. - ''' - h_full, w_full = paddle.shape(src)[2:] - h_half, w_half = h_full // 2, w_full // 2 - h_quat, w_quat = h_full // 4, w_full // 4 - - x = paddle.concat([hid, pha, tri], axis=1) - x = F.interpolate( - x, - paddle.concat((h_half, w_half)), - mode='bilinear', - align_corners=False) - y = F.interpolate( - src, - paddle.concat((h_half, w_half)), - mode='bilinear', - align_corners=False) - - if self.kernel_size == 3: - x = F.pad(x, [3, 3, 3, 3]) - y = F.pad(y, [3, 3, 3, 3]) - - x = self.conv1(paddle.concat([x, y], axis=1)) - x = self.conv2(x) - - if self.kernel_size == 3: - x = F.interpolate(x, paddle.concat((h_full + 4, w_full + 4))) - y = F.pad(src, [2, 2, 2, 2]) - else: - x = F.interpolate( - x, paddle.concat((h_full, w_full)), mode='nearest') - y = src - - x = self.conv3(paddle.concat([x, y], axis=1)) - x = self.conv4(x) - - pha = x - return pha diff --git a/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/losses/stft_loss.py b/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/losses/stft_loss.py deleted file mode 100644 index 74d2aa21ad30ba094c406366e652067462f49cd2..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/modules/parallel_wavegan/losses/stft_loss.py +++ /dev/null @@ -1,153 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -"""STFT-based Loss modules.""" - -import torch -import torch.nn.functional as F - - -def stft(x, fft_size, hop_size, win_length, window): - """Perform STFT and convert to magnitude spectrogram. - - Args: - x (Tensor): Input signal tensor (B, T). - fft_size (int): FFT size. - hop_size (int): Hop size. - win_length (int): Window length. - window (str): Window function type. - - Returns: - Tensor: Magnitude spectrogram (B, #frames, fft_size // 2 + 1). - - """ - x_stft = torch.stft(x, fft_size, hop_size, win_length, window) - real = x_stft[..., 0] - imag = x_stft[..., 1] - - # NOTE(kan-bayashi): clamp is needed to avoid nan or inf - return torch.sqrt(torch.clamp(real ** 2 + imag ** 2, min=1e-7)).transpose(2, 1) - - -class SpectralConvergengeLoss(torch.nn.Module): - """Spectral convergence loss module.""" - - def __init__(self): - """Initilize spectral convergence loss module.""" - super(SpectralConvergengeLoss, self).__init__() - - def forward(self, x_mag, y_mag): - """Calculate forward propagation. - - Args: - x_mag (Tensor): Magnitude spectrogram of predicted signal (B, #frames, #freq_bins). - y_mag (Tensor): Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins). 
- - Returns: - Tensor: Spectral convergence loss value. - - """ - return torch.norm(y_mag - x_mag, p="fro") / torch.norm(y_mag, p="fro") - - -class LogSTFTMagnitudeLoss(torch.nn.Module): - """Log STFT magnitude loss module.""" - - def __init__(self): - """Initilize los STFT magnitude loss module.""" - super(LogSTFTMagnitudeLoss, self).__init__() - - def forward(self, x_mag, y_mag): - """Calculate forward propagation. - - Args: - x_mag (Tensor): Magnitude spectrogram of predicted signal (B, #frames, #freq_bins). - y_mag (Tensor): Magnitude spectrogram of groundtruth signal (B, #frames, #freq_bins). - - Returns: - Tensor: Log STFT magnitude loss value. - - """ - return F.l1_loss(torch.log(y_mag), torch.log(x_mag)) - - -class STFTLoss(torch.nn.Module): - """STFT loss module.""" - - def __init__(self, fft_size=1024, shift_size=120, win_length=600, window="hann_window"): - """Initialize STFT loss module.""" - super(STFTLoss, self).__init__() - self.fft_size = fft_size - self.shift_size = shift_size - self.win_length = win_length - self.window = getattr(torch, window)(win_length) - self.spectral_convergenge_loss = SpectralConvergengeLoss() - self.log_stft_magnitude_loss = LogSTFTMagnitudeLoss() - - def forward(self, x, y): - """Calculate forward propagation. - - Args: - x (Tensor): Predicted signal (B, T). - y (Tensor): Groundtruth signal (B, T). - - Returns: - Tensor: Spectral convergence loss value. - Tensor: Log STFT magnitude loss value. - - """ - x_mag = stft(x, self.fft_size, self.shift_size, self.win_length, self.window) - y_mag = stft(y, self.fft_size, self.shift_size, self.win_length, self.window) - sc_loss = self.spectral_convergenge_loss(x_mag, y_mag) - mag_loss = self.log_stft_magnitude_loss(x_mag, y_mag) - - return sc_loss, mag_loss - - -class MultiResolutionSTFTLoss(torch.nn.Module): - """Multi resolution STFT loss module.""" - - def __init__(self, - fft_sizes=[1024, 2048, 512], - hop_sizes=[120, 240, 50], - win_lengths=[600, 1200, 240], - window="hann_window"): - """Initialize Multi resolution STFT loss module. - - Args: - fft_sizes (list): List of FFT sizes. - hop_sizes (list): List of hop sizes. - win_lengths (list): List of window lengths. - window (str): Window function type. - - """ - super(MultiResolutionSTFTLoss, self).__init__() - assert len(fft_sizes) == len(hop_sizes) == len(win_lengths) - self.stft_losses = torch.nn.ModuleList() - for fs, ss, wl in zip(fft_sizes, hop_sizes, win_lengths): - self.stft_losses += [STFTLoss(fs, ss, wl, window)] - - def forward(self, x, y): - """Calculate forward propagation. - - Args: - x (Tensor): Predicted signal (B, T). - y (Tensor): Groundtruth signal (B, T). - - Returns: - Tensor: Multi resolution spectral convergence loss value. - Tensor: Multi resolution log STFT magnitude loss value. 
- - """ - sc_loss = 0.0 - mag_loss = 0.0 - for f in self.stft_losses: - sc_l, mag_l = f(x, y) - sc_loss += sc_l - mag_loss += mag_l - sc_loss /= len(self.stft_losses) - mag_loss /= len(self.stft_losses) - - return sc_loss, mag_loss diff --git a/spaces/StephanST/OpenLanderONNXonline/app.py b/spaces/StephanST/OpenLanderONNXonline/app.py deleted file mode 100644 index 20f6721f598059cd534d3d10c9eb1fa717b5b231..0000000000000000000000000000000000000000 --- a/spaces/StephanST/OpenLanderONNXonline/app.py +++ /dev/null @@ -1,165 +0,0 @@ -import cv2 -import numpy as np -import os -import torch -import onnxruntime as ort -import time -from functools import wraps -import argparse -from PIL import Image -from io import BytesIO -import streamlit as st - -# Parse command-line arguments -#parser = argparse.ArgumentParser() -#parser.add_argument("--mosaic", help="Enable mosaic processing mode", action="store_true") -#args = parser.parse_args() -#mosaic = args.mosaic # Set this based on your command line argument - -# For streamlit use let's just set mosaic to "true", but I'm leavind the command-line arg here for anyone to use - -mosaic = True - -def center_crop(img, new_height, new_width): - height, width, _ = img.shape - start_x = width//2 - new_width//2 - start_y = height//2 - new_height//2 - return img[start_y:start_y+new_height, start_x:start_x+new_width] - - -def mosaic_crop(img, size): - height, width, _ = img.shape - padding_height = (size - height % size) % size - padding_width = (size - width % size) % size - - padded_img = cv2.copyMakeBorder(img, 0, padding_height, 0, padding_width, cv2.BORDER_CONSTANT, value=[0, 0, 0]) - tiles = [padded_img[x:x+size, y:y+size] for x in range(0, padded_img.shape[0], size) for y in range(0, padded_img.shape[1], size)] - - return tiles, padded_img.shape[0] // size, padded_img.shape[1] // size, padding_height, padding_width - -def stitch_tiles(tiles, rows, cols, size): - return np.concatenate([np.concatenate([tiles[i*cols + j] for j in range(cols)], axis=1) for i in range(rows)], axis=0) - - -def timing_decorator(func): - @wraps(func) - def wrapper(*args, **kwargs): - start_time = time.time() - result = func(*args, **kwargs) - end_time = time.time() - - duration = end_time - start_time - print(f"Function '{func.__name__}' took {duration:.6f} seconds") - return result - - return wrapper - -@timing_decorator -def process_image(session, img, colors, mosaic=False): - if not mosaic: - # Crop the center of the image to 416x416 pixels - img = center_crop(img, 416, 416) - blob = cv2.dnn.blobFromImage(img, 1/255.0, (416, 416), swapRB=True, crop=False) - - # Perform inference - output = session.run(None, {session.get_inputs()[0].name: blob}) - - # Assuming the output is a probability map where higher values indicate higher probability of a class - output_img = output[0].squeeze(0).transpose(1, 2, 0) - output_img = (output_img * 122).clip(0, 255).astype(np.uint8) - output_mask = output_img.max(axis=2) - - output_mask_color = np.zeros((416, 416, 3), dtype=np.uint8) - - # Assign specific colors to the classes in the mask - for class_idx in np.unique(output_mask): - if class_idx in colors: - output_mask_color[output_mask == class_idx] = colors[class_idx] - - # Mask for the transparent class - transparent_mask = (output_mask == 122) - - # Convert the mask to a 3-channel image - transparent_mask = np.stack([transparent_mask]*3, axis=-1) - - # Where the mask is True, set the output color image to the input image - output_mask_color[transparent_mask] = img[transparent_mask] - - # 
Make the colorful mask semi-transparent - overlay = cv2.addWeighted(img, 0.6, output_mask_color, 0.4, 0) - - return overlay - - -st.title("OpenLander ONNX app") -st.write("Upload an image to process with the ONNX OpenLander model!") -st.write("Bear in mind that this model is **much less refined** than the embedded models at the moment.") - -models = { - "Embedded model better trained: DeeplabV3+, MobilenetV2, 416px resolution": "20230608_onnx_416_mbnv2_dl3/end2end.onnx", - "Strange model with overfitting" : "test_240000.onnx", - "test model training system V2: DV3+, 40k, 416px": "20230613_40k_test_v2.onnx" - } - - - - - -# Create a Streamlit radio button to select the desired model -selected_model = st.radio("Select a model", list(models.keys())) - - - -# set cuda = true if you have an NVIDIA GPU -cuda = torch.cuda.is_available() - -if cuda: - print("We have a GPU!") -providers = ['CUDAExecutionProvider'] if cuda else ['CPUExecutionProvider'] - -# Get the selected model's path -model_path = models[selected_model] - -session = ort.InferenceSession(model_path, providers=providers) - - -# Define colors for classes 0, 122 and 244 -colors = {0: (0, 0, 255), 122: (0, 0, 0), 244: (0, 255, 255)} # Red, Black, Yellow - -def load_image(uploaded_file): - try: - image = Image.open(uploaded_file) - return cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR) - except Exception as e: - st.write("Could not load image: ", e) - return None - - -uploaded_file = st.file_uploader("Choose an image...", type=["jpg", "png"]) -if uploaded_file is not None: - img = load_image(uploaded_file) - if img.shape[2] == 4: - img = img[:, :, :3] # Drop the alpha channel if it exists - img_processed = None - - if st.button('Process'): - with st.spinner('Processing...'): - start = time.time() - if mosaic: - tiles, rows, cols, padding_height, padding_width = mosaic_crop(img, 416) - processed_tiles = [process_image(session, tile, colors, mosaic=True) for tile in tiles] - overlay = stitch_tiles(processed_tiles, rows, cols, 416) - - # Crop the padding back out - overlay = overlay[:overlay.shape[0]-padding_height, :overlay.shape[1]-padding_width] - img_processed = overlay - else: - img_processed = process_image(session, img, colors) - end = time.time() - st.write(f"Processing time: {end - start} seconds") - - st.image(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), caption='Uploaded Image.', use_column_width=True) - - if img_processed is not None: - st.image(cv2.cvtColor(img_processed, cv2.COLOR_BGR2RGB), caption='Processed Image.', use_column_width=True) - st.write("Red => obstacle ||| Yellow => Human obstacle ||| no color => clear for landing or delivery ") diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_embed.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_embed.py deleted file mode 100644 index 3f0885e73ccd6e3e1ab848ea503499ebf867f214..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/terminal/tests/test_embed.py +++ /dev/null @@ -1,138 +0,0 @@ -"""Test embedding of IPython""" - -#----------------------------------------------------------------------------- -# Copyright (C) 2013 The IPython Development Team -# -# Distributed under the terms of the BSD License. The full license is in -# the file COPYING, distributed as part of this software. 
-#----------------------------------------------------------------------------- - -#----------------------------------------------------------------------------- -# Imports -#----------------------------------------------------------------------------- - -import os -import subprocess -import sys - -from IPython.utils.tempdir import NamedFileInTemporaryDirectory -from IPython.testing.decorators import skip_win32 -from IPython.testing import IPYTHON_TESTING_TIMEOUT_SCALE - -#----------------------------------------------------------------------------- -# Tests -#----------------------------------------------------------------------------- - - -_sample_embed = b""" -import IPython - -a = 3 -b = 14 -print(a, '.', b) - -IPython.embed() - -print('bye!') -""" - -_exit = b"exit\r" - -def test_ipython_embed(): - """test that `IPython.embed()` works""" - with NamedFileInTemporaryDirectory('file_with_embed.py') as f: - f.write(_sample_embed) - f.flush() - f.close() # otherwise msft won't be able to read the file - - # run `python file_with_embed.py` - cmd = [sys.executable, f.name] - env = os.environ.copy() - env['IPY_TEST_SIMPLE_PROMPT'] = '1' - - p = subprocess.Popen(cmd, env=env, stdin=subprocess.PIPE, - stdout=subprocess.PIPE, stderr=subprocess.PIPE) - out, err = p.communicate(_exit) - std = out.decode('UTF-8') - - assert p.returncode == 0 - assert "3 . 14" in std - if os.name != "nt": - # TODO: Fix up our different stdout references, see issue gh-14 - assert "IPython" in std - assert "bye!" in std - - -@skip_win32 -def test_nest_embed(): - """test that `IPython.embed()` is nestable""" - import pexpect - ipy_prompt = r']:' #ansi color codes give problems matching beyond this - env = os.environ.copy() - env['IPY_TEST_SIMPLE_PROMPT'] = '1' - - - child = pexpect.spawn(sys.executable, ['-m', 'IPython', '--colors=nocolor'], - env=env) - child.timeout = 15 * IPYTHON_TESTING_TIMEOUT_SCALE - child.expect(ipy_prompt) - child.timeout = 5 * IPYTHON_TESTING_TIMEOUT_SCALE - child.sendline("import IPython") - child.expect(ipy_prompt) - child.sendline("ip0 = get_ipython()") - #enter first nested embed - child.sendline("IPython.embed()") - #skip the banner until we get to a prompt - try: - prompted = -1 - while prompted != 0: - prompted = child.expect([ipy_prompt, '\r\n']) - except pexpect.TIMEOUT as e: - print(e) - #child.interact() - child.sendline("embed1 = get_ipython()") - child.expect(ipy_prompt) - child.sendline("print('true' if embed1 is not ip0 else 'false')") - assert(child.expect(['true\r\n', 'false\r\n']) == 0) - child.expect(ipy_prompt) - child.sendline("print('true' if IPython.get_ipython() is embed1 else 'false')") - assert(child.expect(['true\r\n', 'false\r\n']) == 0) - child.expect(ipy_prompt) - #enter second nested embed - child.sendline("IPython.embed()") - #skip the banner until we get to a prompt - try: - prompted = -1 - while prompted != 0: - prompted = child.expect([ipy_prompt, '\r\n']) - except pexpect.TIMEOUT as e: - print(e) - #child.interact() - child.sendline("embed2 = get_ipython()") - child.expect(ipy_prompt) - child.sendline("print('true' if embed2 is not embed1 else 'false')") - assert(child.expect(['true\r\n', 'false\r\n']) == 0) - child.expect(ipy_prompt) - child.sendline("print('true' if embed2 is IPython.get_ipython() else 'false')") - assert(child.expect(['true\r\n', 'false\r\n']) == 0) - child.expect(ipy_prompt) - child.sendline('exit') - #back at first embed - child.expect(ipy_prompt) - child.sendline("print('true' if get_ipython() is embed1 else 'false')") - 
assert(child.expect(['true\r\n', 'false\r\n']) == 0) - child.expect(ipy_prompt) - child.sendline("print('true' if IPython.get_ipython() is embed1 else 'false')") - assert(child.expect(['true\r\n', 'false\r\n']) == 0) - child.expect(ipy_prompt) - child.sendline('exit') - #back at launching scope - child.expect(ipy_prompt) - child.sendline("print('true' if get_ipython() is ip0 else 'false')") - assert(child.expect(['true\r\n', 'false\r\n']) == 0) - child.expect(ipy_prompt) - child.sendline("print('true' if IPython.get_ipython() is ip0 else 'false')") - assert(child.expect(['true\r\n', 'false\r\n']) == 0) - child.expect(ipy_prompt) - child.sendline('exit') - child.close() diff --git a/spaces/TabPFN/TabPFNPrediction/decision_boundary.py b/spaces/TabPFN/TabPFNPrediction/decision_boundary.py deleted file mode 100644 index fe1f84087aba768cba41b2886226cac5fbf17f19..0000000000000000000000000000000000000000 --- a/spaces/TabPFN/TabPFNPrediction/decision_boundary.py +++ /dev/null @@ -1,300 +0,0 @@ -import matplotlib.pyplot as plt -from matplotlib.colors import ListedColormap - -from functools import reduce - -import numpy as np - -from sklearn.preprocessing import LabelEncoder -from sklearn.utils import check_matplotlib_support -from sklearn.utils import _safe_indexing -from sklearn.base import is_regressor -from sklearn.utils.validation import check_is_fitted - - -def _check_boundary_response_method(estimator, response_method): - """Return prediction method from the `response_method` for decision boundary. - Parameters - ---------- - estimator : object - Fitted estimator to check. - response_method : {'auto', 'predict_proba', 'decision_function', 'predict'} - Specifies whether to use :term:`predict_proba`, - :term:`decision_function`, :term:`predict` as the target response. - If set to 'auto', the response method is tried in the following order: - :term:`decision_function`, :term:`predict_proba`, :term:`predict`. - Returns - ------- - prediction_method: callable - Prediction method of estimator. - """ - has_classes = hasattr(estimator, "classes_") - - if has_classes and len(estimator.classes_) > 2: - if response_method not in {"auto", "predict"}: - msg = ( - "Multiclass classifiers are only supported when response_method is" - " 'predict' or 'auto'" - ) - raise ValueError(msg) - methods_list = ["predict"] - elif response_method == "auto": - methods_list = ["decision_function", "predict_proba", "predict"] - else: - methods_list = [response_method] - - prediction_method = [getattr(estimator, method, None) for method in methods_list] - prediction_method = reduce(lambda x, y: x or y, prediction_method) - if prediction_method is None: - raise ValueError( - f"{estimator.__class__.__name__} has none of the following attributes: " - f"{', '.join(methods_list)}." - ) - - return prediction_method - - -class DecisionBoundaryDisplay: - """Decisions boundary visualization. - It is recommended to use - :func:`~sklearn.inspection.DecisionBoundaryDisplay.from_estimator` - to create a :class:`DecisionBoundaryDisplay`. All parameters are stored as - attributes. - Read more in the :ref:`User Guide `. - .. versionadded:: 1.1 - Parameters - ---------- - xx0 : ndarray of shape (grid_resolution, grid_resolution) - First output of :func:`meshgrid `. - xx1 : ndarray of shape (grid_resolution, grid_resolution) - Second output of :func:`meshgrid `. - response : ndarray of shape (grid_resolution, grid_resolution) - Values of the response function. - xlabel : str, default=None - Default label to place on x axis. 
- ylabel : str, default=None - Default label to place on y axis. - Attributes - ---------- - surface_ : matplotlib `QuadContourSet` or `QuadMesh` - If `plot_method` is 'contour' or 'contourf', `surface_` is a - :class:`QuadContourSet `. If - `plot_method is `pcolormesh`, `surface_` is a - :class:`QuadMesh `. - ax_ : matplotlib Axes - Axes with confusion matrix. - figure_ : matplotlib Figure - Figure containing the confusion matrix. - """ - - def __init__(self, *, xx0, xx1, response, xlabel=None, ylabel=None): - self.xx0 = xx0 - self.xx1 = xx1 - self.response = response - self.xlabel = xlabel - self.ylabel = ylabel - - def plot(self, plot_method="contourf", ax=None, xlabel=None, ylabel=None, **kwargs): - """Plot visualization. - Parameters - ---------- - plot_method : {'contourf', 'contour', 'pcolormesh'}, default='contourf' - Plotting method to call when plotting the response. Please refer - to the following matplotlib documentation for details: - :func:`contourf `, - :func:`contour `, - :func:`pcolomesh `. - ax : Matplotlib axes, default=None - Axes object to plot on. If `None`, a new figure and axes is - created. - xlabel : str, default=None - Overwrite the x-axis label. - ylabel : str, default=None - Overwrite the y-axis label. - **kwargs : dict - Additional keyword arguments to be passed to the `plot_method`. - Returns - ------- - display: :class:`~sklearn.inspection.DecisionBoundaryDisplay` - """ - check_matplotlib_support("DecisionBoundaryDisplay.plot") - import matplotlib.pyplot as plt # noqa - - if plot_method not in ("contourf", "contour", "pcolormesh"): - raise ValueError( - "plot_method must be 'contourf', 'contour', or 'pcolormesh'" - ) - - if ax is None: - _, ax = plt.subplots() - - plot_func = getattr(ax, plot_method) - self.surface_ = plot_func(self.xx0, self.xx1, self.response, **kwargs) - - if xlabel is not None or not ax.get_xlabel(): - xlabel = self.xlabel if xlabel is None else xlabel - ax.set_xlabel(xlabel) - if ylabel is not None or not ax.get_ylabel(): - ylabel = self.ylabel if ylabel is None else ylabel - ax.set_ylabel(ylabel) - - self.ax_ = ax - self.figure_ = ax.figure - return self - - @classmethod - def from_estimator( - cls, - estimator, - X, - *, - grid_resolution=100, - eps=1.0, - plot_method="contourf", - response_method="auto", - xlabel=None, - ylabel=None, - ax=None, - **kwargs, - ): - """Plot decision boundary given an estimator. - Read more in the :ref:`User Guide `. - Parameters - ---------- - estimator : object - Trained estimator used to plot the decision boundary. - X : {array-like, sparse matrix, dataframe} of shape (n_samples, 2) - Input data that should be only 2-dimensional. - grid_resolution : int, default=100 - Number of grid points to use for plotting decision boundary. - Higher values will make the plot look nicer but be slower to - render. - eps : float, default=1.0 - Extends the minimum and maximum values of X for evaluating the - response function. - plot_method : {'contourf', 'contour', 'pcolormesh'}, default='contourf' - Plotting method to call when plotting the response. Please refer - to the following matplotlib documentation for details: - :func:`contourf `, - :func:`contour `, - :func:`pcolomesh `. - response_method : {'auto', 'predict_proba', 'decision_function', \ - 'predict'}, default='auto' - Specifies whether to use :term:`predict_proba`, - :term:`decision_function`, :term:`predict` as the target response. 
- If set to 'auto', the response method is tried in the following order: - :term:`decision_function`, :term:`predict_proba`, :term:`predict`. - For multiclass problems, :term:`predict` is selected when - `response_method="auto"`. - xlabel : str, default=None - The label used for the x-axis. If `None`, an attempt is made to - extract a label from `X` if it is a dataframe, otherwise an empty - string is used. - ylabel : str, default=None - The label used for the y-axis. If `None`, an attempt is made to - extract a label from `X` if it is a dataframe, otherwise an empty - string is used. - ax : Matplotlib axes, default=None - Axes object to plot on. If `None`, a new figure and axes is - created. - **kwargs : dict - Additional keyword arguments to be passed to the - `plot_method`. - Returns - ------- - display : :class:`~sklearn.inspection.DecisionBoundaryDisplay` - Object that stores the result. - See Also - -------- - DecisionBoundaryDisplay : Decision boundary visualization. - ConfusionMatrixDisplay.from_estimator : Plot the confusion matrix - given an estimator, the data, and the label. - ConfusionMatrixDisplay.from_predictions : Plot the confusion matrix - given the true and predicted labels. - Examples - -------- - >>> import matplotlib.pyplot as plt - >>> from sklearn.datasets import load_iris - >>> from sklearn.linear_model import LogisticRegression - >>> from sklearn.inspection import DecisionBoundaryDisplay - >>> iris = load_iris() - >>> X = iris.data[:, :2] - >>> classifier = LogisticRegression().fit(X, iris.target) - >>> disp = DecisionBoundaryDisplay.from_estimator( - ... classifier, X, response_method="predict", - ... xlabel=iris.feature_names[0], ylabel=iris.feature_names[1], - ... alpha=0.5, - ... ) - >>> disp.ax_.scatter(X[:, 0], X[:, 1], c=iris.target, edgecolor="k") - <...> - >>> plt.show() - """ - check_matplotlib_support(f"{cls.__name__}.from_estimator") - check_is_fitted(estimator) - - if not grid_resolution > 1: - raise ValueError( - "grid_resolution must be greater than 1. Got" - f" {grid_resolution} instead." - ) - - if not eps >= 0: - raise ValueError( - f"eps must be greater than or equal to 0. Got {eps} instead." - ) - - possible_plot_methods = ("contourf", "contour", "pcolormesh") - if plot_method not in possible_plot_methods: - available_methods = ", ".join(possible_plot_methods) - raise ValueError( - f"plot_method must be one of {available_methods}. " - f"Got {plot_method} instead." 
- ) - - x0, x1 = _safe_indexing(X, 0, axis=1), _safe_indexing(X, 1, axis=1) - - x0_min, x0_max = x0.min() - eps, x0.max() + eps - x1_min, x1_max = x1.min() - eps, x1.max() + eps - - xx0, xx1 = np.meshgrid( - np.linspace(x0_min, x0_max, grid_resolution), - np.linspace(x1_min, x1_max, grid_resolution), - ) - if hasattr(X, "iloc"): - # we need to preserve the feature names and therefore get an empty dataframe - X_grid = X.iloc[[], :].copy() - X_grid.iloc[:, 0] = xx0.ravel() - X_grid.iloc[:, 1] = xx1.ravel() - else: - X_grid = np.c_[xx0.ravel(), xx1.ravel()] - - pred_func = _check_boundary_response_method(estimator, response_method) - response = pred_func(X_grid) - - # convert classes predictions into integers - if pred_func.__name__ == "predict" and hasattr(estimator, "classes_"): - encoder = LabelEncoder() - encoder.classes_ = estimator.classes_ - response = encoder.transform(response) - - if response.ndim != 1: - if is_regressor(estimator): - raise ValueError("Multi-output regressors are not supported") - - # TODO: Support pos_label - response = response[:, 1] - - if xlabel is None: - xlabel = X.columns[0] if hasattr(X, "columns") else "" - - if ylabel is None: - ylabel = X.columns[1] if hasattr(X, "columns") else "" - - display = DecisionBoundaryDisplay( - xx0=xx0, - xx1=xx1, - response=response.reshape(xx0.shape), - xlabel=xlabel, - ylabel=ylabel, - ) - return display.plot(ax=ax, plot_method=plot_method, **kwargs) \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/dir_util.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/dir_util.py deleted file mode 100644 index 23dc3392a2c9b11f93bc8d9e1381848b7c8fd7b3..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/dir_util.py +++ /dev/null @@ -1,243 +0,0 @@ -"""distutils.dir_util - -Utility functions for manipulating directories and directory trees.""" - -import os -import errno -from .errors import DistutilsInternalError, DistutilsFileError -from ._log import log - -# cache for by mkpath() -- in addition to cheapening redundant calls, -# eliminates redundant "creating /foo/bar/baz" messages in dry-run mode -_path_created = {} - - -def mkpath(name, mode=0o777, verbose=1, dry_run=0): # noqa: C901 - """Create a directory and any missing ancestor directories. - - If the directory already exists (or if 'name' is the empty string, which - means the current directory, which of course exists), then do nothing. - Raise DistutilsFileError if unable to create some directory along the way - (eg. some sub-path exists, but is a file rather than a directory). - If 'verbose' is true, print a one-line summary of each mkdir to stdout. - Return the list of directories actually created. - - os.makedirs is not used because: - - a) It's new to Python 1.5.2, and - b) it blows up if the directory already exists (in which case it should - silently succeed). - """ - - global _path_created - - # Detect a common bug -- name is None - if not isinstance(name, str): - raise DistutilsInternalError( - "mkpath: 'name' must be a string (got {!r})".format(name) - ) - - # XXX what's the better way to handle verbosity? print as we create - # each directory in the path (the current behaviour), or only announce - # the creation of the whole path? 
(quite easy to do the latter since - # we're not using a recursive algorithm) - - name = os.path.normpath(name) - created_dirs = [] - if os.path.isdir(name) or name == '': - return created_dirs - if _path_created.get(os.path.abspath(name)): - return created_dirs - - (head, tail) = os.path.split(name) - tails = [tail] # stack of lone dirs to create - - while head and tail and not os.path.isdir(head): - (head, tail) = os.path.split(head) - tails.insert(0, tail) # push next higher dir onto stack - - # now 'head' contains the deepest directory that already exists - # (that is, the child of 'head' in 'name' is the highest directory - # that does *not* exist) - for d in tails: - # print "head = %s, d = %s: " % (head, d), - head = os.path.join(head, d) - abs_head = os.path.abspath(head) - - if _path_created.get(abs_head): - continue - - if verbose >= 1: - log.info("creating %s", head) - - if not dry_run: - try: - os.mkdir(head, mode) - except OSError as exc: - if not (exc.errno == errno.EEXIST and os.path.isdir(head)): - raise DistutilsFileError( - "could not create '{}': {}".format(head, exc.args[-1]) - ) - created_dirs.append(head) - - _path_created[abs_head] = 1 - return created_dirs - - -def create_tree(base_dir, files, mode=0o777, verbose=1, dry_run=0): - """Create all the empty directories under 'base_dir' needed to put 'files' - there. - - 'base_dir' is just the name of a directory which doesn't necessarily - exist yet; 'files' is a list of filenames to be interpreted relative to - 'base_dir'. 'base_dir' + the directory portion of every file in 'files' - will be created if it doesn't already exist. 'mode', 'verbose' and - 'dry_run' flags are as for 'mkpath()'. - """ - # First get the list of directories to create - need_dir = set() - for file in files: - need_dir.add(os.path.join(base_dir, os.path.dirname(file))) - - # Now create them - for dir in sorted(need_dir): - mkpath(dir, mode, verbose=verbose, dry_run=dry_run) - - -def copy_tree( # noqa: C901 - src, - dst, - preserve_mode=1, - preserve_times=1, - preserve_symlinks=0, - update=0, - verbose=1, - dry_run=0, -): - """Copy an entire directory tree 'src' to a new location 'dst'. - - Both 'src' and 'dst' must be directory names. If 'src' is not a - directory, raise DistutilsFileError. If 'dst' does not exist, it is - created with 'mkpath()'. The end result of the copy is that every - file in 'src' is copied to 'dst', and directories under 'src' are - recursively copied to 'dst'. Return the list of files that were - copied or might have been copied, using their output name. The - return value is unaffected by 'update' or 'dry_run': it is simply - the list of all files under 'src', with the names changed to be - under 'dst'. - - 'preserve_mode' and 'preserve_times' are the same as for - 'copy_file'; note that they only apply to regular files, not to - directories. If 'preserve_symlinks' is true, symlinks will be - copied as symlinks (on platforms that support them!); otherwise - (the default), the destination of the symlink will be copied. - 'update' and 'verbose' are the same as for 'copy_file'. 
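Example (a minimal sketch; the paths are purely illustrative):

    copied = copy_tree('build/lib', 'dist/pkg', update=1, verbose=0)

Here 'copied' holds the destination name of every file found under 'build/lib',
whether or not 'update' decided it actually needed copying.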
- """ - from distutils.file_util import copy_file - - if not dry_run and not os.path.isdir(src): - raise DistutilsFileError("cannot copy tree '%s': not a directory" % src) - try: - names = os.listdir(src) - except OSError as e: - if dry_run: - names = [] - else: - raise DistutilsFileError( - "error listing files in '{}': {}".format(src, e.strerror) - ) - - if not dry_run: - mkpath(dst, verbose=verbose) - - outputs = [] - - for n in names: - src_name = os.path.join(src, n) - dst_name = os.path.join(dst, n) - - if n.startswith('.nfs'): - # skip NFS rename files - continue - - if preserve_symlinks and os.path.islink(src_name): - link_dest = os.readlink(src_name) - if verbose >= 1: - log.info("linking %s -> %s", dst_name, link_dest) - if not dry_run: - os.symlink(link_dest, dst_name) - outputs.append(dst_name) - - elif os.path.isdir(src_name): - outputs.extend( - copy_tree( - src_name, - dst_name, - preserve_mode, - preserve_times, - preserve_symlinks, - update, - verbose=verbose, - dry_run=dry_run, - ) - ) - else: - copy_file( - src_name, - dst_name, - preserve_mode, - preserve_times, - update, - verbose=verbose, - dry_run=dry_run, - ) - outputs.append(dst_name) - - return outputs - - -def _build_cmdtuple(path, cmdtuples): - """Helper for remove_tree().""" - for f in os.listdir(path): - real_f = os.path.join(path, f) - if os.path.isdir(real_f) and not os.path.islink(real_f): - _build_cmdtuple(real_f, cmdtuples) - else: - cmdtuples.append((os.remove, real_f)) - cmdtuples.append((os.rmdir, path)) - - -def remove_tree(directory, verbose=1, dry_run=0): - """Recursively remove an entire directory tree. - - Any errors are ignored (apart from being reported to stdout if 'verbose' - is true). - """ - global _path_created - - if verbose >= 1: - log.info("removing '%s' (and everything under it)", directory) - if dry_run: - return - cmdtuples = [] - _build_cmdtuple(directory, cmdtuples) - for cmd in cmdtuples: - try: - cmd[0](cmd[1]) - # remove dir from cache if it's already there - abspath = os.path.abspath(cmd[1]) - if abspath in _path_created: - _path_created.pop(abspath) - except OSError as exc: - log.warning("error removing %s: %s", directory, exc) - - -def ensure_relative(path): - """Take the full path 'path', and make it a relative path. - - This is useful to make 'path' the second argument to os.path.join(). 
- """ - drive, path = os.path.splitdrive(path) - if path[0:1] == os.sep: - path = drive + path[1:] - return path diff --git a/spaces/TheRealZoink/Zoink_OV3RL0AD/README.md b/spaces/TheRealZoink/Zoink_OV3RL0AD/README.md deleted file mode 100644 index 2951c710b6b41ba2b2f3a4f305e16747d5b0eded..0000000000000000000000000000000000000000 --- a/spaces/TheRealZoink/Zoink_OV3RL0AD/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Zoink OV3RL0AD -emoji: 😻 -colorFrom: pink -colorTo: gray -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/TornikeO/dis-background-removal/models/isnet.py b/spaces/TornikeO/dis-background-removal/models/isnet.py deleted file mode 100644 index 8be74bb03954496e6cb8af6e1c8e1cd2c0bbb80c..0000000000000000000000000000000000000000 --- a/spaces/TornikeO/dis-background-removal/models/isnet.py +++ /dev/null @@ -1,610 +0,0 @@ -import torch -import torch.nn as nn -from torchvision import models -import torch.nn.functional as F - - -bce_loss = nn.BCELoss(size_average=True) -def muti_loss_fusion(preds, target): - loss0 = 0.0 - loss = 0.0 - - for i in range(0,len(preds)): - # print("i: ", i, preds[i].shape) - if(preds[i].shape[2]!=target.shape[2] or preds[i].shape[3]!=target.shape[3]): - # tmp_target = _upsample_like(target,preds[i]) - tmp_target = F.interpolate(target, size=preds[i].size()[2:], mode='bilinear', align_corners=True) - loss = loss + bce_loss(preds[i],tmp_target) - else: - loss = loss + bce_loss(preds[i],target) - if(i==0): - loss0 = loss - return loss0, loss - -fea_loss = nn.MSELoss(size_average=True) -kl_loss = nn.KLDivLoss(size_average=True) -l1_loss = nn.L1Loss(size_average=True) -smooth_l1_loss = nn.SmoothL1Loss(size_average=True) -def muti_loss_fusion_kl(preds, target, dfs, fs, mode='MSE'): - loss0 = 0.0 - loss = 0.0 - - for i in range(0,len(preds)): - # print("i: ", i, preds[i].shape) - if(preds[i].shape[2]!=target.shape[2] or preds[i].shape[3]!=target.shape[3]): - # tmp_target = _upsample_like(target,preds[i]) - tmp_target = F.interpolate(target, size=preds[i].size()[2:], mode='bilinear', align_corners=True) - loss = loss + bce_loss(preds[i],tmp_target) - else: - loss = loss + bce_loss(preds[i],target) - if(i==0): - loss0 = loss - - for i in range(0,len(dfs)): - if(mode=='MSE'): - loss = loss + fea_loss(dfs[i],fs[i]) ### add the mse loss of features as additional constraints - # print("fea_loss: ", fea_loss(dfs[i],fs[i]).item()) - elif(mode=='KL'): - loss = loss + kl_loss(F.log_softmax(dfs[i],dim=1),F.softmax(fs[i],dim=1)) - # print("kl_loss: ", kl_loss(F.log_softmax(dfs[i],dim=1),F.softmax(fs[i],dim=1)).item()) - elif(mode=='MAE'): - loss = loss + l1_loss(dfs[i],fs[i]) - # print("ls_loss: ", l1_loss(dfs[i],fs[i])) - elif(mode=='SmoothL1'): - loss = loss + smooth_l1_loss(dfs[i],fs[i]) - # print("SmoothL1: ", smooth_l1_loss(dfs[i],fs[i]).item()) - - return loss0, loss - -class REBNCONV(nn.Module): - def __init__(self,in_ch=3,out_ch=3,dirate=1,stride=1): - super(REBNCONV,self).__init__() - - self.conv_s1 = nn.Conv2d(in_ch,out_ch,3,padding=1*dirate,dilation=1*dirate,stride=stride) - self.bn_s1 = nn.BatchNorm2d(out_ch) - self.relu_s1 = nn.ReLU(inplace=True) - - def forward(self,x): - - hx = x - xout = self.relu_s1(self.bn_s1(self.conv_s1(hx))) - - return xout - -## upsample tensor 'src' to have the same spatial size with tensor 'tar' -def _upsample_like(src,tar): - - src = F.upsample(src,size=tar.shape[2:],mode='bilinear') - - return src - 
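The helpers above wire up ISNet's deep supervision: muti_loss_fusion compares every side output against the ground-truth mask with BCE, resizing the target (much as _upsample_like does for features) whenever the spatial sizes differ. A minimal, self-contained sketch of that pattern follows; the helper name deep_supervision_loss, the random tensors and the three output scales are illustrative assumptions, not part of the original model.

import torch
import torch.nn as nn
import torch.nn.functional as F

bce = nn.BCELoss()

def deep_supervision_loss(side_outputs, target):
    # side_outputs: list of (B, 1, h_i, w_i) probability maps predicted at different scales
    # target:       (B, 1, H, W) binary ground-truth mask
    total = 0.0
    for pred in side_outputs:
        if pred.shape[2:] != target.shape[2:]:
            # bring the target to this side output's resolution, as muti_loss_fusion does
            tgt = F.interpolate(target, size=pred.shape[2:], mode='bilinear', align_corners=True)
        else:
            tgt = target
        total = total + bce(pred, tgt)
    return total

# toy check: three side outputs at full, half and quarter resolution
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
preds = [torch.sigmoid(torch.randn(2, 1, s, s)) for s in (64, 32, 16)]
print(deep_supervision_loss(preds, target))

Supervising every scale this way gives the shallower RSU decoders below a direct gradient signal instead of relying only on the final full-resolution output.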
- -### RSU-7 ### -class RSU7(nn.Module): - - def __init__(self, in_ch=3, mid_ch=12, out_ch=3, img_size=512): - super(RSU7,self).__init__() - - self.in_ch = in_ch - self.mid_ch = mid_ch - self.out_ch = out_ch - - self.rebnconvin = REBNCONV(in_ch,out_ch,dirate=1) ## 1 -> 1/2 - - self.rebnconv1 = REBNCONV(out_ch,mid_ch,dirate=1) - self.pool1 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv2 = REBNCONV(mid_ch,mid_ch,dirate=1) - self.pool2 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv3 = REBNCONV(mid_ch,mid_ch,dirate=1) - self.pool3 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv4 = REBNCONV(mid_ch,mid_ch,dirate=1) - self.pool4 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv5 = REBNCONV(mid_ch,mid_ch,dirate=1) - self.pool5 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv6 = REBNCONV(mid_ch,mid_ch,dirate=1) - - self.rebnconv7 = REBNCONV(mid_ch,mid_ch,dirate=2) - - self.rebnconv6d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv5d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv4d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv3d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv2d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv1d = REBNCONV(mid_ch*2,out_ch,dirate=1) - - def forward(self,x): - b, c, h, w = x.shape - - hx = x - hxin = self.rebnconvin(hx) - - hx1 = self.rebnconv1(hxin) - hx = self.pool1(hx1) - - hx2 = self.rebnconv2(hx) - hx = self.pool2(hx2) - - hx3 = self.rebnconv3(hx) - hx = self.pool3(hx3) - - hx4 = self.rebnconv4(hx) - hx = self.pool4(hx4) - - hx5 = self.rebnconv5(hx) - hx = self.pool5(hx5) - - hx6 = self.rebnconv6(hx) - - hx7 = self.rebnconv7(hx6) - - hx6d = self.rebnconv6d(torch.cat((hx7,hx6),1)) - hx6dup = _upsample_like(hx6d,hx5) - - hx5d = self.rebnconv5d(torch.cat((hx6dup,hx5),1)) - hx5dup = _upsample_like(hx5d,hx4) - - hx4d = self.rebnconv4d(torch.cat((hx5dup,hx4),1)) - hx4dup = _upsample_like(hx4d,hx3) - - hx3d = self.rebnconv3d(torch.cat((hx4dup,hx3),1)) - hx3dup = _upsample_like(hx3d,hx2) - - hx2d = self.rebnconv2d(torch.cat((hx3dup,hx2),1)) - hx2dup = _upsample_like(hx2d,hx1) - - hx1d = self.rebnconv1d(torch.cat((hx2dup,hx1),1)) - - return hx1d + hxin - - -### RSU-6 ### -class RSU6(nn.Module): - - def __init__(self, in_ch=3, mid_ch=12, out_ch=3): - super(RSU6,self).__init__() - - self.rebnconvin = REBNCONV(in_ch,out_ch,dirate=1) - - self.rebnconv1 = REBNCONV(out_ch,mid_ch,dirate=1) - self.pool1 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv2 = REBNCONV(mid_ch,mid_ch,dirate=1) - self.pool2 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv3 = REBNCONV(mid_ch,mid_ch,dirate=1) - self.pool3 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv4 = REBNCONV(mid_ch,mid_ch,dirate=1) - self.pool4 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv5 = REBNCONV(mid_ch,mid_ch,dirate=1) - - self.rebnconv6 = REBNCONV(mid_ch,mid_ch,dirate=2) - - self.rebnconv5d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv4d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv3d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv2d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv1d = REBNCONV(mid_ch*2,out_ch,dirate=1) - - def forward(self,x): - - hx = x - - hxin = self.rebnconvin(hx) - - hx1 = self.rebnconv1(hxin) - hx = self.pool1(hx1) - - hx2 = self.rebnconv2(hx) - hx = self.pool2(hx2) - - hx3 = self.rebnconv3(hx) - hx = self.pool3(hx3) - - hx4 = self.rebnconv4(hx) - hx = self.pool4(hx4) - - hx5 = self.rebnconv5(hx) - - hx6 = self.rebnconv6(hx5) - - - hx5d = 
self.rebnconv5d(torch.cat((hx6,hx5),1)) - hx5dup = _upsample_like(hx5d,hx4) - - hx4d = self.rebnconv4d(torch.cat((hx5dup,hx4),1)) - hx4dup = _upsample_like(hx4d,hx3) - - hx3d = self.rebnconv3d(torch.cat((hx4dup,hx3),1)) - hx3dup = _upsample_like(hx3d,hx2) - - hx2d = self.rebnconv2d(torch.cat((hx3dup,hx2),1)) - hx2dup = _upsample_like(hx2d,hx1) - - hx1d = self.rebnconv1d(torch.cat((hx2dup,hx1),1)) - - return hx1d + hxin - -### RSU-5 ### -class RSU5(nn.Module): - - def __init__(self, in_ch=3, mid_ch=12, out_ch=3): - super(RSU5,self).__init__() - - self.rebnconvin = REBNCONV(in_ch,out_ch,dirate=1) - - self.rebnconv1 = REBNCONV(out_ch,mid_ch,dirate=1) - self.pool1 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv2 = REBNCONV(mid_ch,mid_ch,dirate=1) - self.pool2 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv3 = REBNCONV(mid_ch,mid_ch,dirate=1) - self.pool3 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv4 = REBNCONV(mid_ch,mid_ch,dirate=1) - - self.rebnconv5 = REBNCONV(mid_ch,mid_ch,dirate=2) - - self.rebnconv4d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv3d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv2d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv1d = REBNCONV(mid_ch*2,out_ch,dirate=1) - - def forward(self,x): - - hx = x - - hxin = self.rebnconvin(hx) - - hx1 = self.rebnconv1(hxin) - hx = self.pool1(hx1) - - hx2 = self.rebnconv2(hx) - hx = self.pool2(hx2) - - hx3 = self.rebnconv3(hx) - hx = self.pool3(hx3) - - hx4 = self.rebnconv4(hx) - - hx5 = self.rebnconv5(hx4) - - hx4d = self.rebnconv4d(torch.cat((hx5,hx4),1)) - hx4dup = _upsample_like(hx4d,hx3) - - hx3d = self.rebnconv3d(torch.cat((hx4dup,hx3),1)) - hx3dup = _upsample_like(hx3d,hx2) - - hx2d = self.rebnconv2d(torch.cat((hx3dup,hx2),1)) - hx2dup = _upsample_like(hx2d,hx1) - - hx1d = self.rebnconv1d(torch.cat((hx2dup,hx1),1)) - - return hx1d + hxin - -### RSU-4 ### -class RSU4(nn.Module): - - def __init__(self, in_ch=3, mid_ch=12, out_ch=3): - super(RSU4,self).__init__() - - self.rebnconvin = REBNCONV(in_ch,out_ch,dirate=1) - - self.rebnconv1 = REBNCONV(out_ch,mid_ch,dirate=1) - self.pool1 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv2 = REBNCONV(mid_ch,mid_ch,dirate=1) - self.pool2 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.rebnconv3 = REBNCONV(mid_ch,mid_ch,dirate=1) - - self.rebnconv4 = REBNCONV(mid_ch,mid_ch,dirate=2) - - self.rebnconv3d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv2d = REBNCONV(mid_ch*2,mid_ch,dirate=1) - self.rebnconv1d = REBNCONV(mid_ch*2,out_ch,dirate=1) - - def forward(self,x): - - hx = x - - hxin = self.rebnconvin(hx) - - hx1 = self.rebnconv1(hxin) - hx = self.pool1(hx1) - - hx2 = self.rebnconv2(hx) - hx = self.pool2(hx2) - - hx3 = self.rebnconv3(hx) - - hx4 = self.rebnconv4(hx3) - - hx3d = self.rebnconv3d(torch.cat((hx4,hx3),1)) - hx3dup = _upsample_like(hx3d,hx2) - - hx2d = self.rebnconv2d(torch.cat((hx3dup,hx2),1)) - hx2dup = _upsample_like(hx2d,hx1) - - hx1d = self.rebnconv1d(torch.cat((hx2dup,hx1),1)) - - return hx1d + hxin - -### RSU-4F ### -class RSU4F(nn.Module): - - def __init__(self, in_ch=3, mid_ch=12, out_ch=3): - super(RSU4F,self).__init__() - - self.rebnconvin = REBNCONV(in_ch,out_ch,dirate=1) - - self.rebnconv1 = REBNCONV(out_ch,mid_ch,dirate=1) - self.rebnconv2 = REBNCONV(mid_ch,mid_ch,dirate=2) - self.rebnconv3 = REBNCONV(mid_ch,mid_ch,dirate=4) - - self.rebnconv4 = REBNCONV(mid_ch,mid_ch,dirate=8) - - self.rebnconv3d = REBNCONV(mid_ch*2,mid_ch,dirate=4) - self.rebnconv2d = REBNCONV(mid_ch*2,mid_ch,dirate=2) - 
self.rebnconv1d = REBNCONV(mid_ch*2,out_ch,dirate=1) - - def forward(self,x): - - hx = x - - hxin = self.rebnconvin(hx) - - hx1 = self.rebnconv1(hxin) - hx2 = self.rebnconv2(hx1) - hx3 = self.rebnconv3(hx2) - - hx4 = self.rebnconv4(hx3) - - hx3d = self.rebnconv3d(torch.cat((hx4,hx3),1)) - hx2d = self.rebnconv2d(torch.cat((hx3d,hx2),1)) - hx1d = self.rebnconv1d(torch.cat((hx2d,hx1),1)) - - return hx1d + hxin - - -class myrebnconv(nn.Module): - def __init__(self, in_ch=3, - out_ch=1, - kernel_size=3, - stride=1, - padding=1, - dilation=1, - groups=1): - super(myrebnconv,self).__init__() - - self.conv = nn.Conv2d(in_ch, - out_ch, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups) - self.bn = nn.BatchNorm2d(out_ch) - self.rl = nn.ReLU(inplace=True) - - def forward(self,x): - return self.rl(self.bn(self.conv(x))) - - -class ISNetGTEncoder(nn.Module): - - def __init__(self,in_ch=1,out_ch=1): - super(ISNetGTEncoder,self).__init__() - - self.conv_in = myrebnconv(in_ch,16,3,stride=2,padding=1) # nn.Conv2d(in_ch,64,3,stride=2,padding=1) - - self.stage1 = RSU7(16,16,64) - self.pool12 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.stage2 = RSU6(64,16,64) - self.pool23 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.stage3 = RSU5(64,32,128) - self.pool34 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.stage4 = RSU4(128,32,256) - self.pool45 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.stage5 = RSU4F(256,64,512) - self.pool56 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.stage6 = RSU4F(512,64,512) - - - self.side1 = nn.Conv2d(64,out_ch,3,padding=1) - self.side2 = nn.Conv2d(64,out_ch,3,padding=1) - self.side3 = nn.Conv2d(128,out_ch,3,padding=1) - self.side4 = nn.Conv2d(256,out_ch,3,padding=1) - self.side5 = nn.Conv2d(512,out_ch,3,padding=1) - self.side6 = nn.Conv2d(512,out_ch,3,padding=1) - - def compute_loss(self, preds, targets): - - return muti_loss_fusion(preds,targets) - - def forward(self,x): - - hx = x - - hxin = self.conv_in(hx) - # hx = self.pool_in(hxin) - - #stage 1 - hx1 = self.stage1(hxin) - hx = self.pool12(hx1) - - #stage 2 - hx2 = self.stage2(hx) - hx = self.pool23(hx2) - - #stage 3 - hx3 = self.stage3(hx) - hx = self.pool34(hx3) - - #stage 4 - hx4 = self.stage4(hx) - hx = self.pool45(hx4) - - #stage 5 - hx5 = self.stage5(hx) - hx = self.pool56(hx5) - - #stage 6 - hx6 = self.stage6(hx) - - - #side output - d1 = self.side1(hx1) - d1 = _upsample_like(d1,x) - - d2 = self.side2(hx2) - d2 = _upsample_like(d2,x) - - d3 = self.side3(hx3) - d3 = _upsample_like(d3,x) - - d4 = self.side4(hx4) - d4 = _upsample_like(d4,x) - - d5 = self.side5(hx5) - d5 = _upsample_like(d5,x) - - d6 = self.side6(hx6) - d6 = _upsample_like(d6,x) - - # d0 = self.outconv(torch.cat((d1,d2,d3,d4,d5,d6),1)) - - return [F.sigmoid(d1), F.sigmoid(d2), F.sigmoid(d3), F.sigmoid(d4), F.sigmoid(d5), F.sigmoid(d6)], [hx1,hx2,hx3,hx4,hx5,hx6] - -class ISNetDIS(nn.Module): - - def __init__(self,in_ch=3,out_ch=1): - super(ISNetDIS,self).__init__() - - self.conv_in = nn.Conv2d(in_ch,64,3,stride=2,padding=1) - self.pool_in = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.stage1 = RSU7(64,32,64) - self.pool12 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.stage2 = RSU6(64,32,128) - self.pool23 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.stage3 = RSU5(128,64,256) - self.pool34 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.stage4 = RSU4(256,128,512) - self.pool45 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.stage5 = RSU4F(512,256,512) - 
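        # stages 5 and 6 use RSU4F blocks, which keep the spatial resolution fixed and
        # rely on dilated convolutions instead of further pooling inside the block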
self.pool56 = nn.MaxPool2d(2,stride=2,ceil_mode=True) - - self.stage6 = RSU4F(512,256,512) - - # decoder - self.stage5d = RSU4F(1024,256,512) - self.stage4d = RSU4(1024,128,256) - self.stage3d = RSU5(512,64,128) - self.stage2d = RSU6(256,32,64) - self.stage1d = RSU7(128,16,64) - - self.side1 = nn.Conv2d(64,out_ch,3,padding=1) - self.side2 = nn.Conv2d(64,out_ch,3,padding=1) - self.side3 = nn.Conv2d(128,out_ch,3,padding=1) - self.side4 = nn.Conv2d(256,out_ch,3,padding=1) - self.side5 = nn.Conv2d(512,out_ch,3,padding=1) - self.side6 = nn.Conv2d(512,out_ch,3,padding=1) - - # self.outconv = nn.Conv2d(6*out_ch,out_ch,1) - - def compute_loss_kl(self, preds, targets, dfs, fs, mode='MSE'): - - # return muti_loss_fusion(preds,targets) - return muti_loss_fusion_kl(preds, targets, dfs, fs, mode=mode) - - def compute_loss(self, preds, targets): - - # return muti_loss_fusion(preds,targets) - return muti_loss_fusion(preds, targets) - - def forward(self,x): - - hx = x - - hxin = self.conv_in(hx) - #hx = self.pool_in(hxin) - - #stage 1 - hx1 = self.stage1(hxin) - hx = self.pool12(hx1) - - #stage 2 - hx2 = self.stage2(hx) - hx = self.pool23(hx2) - - #stage 3 - hx3 = self.stage3(hx) - hx = self.pool34(hx3) - - #stage 4 - hx4 = self.stage4(hx) - hx = self.pool45(hx4) - - #stage 5 - hx5 = self.stage5(hx) - hx = self.pool56(hx5) - - #stage 6 - hx6 = self.stage6(hx) - hx6up = _upsample_like(hx6,hx5) - - #-------------------- decoder -------------------- - hx5d = self.stage5d(torch.cat((hx6up,hx5),1)) - hx5dup = _upsample_like(hx5d,hx4) - - hx4d = self.stage4d(torch.cat((hx5dup,hx4),1)) - hx4dup = _upsample_like(hx4d,hx3) - - hx3d = self.stage3d(torch.cat((hx4dup,hx3),1)) - hx3dup = _upsample_like(hx3d,hx2) - - hx2d = self.stage2d(torch.cat((hx3dup,hx2),1)) - hx2dup = _upsample_like(hx2d,hx1) - - hx1d = self.stage1d(torch.cat((hx2dup,hx1),1)) - - - #side output - d1 = self.side1(hx1d) - d1 = _upsample_like(d1,x) - - d2 = self.side2(hx2d) - d2 = _upsample_like(d2,x) - - d3 = self.side3(hx3d) - d3 = _upsample_like(d3,x) - - d4 = self.side4(hx4d) - d4 = _upsample_like(d4,x) - - d5 = self.side5(hx5d) - d5 = _upsample_like(d5,x) - - d6 = self.side6(hx6) - d6 = _upsample_like(d6,x) - - # d0 = self.outconv(torch.cat((d1,d2,d3,d4,d5,d6),1)) - - return [F.sigmoid(d1), F.sigmoid(d2), F.sigmoid(d3), F.sigmoid(d4), F.sigmoid(d5), F.sigmoid(d6)],[hx1d,hx2d,hx3d,hx4d,hx5d,hx6] diff --git a/spaces/Tune-A-Video-library/Tune-A-Video-inference/Dockerfile b/spaces/Tune-A-Video-library/Tune-A-Video-inference/Dockerfile deleted file mode 100644 index e8711cb816d416037617dc0b72b33d866790c3d4..0000000000000000000000000000000000000000 --- a/spaces/Tune-A-Video-library/Tune-A-Video-inference/Dockerfile +++ /dev/null @@ -1,57 +0,0 @@ -FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04 -ENV DEBIAN_FRONTEND=noninteractive -RUN apt-get update && \ - apt-get upgrade -y && \ - apt-get install -y --no-install-recommends \ - git \ - git-lfs \ - wget \ - curl \ - # ffmpeg \ - ffmpeg \ - x264 \ - # python build dependencies \ - build-essential \ - libssl-dev \ - zlib1g-dev \ - libbz2-dev \ - libreadline-dev \ - libsqlite3-dev \ - libncursesw5-dev \ - xz-utils \ - tk-dev \ - libxml2-dev \ - libxmlsec1-dev \ - libffi-dev \ - liblzma-dev && \ - apt-get clean && \ - rm -rf /var/lib/apt/lists/* - -RUN useradd -m -u 1000 user -USER user -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:${PATH} -WORKDIR ${HOME}/app - -RUN curl https://pyenv.run | bash -ENV PATH=${HOME}/.pyenv/shims:${HOME}/.pyenv/bin:${PATH} -ENV PYTHON_VERSION=3.10.9 -RUN pyenv install 
${PYTHON_VERSION} && \ - pyenv global ${PYTHON_VERSION} && \ - pyenv rehash && \ - pip install --no-cache-dir -U pip setuptools wheel - -RUN pip install --no-cache-dir -U torch==1.13.1 torchvision==0.14.1 -COPY --chown=1000 requirements.txt /tmp/requirements.txt -RUN pip install --no-cache-dir -U -r /tmp/requirements.txt - -COPY --chown=1000 . ${HOME}/app -RUN cd Tune-A-Video && patch -p1 < ../patch -ENV PYTHONPATH=${HOME}/app \ - PYTHONUNBUFFERED=1 \ - GRADIO_ALLOW_FLAGGING=never \ - GRADIO_NUM_PORTS=1 \ - GRADIO_SERVER_NAME=0.0.0.0 \ - GRADIO_THEME=huggingface \ - SYSTEM=spaces -CMD ["python", "app.py"] diff --git a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/processors/randaugment.py b/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/processors/randaugment.py deleted file mode 100644 index 7034a49ad5fc63b97910790017432617ff4c6d7b..0000000000000000000000000000000000000000 --- a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/processors/randaugment.py +++ /dev/null @@ -1,398 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import cv2 -import numpy as np - -import torch - - -## aug functions -def identity_func(img): - return img - - -def autocontrast_func(img, cutoff=0): - """ - same output as PIL.ImageOps.autocontrast - """ - n_bins = 256 - - def tune_channel(ch): - n = ch.size - cut = cutoff * n // 100 - if cut == 0: - high, low = ch.max(), ch.min() - else: - hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins]) - low = np.argwhere(np.cumsum(hist) > cut) - low = 0 if low.shape[0] == 0 else low[0] - high = np.argwhere(np.cumsum(hist[::-1]) > cut) - high = n_bins - 1 if high.shape[0] == 0 else n_bins - 1 - high[0] - if high <= low: - table = np.arange(n_bins) - else: - scale = (n_bins - 1) / (high - low) - offset = -low * scale - table = np.arange(n_bins) * scale + offset - table[table < 0] = 0 - table[table > n_bins - 1] = n_bins - 1 - table = table.clip(0, 255).astype(np.uint8) - return table[ch] - - channels = [tune_channel(ch) for ch in cv2.split(img)] - out = cv2.merge(channels) - return out - - -def equalize_func(img): - """ - same output as PIL.ImageOps.equalize - PIL's implementation is different from cv2.equalize - """ - n_bins = 256 - - def tune_channel(ch): - hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins]) - non_zero_hist = hist[hist != 0].reshape(-1) - step = np.sum(non_zero_hist[:-1]) // (n_bins - 1) - if step == 0: - return ch - n = np.empty_like(hist) - n[0] = step // 2 - n[1:] = hist[:-1] - table = (np.cumsum(n) // step).clip(0, 255).astype(np.uint8) - return table[ch] - - channels = [tune_channel(ch) for ch in cv2.split(img)] - out = cv2.merge(channels) - return out - - -def rotate_func(img, degree, fill=(0, 0, 0)): - """ - like PIL, rotate by degree, not radians - """ - H, W = img.shape[0], img.shape[1] - center = W / 2, H / 2 - M = cv2.getRotationMatrix2D(center, degree, 1) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill) - return out - - -def solarize_func(img, thresh=128): - """ - same output as PIL.ImageOps.posterize - """ - table = np.array([el if el < thresh else 255 - el for el in range(256)]) - table = table.clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def color_func(img, factor): - """ - same output as PIL.ImageEnhance.Color - """ - ## implementation according to PIL definition, quite slow - # degenerate = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)[:, 
:, np.newaxis] - # out = blend(degenerate, img, factor) - # M = ( - # np.eye(3) * factor - # + np.float32([0.114, 0.587, 0.299]).reshape(3, 1) * (1. - factor) - # )[np.newaxis, np.newaxis, :] - M = np.float32( - [[0.886, -0.114, -0.114], [-0.587, 0.413, -0.587], [-0.299, -0.299, 0.701]] - ) * factor + np.float32([[0.114], [0.587], [0.299]]) - out = np.matmul(img, M).clip(0, 255).astype(np.uint8) - return out - - -def contrast_func(img, factor): - """ - same output as PIL.ImageEnhance.Contrast - """ - mean = np.sum(np.mean(img, axis=(0, 1)) * np.array([0.114, 0.587, 0.299])) - table = ( - np.array([(el - mean) * factor + mean for el in range(256)]) - .clip(0, 255) - .astype(np.uint8) - ) - out = table[img] - return out - - -def brightness_func(img, factor): - """ - same output as PIL.ImageEnhance.Contrast - """ - table = (np.arange(256, dtype=np.float32) * factor).clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def sharpness_func(img, factor): - """ - The differences the this result and PIL are all on the 4 boundaries, the center - areas are same - """ - kernel = np.ones((3, 3), dtype=np.float32) - kernel[1][1] = 5 - kernel /= 13 - degenerate = cv2.filter2D(img, -1, kernel) - if factor == 0.0: - out = degenerate - elif factor == 1.0: - out = img - else: - out = img.astype(np.float32) - degenerate = degenerate.astype(np.float32)[1:-1, 1:-1, :] - out[1:-1, 1:-1, :] = degenerate + factor * (out[1:-1, 1:-1, :] - degenerate) - out = out.astype(np.uint8) - return out - - -def shear_x_func(img, factor, fill=(0, 0, 0)): - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, factor, 0], [0, 1, 0]]) - out = cv2.warpAffine( - img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR - ).astype(np.uint8) - return out - - -def translate_x_func(img, offset, fill=(0, 0, 0)): - """ - same output as PIL.Image.transform - """ - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, -offset], [0, 1, 0]]) - out = cv2.warpAffine( - img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR - ).astype(np.uint8) - return out - - -def translate_y_func(img, offset, fill=(0, 0, 0)): - """ - same output as PIL.Image.transform - """ - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, 0], [0, 1, -offset]]) - out = cv2.warpAffine( - img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR - ).astype(np.uint8) - return out - - -def posterize_func(img, bits): - """ - same output as PIL.ImageOps.posterize - """ - out = np.bitwise_and(img, np.uint8(255 << (8 - bits))) - return out - - -def shear_y_func(img, factor, fill=(0, 0, 0)): - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, 0], [factor, 1, 0]]) - out = cv2.warpAffine( - img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR - ).astype(np.uint8) - return out - - -def cutout_func(img, pad_size, replace=(0, 0, 0)): - replace = np.array(replace, dtype=np.uint8) - H, W = img.shape[0], img.shape[1] - rh, rw = np.random.random(2) - pad_size = pad_size // 2 - ch, cw = int(rh * H), int(rw * W) - x1, x2 = max(ch - pad_size, 0), min(ch + pad_size, H) - y1, y2 = max(cw - pad_size, 0), min(cw + pad_size, W) - out = img.copy() - out[x1:x2, y1:y2, :] = replace - return out - - -### level to args -def enhance_level_to_args(MAX_LEVEL): - def level_to_args(level): - return ((level / MAX_LEVEL) * 1.8 + 0.1,) - - return level_to_args - - -def shear_level_to_args(MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * 0.3 - if np.random.random() > 0.5: - level = -level - return (level, replace_value) - - return 
level_to_args - - -def translate_level_to_args(translate_const, MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * float(translate_const) - if np.random.random() > 0.5: - level = -level - return (level, replace_value) - - return level_to_args - - -def cutout_level_to_args(cutout_const, MAX_LEVEL, replace_value): - def level_to_args(level): - level = int((level / MAX_LEVEL) * cutout_const) - return (level, replace_value) - - return level_to_args - - -def solarize_level_to_args(MAX_LEVEL): - def level_to_args(level): - level = int((level / MAX_LEVEL) * 256) - return (level,) - - return level_to_args - - -def none_level_to_args(level): - return () - - -def posterize_level_to_args(MAX_LEVEL): - def level_to_args(level): - level = int((level / MAX_LEVEL) * 4) - return (level,) - - return level_to_args - - -def rotate_level_to_args(MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * 30 - if np.random.random() < 0.5: - level = -level - return (level, replace_value) - - return level_to_args - - -func_dict = { - "Identity": identity_func, - "AutoContrast": autocontrast_func, - "Equalize": equalize_func, - "Rotate": rotate_func, - "Solarize": solarize_func, - "Color": color_func, - "Contrast": contrast_func, - "Brightness": brightness_func, - "Sharpness": sharpness_func, - "ShearX": shear_x_func, - "TranslateX": translate_x_func, - "TranslateY": translate_y_func, - "Posterize": posterize_func, - "ShearY": shear_y_func, -} - -translate_const = 10 -MAX_LEVEL = 10 -replace_value = (128, 128, 128) -arg_dict = { - "Identity": none_level_to_args, - "AutoContrast": none_level_to_args, - "Equalize": none_level_to_args, - "Rotate": rotate_level_to_args(MAX_LEVEL, replace_value), - "Solarize": solarize_level_to_args(MAX_LEVEL), - "Color": enhance_level_to_args(MAX_LEVEL), - "Contrast": enhance_level_to_args(MAX_LEVEL), - "Brightness": enhance_level_to_args(MAX_LEVEL), - "Sharpness": enhance_level_to_args(MAX_LEVEL), - "ShearX": shear_level_to_args(MAX_LEVEL, replace_value), - "TranslateX": translate_level_to_args(translate_const, MAX_LEVEL, replace_value), - "TranslateY": translate_level_to_args(translate_const, MAX_LEVEL, replace_value), - "Posterize": posterize_level_to_args(MAX_LEVEL), - "ShearY": shear_level_to_args(MAX_LEVEL, replace_value), -} - - -class RandomAugment(object): - def __init__(self, N=2, M=10, isPIL=False, augs=[]): - self.N = N - self.M = M - self.isPIL = isPIL - if augs: - self.augs = augs - else: - self.augs = list(arg_dict.keys()) - - def get_random_ops(self): - sampled_ops = np.random.choice(self.augs, self.N) - return [(op, 0.5, self.M) for op in sampled_ops] - - def __call__(self, img): - if self.isPIL: - img = np.array(img) - ops = self.get_random_ops() - for name, prob, level in ops: - if np.random.random() > prob: - continue - args = arg_dict[name](level) - img = func_dict[name](img, *args) - return img - - -class VideoRandomAugment(object): - def __init__(self, N=2, M=10, p=0.0, tensor_in_tensor_out=True, augs=[]): - self.N = N - self.M = M - self.p = p - self.tensor_in_tensor_out = tensor_in_tensor_out - if augs: - self.augs = augs - else: - self.augs = list(arg_dict.keys()) - - def get_random_ops(self): - sampled_ops = np.random.choice(self.augs, self.N, replace=False) - return [(op, self.M) for op in sampled_ops] - - def __call__(self, frames): - assert ( - frames.shape[-1] == 3 - ), "Expecting last dimension for 3-channels RGB (b, h, w, c)." 
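        # the op list and the keep/skip mask are sampled once and shared by every frame,
        # so the augmentation is temporally consistent; each sampled op is kept with
        # probability 1 - p before the frames are stacked back into a float tensor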
- - if self.tensor_in_tensor_out: - frames = frames.numpy().astype(np.uint8) - - num_frames = frames.shape[0] - - ops = num_frames * [self.get_random_ops()] - apply_or_not = num_frames * [np.random.random(size=self.N) > self.p] - - frames = torch.stack( - list(map(self._aug, frames, ops, apply_or_not)), dim=0 - ).float() - - return frames - - def _aug(self, img, ops, apply_or_not): - for i, (name, level) in enumerate(ops): - if not apply_or_not[i]: - continue - args = arg_dict[name](level) - img = func_dict[name](img, *args) - return torch.from_numpy(img) - - -if __name__ == "__main__": - a = RandomAugment() - img = np.random.randn(32, 32, 3) - a(img) diff --git a/spaces/VoiceHero69/changer/webui/extensionlib/callbacks.py b/spaces/VoiceHero69/changer/webui/extensionlib/callbacks.py deleted file mode 100644 index 9ca04d6422894f69eca912b0ea8e9501a8fab5ae..0000000000000000000000000000000000000000 --- a/spaces/VoiceHero69/changer/webui/extensionlib/callbacks.py +++ /dev/null @@ -1,23 +0,0 @@ -class CallBack: - def __init__(self, priority, value): - self.priority = priority - self.callback = value - - def call(self, *args, **kwargs): - self.callback(*args, **kwargs) - - -class CallBackManager: - def __init__(self, name): - self.name = name - self.callbacks: list[CallBack] = [] - - -callbacks: list[CallBackManager] = [] - - -def by_name(name): - matches = [callback for callback in callbacks if callback.name.casefold() == name.casefold()] - if len(matches) == 0: - return None - return matches[0] diff --git a/spaces/Whatcoldwind/csgo_investment/README.md b/spaces/Whatcoldwind/csgo_investment/README.md deleted file mode 100644 index 7ff6fcd427999166a73a891d353d4a0e78daea7f..0000000000000000000000000000000000000000 --- a/spaces/Whatcoldwind/csgo_investment/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Csgo Investment -emoji: 🚀 -colorFrom: red -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- \ No newline at end of file diff --git a/spaces/XingHe0127/Chatbot/modules/pdf_func.py b/spaces/XingHe0127/Chatbot/modules/pdf_func.py deleted file mode 100644 index 0aba6b7b891fc527c79b887256b0cbaa81ae5b3d..0000000000000000000000000000000000000000 --- a/spaces/XingHe0127/Chatbot/modules/pdf_func.py +++ /dev/null @@ -1,180 +0,0 @@ -from types import SimpleNamespace -import pdfplumber -import logging -from llama_index import Document - -def prepare_table_config(crop_page): - """Prepare table查找边界, 要求page为原始page - - From https://github.com/jsvine/pdfplumber/issues/242 - """ - page = crop_page.root_page # root/parent - cs = page.curves + page.edges - def curves_to_edges(): - """See https://github.com/jsvine/pdfplumber/issues/127""" - edges = [] - for c in cs: - edges += pdfplumber.utils.rect_to_edges(c) - return edges - edges = curves_to_edges() - return { - "vertical_strategy": "explicit", - "horizontal_strategy": "explicit", - "explicit_vertical_lines": edges, - "explicit_horizontal_lines": edges, - "intersection_y_tolerance": 10, - } - -def get_text_outside_table(crop_page): - ts = prepare_table_config(crop_page) - if len(ts["explicit_vertical_lines"]) == 0 or len(ts["explicit_horizontal_lines"]) == 0: - return crop_page - - ### Get the bounding boxes of the tables on the page. 
- bboxes = [table.bbox for table in crop_page.root_page.find_tables(table_settings=ts)] - def not_within_bboxes(obj): - """Check if the object is in any of the table's bbox.""" - def obj_in_bbox(_bbox): - """See https://github.com/jsvine/pdfplumber/blob/stable/pdfplumber/table.py#L404""" - v_mid = (obj["top"] + obj["bottom"]) / 2 - h_mid = (obj["x0"] + obj["x1"]) / 2 - x0, top, x1, bottom = _bbox - return (h_mid >= x0) and (h_mid < x1) and (v_mid >= top) and (v_mid < bottom) - return not any(obj_in_bbox(__bbox) for __bbox in bboxes) - - return crop_page.filter(not_within_bboxes) -# 请使用 LaTeX 表达公式,行内公式以 $ 包裹,行间公式以 $$ 包裹 - -extract_words = lambda page: page.extract_words(keep_blank_chars=True, y_tolerance=0, x_tolerance=1, extra_attrs=["fontname", "size", "object_type"]) -# dict_keys(['text', 'x0', 'x1', 'top', 'doctop', 'bottom', 'upright', 'direction', 'fontname', 'size']) - -def get_title_with_cropped_page(first_page): - title = [] # 处理标题 - x0,top,x1,bottom = first_page.bbox # 获取页面边框 - - for word in extract_words(first_page): - word = SimpleNamespace(**word) - - if word.size >= 14: - title.append(word.text) - title_bottom = word.bottom - elif word.text == "Abstract": # 获取页面abstract - top = word.top - - user_info = [i["text"] for i in extract_words(first_page.within_bbox((x0,title_bottom,x1,top)))] - # 裁剪掉上半部分, within_bbox: full_included; crop: partial_included - return title, user_info, first_page.within_bbox((x0,top,x1,bottom)) - -def get_column_cropped_pages(pages, two_column=True): - new_pages = [] - for page in pages: - if two_column: - left = page.within_bbox((0, 0, page.width/2, page.height),relative=True) - right = page.within_bbox((page.width/2, 0, page.width, page.height), relative=True) - new_pages.append(left) - new_pages.append(right) - else: - new_pages.append(page) - - return new_pages - -def parse_pdf(filename, two_column = True): - level = logging.getLogger().level - if level == logging.getLevelName("DEBUG"): - logging.getLogger().setLevel("INFO") - - with pdfplumber.open(filename) as pdf: - title, user_info, first_page = get_title_with_cropped_page(pdf.pages[0]) - new_pages = get_column_cropped_pages([first_page] + pdf.pages[1:], two_column) - - chapters = [] - # tuple (chapter_name, [pageid] (start,stop), chapter_text) - create_chapter = lambda page_start,name_top,name_bottom: SimpleNamespace( - name=[], - name_top=name_top, - name_bottom=name_bottom, - record_chapter_name = True, - - page_start=page_start, - page_stop=None, - - text=[], - ) - cur_chapter = None - - # 按页遍历PDF文档 - for idx, page in enumerate(new_pages): - page = get_text_outside_table(page) - - # 按行遍历页面文本 - for word in extract_words(page): - word = SimpleNamespace(**word) - - # 检查行文本是否以12号字体打印,如果是,则将其作为新章节开始 - if word.size >= 11: # 出现chapter name - if cur_chapter is None: - cur_chapter = create_chapter(page.page_number, word.top, word.bottom) - elif not cur_chapter.record_chapter_name or (cur_chapter.name_bottom != cur_chapter.name_bottom and cur_chapter.name_top != cur_chapter.name_top): - # 不再继续写chapter name - cur_chapter.page_stop = page.page_number # stop id - chapters.append(cur_chapter) - # 重置当前chapter信息 - cur_chapter = create_chapter(page.page_number, word.top, word.bottom) - - # print(word.size, word.top, word.bottom, word.text) - cur_chapter.name.append(word.text) - else: - cur_chapter.record_chapter_name = False # chapter name 结束 - cur_chapter.text.append(word.text) - else: - # 处理最后一个章节 - cur_chapter.page_stop = page.page_number # stop id - chapters.append(cur_chapter) - - for i in chapters: - 
logging.info(f"section: {i.name} pages:{i.page_start, i.page_stop} word-count:{len(i.text)}") - logging.debug(" ".join(i.text)) - - title = " ".join(title) - user_info = " ".join(user_info) - text = f"Article Title: {title}, Information:{user_info}\n" - for idx, chapter in enumerate(chapters): - chapter.name = " ".join(chapter.name) - text += f"The {idx}th Chapter {chapter.name}: " + " ".join(chapter.text) + "\n" - - logging.getLogger().setLevel(level) - return Document(text=text, extra_info={"title": title}) - -BASE_POINTS = """ -1. Who are the authors? -2. What is the process of the proposed method? -3. What is the performance of the proposed method? Please note down its performance metrics. -4. What are the baseline models and their performances? Please note down these baseline methods. -5. What dataset did this paper use? -""" - -READING_PROMPT = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{} -""" - -READING_PROMT_V2 = """ -You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n -Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n -When you are reading, You need to focus on these key points:{}, - -And You need to generate a brief but informative title for this part. -Your return format: -- title: '...' -- summary: '...' -""" - -SUMMARY_PROMPT = "You are a researcher helper bot. Now you need to read the summaries of a research paper." - - -if __name__ == '__main__': - # Test code - z = parse_pdf("./build/test.pdf") - print(z["user_info"]) - print(z["title"]) \ No newline at end of file diff --git a/spaces/XzJosh/LittleTaffy-Bert-VITS2/data_utils.py b/spaces/XzJosh/LittleTaffy-Bert-VITS2/data_utils.py deleted file mode 100644 index be3a29a93188c5b3386f22e5db29e5e96d78109a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/LittleTaffy-Bert-VITS2/data_utils.py +++ /dev/null @@ -1,321 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import commons -from mel_processing import spectrogram_torch, mel_spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import cleaned_text_to_sequence, get_bert - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.spk_map = hparams.spk2id - self.hparams = hparams - - self.use_mel_spec_posterior = getattr(hparams, "use_mel_posterior_encoder", False) - if self.use_mel_spec_posterior: - self.n_mel_channels = getattr(hparams, "n_mel_channels", 80) - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 300) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - skipped = 0 - for _id, spk, language, text, phones, tone, word2ph in self.audiopaths_sid_text: - audiopath = f'{_id}' - if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len: - phones = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - audiopaths_sid_text_new.append([audiopath, spk, language, text, phones, tone, word2ph]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - else: - skipped += 1 - print("skipped: ", skipped, ", total: ", len(self.audiopaths_sid_text)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text - - bert, phones, tone, language = self.get_text(text, word2ph, phones, tone, language, audiopath) - - spec, wav = self.get_audio(audiopath) - sid = torch.LongTensor([int(self.spk_map[sid])]) - return (phones, spec, wav, sid, tone, language, bert) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if self.use_mel_spec_posterior: - spec_filename = spec_filename.replace(".spec.pt", ".mel.pt") - try: - spec = torch.load(spec_filename) - except: - if self.use_mel_spec_posterior: - spec = mel_spectrogram_torch(audio_norm, self.filter_length, - self.n_mel_channels, self.sampling_rate, self.hop_length, - self.win_length, self.hparams.mel_fmin, self.hparams.mel_fmax, center=False) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text, word2ph, phone, tone, language_str, wav_path): - pold = phone - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - pold2 = phone - - if 
self.add_blank: - p1 = len(phone) - phone = commons.intersperse(phone, 0) - p2 = len(phone) - t1 = len(tone) - tone = commons.intersperse(tone, 0) - t2 = len(tone) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - torch.save(bert, bert_path) - #print(bert.shape[-1], bert_path, text, pold) - assert bert.shape[-1] == len(phone) - - assert bert.shape[-1] == len(phone), ( - bert.shape, len(phone), sum(word2ph), p1, p2, t1, t2, pold, pold2, word2ph, text, w2pho) - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, phone, tone, language - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - tone_padded = torch.LongTensor(len(batch), max_text_len) - language_padded = torch.LongTensor(len(batch), max_text_len) - bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len) - - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - language_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - bert_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - tone = row[4] - tone_padded[i, :tone.size(0)] = tone - - language = row[5] - language_padded[i, :language.size(0)] = language - - bert = row[6] - bert_padded[i, :, :bert.size(1)] = bert - - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, tone_padded, language_padded, bert_padded - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. 
- - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if (len_bucket == 0): - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/XzJosh/LittleTaffy-Bert-VITS2/text/symbols.py b/spaces/XzJosh/LittleTaffy-Bert-VITS2/text/symbols.py deleted file mode 100644 index 9dfae4e633829f20c4fd767b1c7a9198911ed801..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/LittleTaffy-Bert-VITS2/text/symbols.py +++ /dev/null @@ -1,51 +0,0 @@ -punctuation = ['!', '?', '…', ",", ".", "'", '-'] -pu_symbols = punctuation + ["SP", "UNK"] -pad = '_' - -# chinese -zh_symbols = ['E', 'En', 'a', 'ai', 'an', 'ang', 'ao', 'b', 'c', 'ch', 'd', 'e', 'ei', 'en', 'eng', 'er', 'f', 'g', 'h', - 'i', 'i0', 
'ia', 'ian', 'iang', 'iao', 'ie', 'in', 'ing', 'iong', 'ir', 'iu', 'j', 'k', 'l', 'm', 'n', 'o', - 'ong', - 'ou', 'p', 'q', 'r', 's', 'sh', 't', 'u', 'ua', 'uai', 'uan', 'uang', 'ui', 'un', 'uo', 'v', 'van', 've', 'vn', - 'w', 'x', 'y', 'z', 'zh', - "AA", "EE", "OO"] -num_zh_tones = 6 - -# japanese -ja_symbols = ['I', 'N', 'U', 'a', 'b', 'by', 'ch', 'cl', 'd', 'dy', 'e', 'f', 'g', 'gy', 'h', 'hy', 'i', 'j', 'k', 'ky', - 'm', 'my', 'n', 'ny', 'o', 'p', 'py', 'r', 'ry', 's', 'sh', 't', 'ts', 'u', 'V', 'w', 'y', 'z'] -num_ja_tones = 1 - -# English -en_symbols = ['aa', 'ae', 'ah', 'ao', 'aw', 'ay', 'b', 'ch', 'd', 'dh', 'eh', 'er', 'ey', 'f', 'g', 'hh', 'ih', 'iy', - 'jh', 'k', 'l', 'm', 'n', 'ng', 'ow', 'oy', 'p', 'r', 's', - 'sh', 't', 'th', 'uh', 'uw', 'V', 'w', 'y', 'z', 'zh'] -num_en_tones = 4 - -# combine all symbols -normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols)) -symbols = [pad] + normal_symbols + pu_symbols -sil_phonemes_ids = [symbols.index(i) for i in pu_symbols] - -# combine all tones -num_tones = num_zh_tones + num_ja_tones + num_en_tones - -# language maps -language_id_map = { - 'ZH': 0, - "JA": 1, - "EN": 2 -} -num_languages = len(language_id_map.keys()) - -language_tone_start_map = { - 'ZH': 0, - "JA": num_zh_tones, - "EN": num_zh_tones + num_ja_tones -} - -if __name__ == '__main__': - a = set(zh_symbols) - b = set(en_symbols) - print(sorted(a&b)) - diff --git a/spaces/Yassine/Stego/sse_mathfun.h b/spaces/Yassine/Stego/sse_mathfun.h deleted file mode 100644 index 9773e2961ad1f5af5d07b02484fd7edc0f6a2a53..0000000000000000000000000000000000000000 --- a/spaces/Yassine/Stego/sse_mathfun.h +++ /dev/null @@ -1,762 +0,0 @@ -/* SIMD (SSE1+MMX or SSE2) implementation of sin, cos, exp and log - - Inspired by Intel Approximate Math library, and based on the - corresponding algorithms of the cephes math library - - The default is to use the SSE1 version. If you define USE_SSE2 the - the SSE2 intrinsics will be used in place of the MMX intrinsics. Do - not expect any significant performance improvement with SSE2. -*/ - -/* Copyright (C) 2007 Julien Pommier - - This software is provided 'as-is', without any express or implied - warranty. In no event will the authors be held liable for any damages - arising from the use of this software. - - Permission is granted to anyone to use this software for any purpose, - including commercial applications, and to alter it and redistribute it - freely, subject to the following restrictions: - - 1. The origin of this software must not be misrepresented; you must not - claim that you wrote the original software. If you use this software - in a product, an acknowledgment in the product documentation would be - appreciated but is not required. - 2. Altered source versions must be plainly marked as such, and must not be - misrepresented as being the original software. - 3. This notice may not be removed or altered from any source distribution. 
- - (this is the zlib license) -*/ - -#include - -/* yes I know, the top of this file is quite ugly */ - -#define USE_SSE2 // use SSE2 version - -#ifdef _MSC_VER /* visual c++ */ -# define ALIGN16_BEG __declspec(align(16)) -# define ALIGN16_END -#else /* gcc or icc */ -# define ALIGN16_BEG -# define ALIGN16_END __attribute__((aligned(16))) -#endif - -/* __m128 is ugly to write */ -typedef __m128 v4sf; // vector of 4 float (sse1) - -#ifdef USE_SSE2 -# include -typedef __m128i v4si; // vector of 4 int (sse2) -#else -typedef __m64 v2si; // vector of 2 int (mmx) -#endif - -/* declare some SSE constants -- why can't I figure a better way to do that? */ -#define _PS_CONST(Name, Val) \ - static const ALIGN16_BEG float _ps_##Name[4] ALIGN16_END = { Val, Val, Val, Val } -#define _PI32_CONST(Name, Val) \ - static const ALIGN16_BEG int _pi32_##Name[4] ALIGN16_END = { Val, Val, Val, Val } -#define _PS_CONST_TYPE(Name, Type, Val) \ - static const ALIGN16_BEG Type _ps_##Name[4] ALIGN16_END = { Val, Val, Val, Val } - -_PS_CONST(1 , 1.0f); -_PS_CONST(0p5, 0.5f); -/* the smallest non denormalized float number */ -_PS_CONST_TYPE(min_norm_pos, int, 0x00800000); -_PS_CONST_TYPE(mant_mask, int, 0x7f800000); -_PS_CONST_TYPE(inv_mant_mask, int, ~0x7f800000); - -_PS_CONST_TYPE(sign_mask, int, 0x80000000); -_PS_CONST_TYPE(inv_sign_mask, int, ~0x80000000); - -_PI32_CONST(1, 1); -_PI32_CONST(inv1, ~1); -_PI32_CONST(2, 2); -_PI32_CONST(4, 4); -_PI32_CONST(0x7f, 0x7f); - -_PS_CONST(cephes_SQRTHF, 0.707106781186547524); -_PS_CONST(cephes_log_p0, 7.0376836292E-2); -_PS_CONST(cephes_log_p1, - 1.1514610310E-1); -_PS_CONST(cephes_log_p2, 1.1676998740E-1); -_PS_CONST(cephes_log_p3, - 1.2420140846E-1); -_PS_CONST(cephes_log_p4, + 1.4249322787E-1); -_PS_CONST(cephes_log_p5, - 1.6668057665E-1); -_PS_CONST(cephes_log_p6, + 2.0000714765E-1); -_PS_CONST(cephes_log_p7, - 2.4999993993E-1); -_PS_CONST(cephes_log_p8, + 3.3333331174E-1); -_PS_CONST(cephes_log_q1, -2.12194440e-4); -_PS_CONST(cephes_log_q2, 0.693359375); - -#if defined (__MINGW32__) - -/* the ugly part below: many versions of gcc used to be completely buggy with respect to some intrinsics - The movehl_ps is fixed in mingw 3.4.5, but I found out that all the _mm_cmp* intrinsics were completely - broken on my mingw gcc 3.4.5 ... - - Note that the bug on _mm_cmp* does occur only at -O0 optimization level -*/ - -inline __m128 my_movehl_ps(__m128 a, const __m128 b) { - asm ( - "movhlps %2,%0\n\t" - : "=x" (a) - : "0" (a), "x"(b) - ); - return a; } -#warning "redefined _mm_movehl_ps (see gcc bug 21179)" -#define _mm_movehl_ps my_movehl_ps - -inline __m128 my_cmplt_ps(__m128 a, const __m128 b) { - asm ( - "cmpltps %2,%0\n\t" - : "=x" (a) - : "0" (a), "x"(b) - ); - return a; - } -inline __m128 my_cmpgt_ps(__m128 a, const __m128 b) { - asm ( - "cmpnleps %2,%0\n\t" - : "=x" (a) - : "0" (a), "x"(b) - ); - return a; -} -inline __m128 my_cmpeq_ps(__m128 a, const __m128 b) { - asm ( - "cmpeqps %2,%0\n\t" - : "=x" (a) - : "0" (a), "x"(b) - ); - return a; -} -#warning "redefined _mm_cmpxx_ps functions..." 
-#define _mm_cmplt_ps my_cmplt_ps -#define _mm_cmpgt_ps my_cmpgt_ps -#define _mm_cmpeq_ps my_cmpeq_ps -#endif - -#ifndef USE_SSE2 -typedef union xmm_mm_union { - __m128 xmm; - __m64 mm[2]; -} xmm_mm_union; - -#define COPY_XMM_TO_MM(xmm_, mm0_, mm1_) { \ - xmm_mm_union u; u.xmm = xmm_; \ - mm0_ = u.mm[0]; \ - mm1_ = u.mm[1]; \ -} - -#define COPY_MM_TO_XMM(mm0_, mm1_, xmm_) { \ - xmm_mm_union u; u.mm[0]=mm0_; u.mm[1]=mm1_; xmm_ = u.xmm; \ - } - -#endif // USE_SSE2 - -/* natural logarithm computed for 4 simultaneous float - return NaN for x <= 0 -*/ -v4sf log_ps(v4sf x) { -#ifdef USE_SSE2 - v4si emm0; -#else - v2si mm0, mm1; -#endif - v4sf one = *(v4sf*)_ps_1; - - v4sf invalid_mask = _mm_cmple_ps(x, _mm_setzero_ps()); - - x = _mm_max_ps(x, *(v4sf*)_ps_min_norm_pos); /* cut off denormalized stuff */ - -#ifndef USE_SSE2 - /* part 1: x = frexpf(x, &e); */ - COPY_XMM_TO_MM(x, mm0, mm1); - mm0 = _mm_srli_pi32(mm0, 23); - mm1 = _mm_srli_pi32(mm1, 23); -#else - emm0 = _mm_srli_epi32(_mm_castps_si128(x), 23); -#endif - /* keep only the fractional part */ - x = _mm_and_ps(x, *(v4sf*)_ps_inv_mant_mask); - x = _mm_or_ps(x, *(v4sf*)_ps_0p5); - -#ifndef USE_SSE2 - /* now e=mm0:mm1 contain the really base-2 exponent */ - mm0 = _mm_sub_pi32(mm0, *(v2si*)_pi32_0x7f); - mm1 = _mm_sub_pi32(mm1, *(v2si*)_pi32_0x7f); - v4sf e = _mm_cvtpi32x2_ps(mm0, mm1); - _mm_empty(); /* bye bye mmx */ -#else - emm0 = _mm_sub_epi32(emm0, *(v4si*)_pi32_0x7f); - v4sf e = _mm_cvtepi32_ps(emm0); -#endif - - e = _mm_add_ps(e, one); - - /* part2: - if( x < SQRTHF ) { - e -= 1; - x = x + x - 1.0; - } else { x = x - 1.0; } - */ - v4sf mask = _mm_cmplt_ps(x, *(v4sf*)_ps_cephes_SQRTHF); - v4sf tmp = _mm_and_ps(x, mask); - x = _mm_sub_ps(x, one); - e = _mm_sub_ps(e, _mm_and_ps(one, mask)); - x = _mm_add_ps(x, tmp); - - - v4sf z = _mm_mul_ps(x,x); - - v4sf y = *(v4sf*)_ps_cephes_log_p0; - y = _mm_mul_ps(y, x); - y = _mm_add_ps(y, *(v4sf*)_ps_cephes_log_p1); - y = _mm_mul_ps(y, x); - y = _mm_add_ps(y, *(v4sf*)_ps_cephes_log_p2); - y = _mm_mul_ps(y, x); - y = _mm_add_ps(y, *(v4sf*)_ps_cephes_log_p3); - y = _mm_mul_ps(y, x); - y = _mm_add_ps(y, *(v4sf*)_ps_cephes_log_p4); - y = _mm_mul_ps(y, x); - y = _mm_add_ps(y, *(v4sf*)_ps_cephes_log_p5); - y = _mm_mul_ps(y, x); - y = _mm_add_ps(y, *(v4sf*)_ps_cephes_log_p6); - y = _mm_mul_ps(y, x); - y = _mm_add_ps(y, *(v4sf*)_ps_cephes_log_p7); - y = _mm_mul_ps(y, x); - y = _mm_add_ps(y, *(v4sf*)_ps_cephes_log_p8); - y = _mm_mul_ps(y, x); - - y = _mm_mul_ps(y, z); - - - tmp = _mm_mul_ps(e, *(v4sf*)_ps_cephes_log_q1); - y = _mm_add_ps(y, tmp); - - - tmp = _mm_mul_ps(z, *(v4sf*)_ps_0p5); - y = _mm_sub_ps(y, tmp); - - tmp = _mm_mul_ps(e, *(v4sf*)_ps_cephes_log_q2); - x = _mm_add_ps(x, y); - x = _mm_add_ps(x, tmp); - x = _mm_or_ps(x, invalid_mask); // negative arg will be NAN - return x; -} - -_PS_CONST(exp_hi, 88.3762626647949f); -_PS_CONST(exp_lo, -88.3762626647949f); - -_PS_CONST(cephes_LOG2EF, 1.44269504088896341); -_PS_CONST(cephes_exp_C1, 0.693359375); -_PS_CONST(cephes_exp_C2, -2.12194440e-4); - -_PS_CONST(cephes_exp_p0, 1.9875691500E-4); -_PS_CONST(cephes_exp_p1, 1.3981999507E-3); -_PS_CONST(cephes_exp_p2, 8.3334519073E-3); -_PS_CONST(cephes_exp_p3, 4.1665795894E-2); -_PS_CONST(cephes_exp_p4, 1.6666665459E-1); -_PS_CONST(cephes_exp_p5, 5.0000001201E-1); - -v4sf exp_ps(v4sf x) { - v4sf tmp = _mm_setzero_ps(), fx; -#ifdef USE_SSE2 - v4si emm0; -#else - v2si mm0, mm1; -#endif - v4sf one = *(v4sf*)_ps_1; - - x = _mm_min_ps(x, *(v4sf*)_ps_exp_hi); - x = _mm_max_ps(x, *(v4sf*)_ps_exp_lo); - - /* express 
exp(x) as exp(g + n*log(2)) */ - fx = _mm_mul_ps(x, *(v4sf*)_ps_cephes_LOG2EF); - fx = _mm_add_ps(fx, *(v4sf*)_ps_0p5); - - /* how to perform a floorf with SSE: just below */ -#ifndef USE_SSE2 - /* step 1 : cast to int */ - tmp = _mm_movehl_ps(tmp, fx); - mm0 = _mm_cvttps_pi32(fx); - mm1 = _mm_cvttps_pi32(tmp); - /* step 2 : cast back to float */ - tmp = _mm_cvtpi32x2_ps(mm0, mm1); -#else - emm0 = _mm_cvttps_epi32(fx); - tmp = _mm_cvtepi32_ps(emm0); -#endif - /* if greater, substract 1 */ - v4sf mask = _mm_cmpgt_ps(tmp, fx); - mask = _mm_and_ps(mask, one); - fx = _mm_sub_ps(tmp, mask); - - tmp = _mm_mul_ps(fx, *(v4sf*)_ps_cephes_exp_C1); - v4sf z = _mm_mul_ps(fx, *(v4sf*)_ps_cephes_exp_C2); - x = _mm_sub_ps(x, tmp); - x = _mm_sub_ps(x, z); - - z = _mm_mul_ps(x,x); - - v4sf y = *(v4sf*)_ps_cephes_exp_p0; - y = _mm_mul_ps(y, x); - y = _mm_add_ps(y, *(v4sf*)_ps_cephes_exp_p1); - y = _mm_mul_ps(y, x); - y = _mm_add_ps(y, *(v4sf*)_ps_cephes_exp_p2); - y = _mm_mul_ps(y, x); - y = _mm_add_ps(y, *(v4sf*)_ps_cephes_exp_p3); - y = _mm_mul_ps(y, x); - y = _mm_add_ps(y, *(v4sf*)_ps_cephes_exp_p4); - y = _mm_mul_ps(y, x); - y = _mm_add_ps(y, *(v4sf*)_ps_cephes_exp_p5); - y = _mm_mul_ps(y, z); - y = _mm_add_ps(y, x); - y = _mm_add_ps(y, one); - - /* build 2^n */ -#ifndef USE_SSE2 - z = _mm_movehl_ps(z, fx); - mm0 = _mm_cvttps_pi32(fx); - mm1 = _mm_cvttps_pi32(z); - mm0 = _mm_add_pi32(mm0, *(v2si*)_pi32_0x7f); - mm1 = _mm_add_pi32(mm1, *(v2si*)_pi32_0x7f); - mm0 = _mm_slli_pi32(mm0, 23); - mm1 = _mm_slli_pi32(mm1, 23); - - v4sf pow2n; - COPY_MM_TO_XMM(mm0, mm1, pow2n); - _mm_empty(); -#else - emm0 = _mm_cvttps_epi32(fx); - emm0 = _mm_add_epi32(emm0, *(v4si*)_pi32_0x7f); - emm0 = _mm_slli_epi32(emm0, 23); - v4sf pow2n = _mm_castsi128_ps(emm0); -#endif - y = _mm_mul_ps(y, pow2n); - return y; -} - -_PS_CONST(minus_cephes_DP1, -0.78515625); -_PS_CONST(minus_cephes_DP2, -2.4187564849853515625e-4); -_PS_CONST(minus_cephes_DP3, -3.77489497744594108e-8); -_PS_CONST(sincof_p0, -1.9515295891E-4); -_PS_CONST(sincof_p1, 8.3321608736E-3); -_PS_CONST(sincof_p2, -1.6666654611E-1); -_PS_CONST(coscof_p0, 2.443315711809948E-005); -_PS_CONST(coscof_p1, -1.388731625493765E-003); -_PS_CONST(coscof_p2, 4.166664568298827E-002); -_PS_CONST(cephes_FOPI, 1.27323954473516); // 4 / M_PI - - -/* evaluation of 4 sines at onces, using only SSE1+MMX intrinsics so - it runs also on old athlons XPs and the pentium III of your grand - mother. - - The code is the exact rewriting of the cephes sinf function. - Precision is excellent as long as x < 8192 (I did not bother to - take into account the special handling they have for greater values - -- it does not return garbage for arguments over 8192, though, but - the extra precision is missing). - - Note that it is such that sinf((float)M_PI) = 8.74e-8, which is the - surprising but correct result. - - Performance is also surprisingly good, 1.33 times faster than the - macos vsinf SSE2 function, and 1.5 times faster than the - __vrs4_sinf of amd's ACML (which is only available in 64 bits). Not - too bad for an SSE1 function (with no special tuning) ! - However the latter libraries probably have a much better handling of NaN, - Inf, denormalized and other special arguments.. - - On my core 1 duo, the execution of this function takes approximately 95 cycles. - - From what I have observed on the experiments with Intel AMath lib, switching to an - SSE2 version would improve the perf by only 10%. - - Since it is based on SSE intrinsics, it has to be compiled at -O2 to - deliver full speed. 
-*/ -v4sf sin_ps(v4sf x) { // any x - v4sf xmm1, xmm2 = _mm_setzero_ps(), xmm3, sign_bit, y; - -#ifdef USE_SSE2 - v4si emm0, emm2; -#else - v2si mm0, mm1, mm2, mm3; -#endif - sign_bit = x; - /* take the absolute value */ - x = _mm_and_ps(x, *(v4sf*)_ps_inv_sign_mask); - /* extract the sign bit (upper one) */ - sign_bit = _mm_and_ps(sign_bit, *(v4sf*)_ps_sign_mask); - - /* scale by 4/Pi */ - y = _mm_mul_ps(x, *(v4sf*)_ps_cephes_FOPI); - - //printf("plop:"); print4(y); -#ifdef USE_SSE2 - /* store the integer part of y in mm0 */ - emm2 = _mm_cvttps_epi32(y); - /* j=(j+1) & (~1) (see the cephes sources) */ - emm2 = _mm_add_epi32(emm2, *(v4si*)_pi32_1); - emm2 = _mm_and_si128(emm2, *(v4si*)_pi32_inv1); - y = _mm_cvtepi32_ps(emm2); - /* get the swap sign flag */ - emm0 = _mm_and_si128(emm2, *(v4si*)_pi32_4); - emm0 = _mm_slli_epi32(emm0, 29); - /* get the polynom selection mask - there is one polynom for 0 <= x <= Pi/4 - and another one for Pi/4 -#include -#include -#include -#include "stc_extract_c.h" - -// {{{ stc_extract() -int stc_extract(const u8 *vector, int vectorlength, u8 *message, int syndromelength, int matrixheight) -{ - int i, j, k, index, index2, base, height; - - u8 *binmat[2]; - int *matrices, *widths; - - height = matrixheight; - - if(matrixheight > 31) { - fprintf(stderr, "Submatrix height must not exceed 31."); - return -1; - } - - { - double invalpha; - int shorter, longer, worm; - u32 *columns[2]; - - matrices = (int *)malloc(syndromelength * sizeof(int)); - widths = (int *)malloc(syndromelength * sizeof(int)); - - invalpha = (double)vectorlength / syndromelength; - if(invalpha < 1) { - fprintf(stderr, "The message cannot be longer than the cover object.\n"); - return -1; - } - shorter = (int)floor(invalpha); - longer = (int)ceil(invalpha); - if((columns[0] = getMatrix(shorter, matrixheight)) == NULL) { - free(widths); - free(matrices); - return -1; - } - if((columns[1] = getMatrix(longer, matrixheight)) == NULL) { - free(columns[0]); - free(widths); - free(matrices); - return -1; - } - worm = 0; - for(i = 0; i < syndromelength; i++) { - if(worm + longer <= (i + 1) * invalpha + 0.5) { - matrices[i] = 1; - widths[i] = longer; - worm += longer; - } else { - matrices[i] = 0; - widths[i] = shorter; - worm += shorter; - } - } - binmat[0] = (u8*)malloc(shorter * matrixheight * sizeof(u8)); - binmat[1] = (u8*)malloc(longer * matrixheight * sizeof(u8)); - for(i = 0, index = 0; i < shorter; i++) { - for(j = 0; j < matrixheight; j++, index++) { - binmat[0][index] = (columns[0][i] & (1 << j)) ? 1 : 0; - } - } - for(i = 0, index = 0; i < longer; i++) { - for(j = 0; j < matrixheight; j++, index++) { - binmat[1][index] = (columns[1][i] & (1 << j)) ? 
1 : 0; - } - } - free(columns[0]); - free(columns[1]); - } - - for(i = 0; i < syndromelength; i++) { - message[i] = 0; - } - - for(index = 0, index2 = 0; index2 < syndromelength; index2++) { - for(k = 0, base = 0; k < widths[index2]; k++, index++, base += matrixheight) { - if(vector[index]) { - for(i = 0; i < height; i++) { - message[index2 + i] ^= binmat[matrices[index2]][base + i]; - } - } - } - if(syndromelength - index2 <= matrixheight) - height--; - } - - free(matrices); - free(widths); - free(binmat[0]); - free(binmat[1]); - - return 0; -} -// }}} - - diff --git a/spaces/YueMafighting/FollowYourPose/inference_mmpose.py b/spaces/YueMafighting/FollowYourPose/inference_mmpose.py deleted file mode 100644 index dafbb96f0661abe8ff1cf72d0698bb32e49c65ff..0000000000000000000000000000000000000000 --- a/spaces/YueMafighting/FollowYourPose/inference_mmpose.py +++ /dev/null @@ -1,105 +0,0 @@ -import gradio as gr - -import os -import cv2 -import numpy as np -from PIL import Image -from moviepy.editor import * - -import sys -sys.path.append('FollowYourPose') - -def get_frames(video_in): - frames = [] - #resize the video - clip = VideoFileClip(video_in) - start_frame = 0 # 起始帧数 - end_frame = 50 # 结束帧数 - - if not os.path.exists('./raw_frames'): - os.makedirs('./raw_frames') - - if not os.path.exists('./mmpose_frames'): - os.makedirs('./mmpose_frames') - - #check fps - if clip.fps > 30: - print("vide rate is over 30, resetting to 30") - clip_resized = clip.resize(height=512) - clip_resized = clip_resized.subclip(start_frame / clip_resized.fps, end_frame / clip_resized.fps) # subclip 2 seconds - clip_resized.write_videofile("./video_resized.mp4", fps=30) - else: - print("video rate is OK") - clip_resized = clip.resize(height=512) - clip_resized = clip_resized.subclip(start_frame / clip.fps, end_frame / clip.fps) # subclip 5 seconds - clip_resized.write_videofile("./video_resized.mp4", fps=clip.fps) - - print("video resized to 512 height") - - # Opens the Video file with CV2 - cap= cv2.VideoCapture("./video_resized.mp4") - - fps = cap.get(cv2.CAP_PROP_FPS) - print("video fps: " + str(fps)) - i=0 - while(cap.isOpened()): - ret, frame = cap.read() - if ret == False: - break - cv2.imwrite('./raw_frames/kang'+str(i)+'.jpg',frame) - frames.append('./raw_frames/kang'+str(i)+'.jpg') - i+=1 - - cap.release() - cv2.destroyAllWindows() - print("broke the video into frames") - - return frames, fps - -def get_mmpose_filter(mmpose, i): - #image = Image.open(i) - - #image = np.array(image) - image = mmpose(i, fn_index=0)[1] - image = Image.open(image) - #image = Image.fromarray(image) - image.save("./mmpose_frames/mmpose_frame_" + str(i).split('/')[-1][:-4] + ".jpeg") - return "./mmpose_frames/mmpose_frame_" + str(i).split('/')[-1][:-4] + ".jpeg" - -def create_video(frames, fps, type): - print("building video result") - clip = ImageSequenceClip(frames, fps=fps) - clip.write_videofile(type + "_result.mp4", fps=fps) - - return type + "_result.mp4" - - -def infer_skeleton(mmpose, video_in): - - - # 1. break video into frames and get FPS - - break_vid = get_frames(video_in) - frames_list= break_vid[0] - fps = break_vid[1] - #n_frame = int(trim_value*fps) - n_frame = len(frames_list) - - if n_frame >= len(frames_list): - print("video is shorter than the cut value") - n_frame = len(frames_list) - - # 2. 
prepare frames result arrays - result_frames = [] - print("set stop frames to: " + str(n_frame)) - - for i in frames_list[0:int(n_frame)]: - mmpose_frame = get_mmpose_filter(mmpose, i) - result_frames.append(mmpose_frame) - print("frame " + i + "/" + str(n_frame) + ": done;") - - - final_vid = create_video(result_frames, fps, "mmpose") - files = [final_vid] - - return final_vid, files \ No newline at end of file diff --git a/spaces/a-v-bely/spanish-task-generator/utilities_cookies/cookie_manager.py b/spaces/a-v-bely/spanish-task-generator/utilities_cookies/cookie_manager.py deleted file mode 100644 index 296182195a040e3dc6130499bff23508e861eab4..0000000000000000000000000000000000000000 --- a/spaces/a-v-bely/spanish-task-generator/utilities_cookies/cookie_manager.py +++ /dev/null @@ -1,101 +0,0 @@ -import streamlit as st -from pathlib import Path -from typing import Mapping -from datetime import datetime -from datetime import timedelta -from urllib.parse import unquote -from typing import MutableMapping -from streamlit.components.v1 import components - - -build_path = Path(__file__).parent / 'build' -_component_func = components.declare_component("CookieManager.sync_cookies", path=str(build_path)) - - -class CookieManager(MutableMapping[str, str]): - def __init__(self, *, path: str = None, prefix=""): - self._queue = st.session_state.setdefault('CookieManager.queue', {}) - self._prefix = prefix - raw_cookie = self._run_component(save_only=False, key="CookieManager.sync_cookies") - if raw_cookie is None: - self._cookies = None - else: - self._cookies = parse_cookies(raw_cookie) - self._clean_queue() - self._default_expiry = datetime.now() + timedelta(days=365) - self._path = path if path is not None else "/" - - def ready(self) -> bool: - return self._cookies is not None - - def save(self): - if self._queue: - self._run_component(save_only=True, key="CookieManager.sync_cookies.save") - - def _run_component(self, save_only: bool, key: str): - queue = { - self._prefix + k: v for k, v in self._queue.items() - } - return _component_func(queue=queue, saveOnly=save_only, key=key) - - def _clean_queue(self): - for name, spec in list(self._queue.items()): - value = self._cookies.get(self._prefix + name) - if value == spec['value']: - del self._queue[name] - - def __repr__(self): - if self.ready(): - return f'' - return '' - - def __getitem__(self, k: str) -> str: - return self._get_cookies()[k] - - def __iter__(self): - return iter(self._get_cookies()) - - def __len__(self): - return len(self._get_cookies()) - - def __setitem__(self, key: str, value: str) -> None: - if self._cookies.get(key) != value: - self._queue[key] = dict( - value=value, - expires_at=self._default_expiry.isoformat(), - path=self._path, - ) - - def __delitem__(self, key: str) -> None: - if key in self._cookies: - self._queue[key] = dict(value=None, path=self._path) - - def _get_cookies(self) -> Mapping[str, str]: - if self._cookies is None: - raise CookiesNotReady() - cookies = { - k[len(self._prefix):]: v - for k, v in self._cookies.items() - if k.startswith(self._prefix) - } - for name, spec in self._queue.items(): - if spec['value'] is not None: - cookies[name] = spec['value'] - else: - cookies.pop(name, None) - return cookies - - -def parse_cookies(raw_cookie): - cookies = {} - for part in raw_cookie.split(';'): - part = part.strip() - if not part: - continue - name, value = part.split('=', 1) - cookies[unquote(name)] = unquote(value) - return cookies - - -class CookiesNotReady(Exception): - pass diff --git 
a/spaces/abdvl/datahub_qa_bot/docs/modeling/metadata-model.md b/spaces/abdvl/datahub_qa_bot/docs/modeling/metadata-model.md deleted file mode 100644 index 63f6cb6177c04790a8913fa0aba5998c9a21b4e1..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/modeling/metadata-model.md +++ /dev/null @@ -1,621 +0,0 @@ ---- -title: The Metadata Model -sidebar_label: The Metadata Model -slug: /metadata-modeling/metadata-model ---- - -# How does DataHub model metadata? - -DataHub takes a schema-first approach to modeling metadata. We use the open-source Pegasus schema language ([PDL](https://linkedin.github.io/rest.li/pdl_schema)) extended with a custom set of annotations to model metadata. The DataHub storage, serving, indexing and ingestion layer operates directly on top of the metadata model and supports strong types all the way from the client to the storage layer. - -Conceptually, metadata is modeled using the following abstractions - -- **Entities**: An entity is the primary node in the metadata graph. For example, an instance of a Dataset or a CorpUser is an Entity. An entity is made up of a type, e.g. 'dataset', a unique identifier (e.g. an 'urn') and groups of metadata attributes (e.g. documents) which we call aspects. - - -- **Aspects**: An aspect is a collection of attributes that describes a particular facet of an entity. They are the smallest atomic unit of write in DataHub. That is, multiple aspects associated with the same Entity can be updated independently. For example, DatasetProperties contains a collection of attributes that describes a Dataset. Aspects can be shared across entities, for example "Ownership" is an aspect that is re-used across all the Entities that have owners. Common aspects include - - - [ownership](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/common/Ownership.pdl): Captures the users and groups who own an Entity. - - [globalTags](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/common/GlobalTags.pdl): Captures references to the Tags associated with an Entity. - - [glossaryTerms](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/common/GlossaryTerms.pdl): Captures references to the Glossary Terms associated with an Entity. - - [institutionalMemory](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/common/InstitutionalMemory.pdl): Captures internal company Documents associated with an Entity (e.g. links!) - - [status](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/common/Status.pdl): Captures the "deletion" status of an Entity, i.e. whether it should be soft-deleted. - - [subTypes](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/common/SubTypes.pdl): Captures one or more "sub types" of a more generic Entity type. An example can be a "Looker Explore" Dataset, a "View" Dataset. Specific sub types can imply that certain additional aspects are present for a given Entity. - - -- **Relationships**: A relationship represents a named edge between 2 entities. They are declared via foreign key attributes within Aspects along with a custom annotation (@Relationship). Relationships permit edges to be traversed bi-directionally. For example, a Chart may refer to a CorpUser as its owner via a relationship named "OwnedBy". 
This edge would be walkable starting from the Chart *or* the CorpUser instance. - -- **Identifiers (Keys & Urns)**: A key is a special type of aspect that contains the fields that uniquely identify an individual Entity. Key aspects can be serialized into *Urns*, which represent a stringified form of the key fields used for primary-key lookup. Moreover, *Urns* can be converted back into key aspect structs, making key aspects a type of "virtual" aspect. Key aspects provide a mechanism for clients to easily read fields comprising the primary key, which are usually generally useful like Dataset names, platform names etc. Urns provide a friendly handle by which Entities can be queried without requiring a fully materialized struct. - - -Here is an example graph consisting of 3 types of entity (CorpUser, Chart, Dashboard), 2 types of relationship (OwnedBy, Contains), and 3 types of metadata aspect (Ownership, ChartInfo, and DashboardInfo). - -![metadata-modeling](../imgs/metadata-model-chart.png) - -## The Core Entities - -DataHub's "core" Entity types model the Data Assets that comprise the Modern Data Stack. They include - -1. **[Data Platform](docs/generated/metamodel/entities/dataPlatform.md)**: A type of Data "Platform". That is, an external system that is involved in processing, storing, or visualizing Data Assets. Examples include MySQL, Snowflake, Redshift, and S3. -2. **[Dataset](docs/generated/metamodel/entities/dataset.md)**: A collection of data. Tables, Views, Streams, Document Collections, and Files are all modeled as "Datasets" on DataHub. Datasets can have tags, owners, links, glossary terms, and descriptions attached to them. They can also have specific sub-types, such as "View", "Collection", "Stream", "Explore", and more. Examples include Postgres Tables, MongoDB Collections, or S3 files. -3. **[Chart](docs/generated/metamodel/entities/chart.md)**: A single data vizualization derived from a Dataset. A single Chart can be a part of multiple Dashboards. Charts can have tags, owners, links, glossary terms, and descriptions attached to them. Examples include a Superset or Looker Chart. -4. **[Dashboard](docs/generated/metamodel/entities/dashboard.md)**: A collection of Charts for visualization. Dashboards can have tags, owners, links, glossary terms, and descriptions attached to them. Examples include a Superset or Mode Dashboard. -5. **[Data Job](docs/generated/metamodel/entities/dataJob.md)** (Task): An executable job that processes data assets, where "processing" implies consuming data, producing data, or both. Data Jobs can have tags, owners, links, glossary terms, and descriptions attached to them. They must belong to a single Data Flow. Examples include an Airflow Task. -6. **[Data Flow](docs/generated/metamodel/entities/dataFlow.md)** (Pipeline): An executable collection of Data Jobs with dependencies among them, or a DAG. Data Jobs can have tags, owners, links, glossary terms, and descriptions attached to them. Examples include an Airflow DAG. - -See the **Metadata Modeling/Entities** section on the left to explore the entire model. - -## The Entity Registry - -Where are Entities and their aspects defined in DataHub? Where does the Metadata Model "live"? The Metadata Model is stitched together by means -of an **Entity Registry**, a catalog of Entities that comprise the Metadata Graph along with the aspects associated with each. Put -simply, this is where the "schema" of the model is defined. 
- -Traditionally, the Entity Registry was constructed using [Snapshot](https://github.com/datahub-project/datahub/tree/master/metadata-models/src/main/pegasus/com/linkedin/metadata/snapshot) models, which are schemas that explicitly tie -an Entity to the Aspects associated with it. An example is [DatasetSnapshot](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/metadata/snapshot/DatasetSnapshot.pdl), which defines the core `Dataset` Entity. -The Aspects of the Dataset entity are captured via a union field inside a special "Aspect" schema. An example is -[DatasetAspect](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/metadata/aspect/DatasetAspect.pdl). -This file associates dataset-specific aspects (like [DatasetProperties](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/dataset/DatasetProperties.pdl)) and common aspects (like [Ownership](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/common/Ownership.pdl), -[InstitutionalMemory](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/common/InstitutionalMemory.pdl), -and [Status](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/common/Status.pdl)) -to the Dataset Entity. This approach to defining Entities will soon be deprecated in favor of a new approach. - -As of January 2022, DataHub has deprecated support for Snapshot models as a means of adding new entities. Instead, -the Entity Registry is defined inside a YAML configuration file called [entity-registry.yml](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/resources/entity-registry.yml), -which is provided to DataHub's Metadata Service at start up. This file declares Entities and Aspects by referring to their [names](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/common/Ownership.pdl#L7). -At boot time, DataHub validates the structure of the registry file and ensures that it can find PDL schemas associated with -each aspect name provided by configuration (via the [@Aspect](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/common/Ownership.pdl#L6) annotation). - -By moving to this format, evolving the Metadata Model becomes much easier. Adding Entities & Aspects becomes a matter of adding a -to the YAML configuration, instead of creating new Snapshot / Aspect files. - - -## Exploring DataHub's Metadata Model - -To explore the current DataHub metadata model, you can inspect this high-level picture that shows the different entities and edges between them showing the relationships between them. -![Metadata Model Graph](../imgs/datahub-metadata-model.png) - -To navigate the aspect model for specific entities and explore relationships using the `foreign-key` concept, you can view them in our demo environment or navigate the auto-generated docs in the **Metadata Modeling/Entities** section on the left. 
- -For example, here are helpful links to the most popular entities in DataHub's metadata model: -* [Dataset](docs/generated/metamodel/entities/dataset.md): [Profile](https://demo.datahubproject.io/dataset/urn:li:dataset:(urn:li:dataPlatform:datahub,Dataset,PROD)/Schema?is_lineage_mode=false) [Documentation](https://demo.datahubproject.io/dataset/urn:li:dataset:(urn:li:dataPlatform:datahub,Dataset,PROD)/Documentation?is_lineage_mode=false) -* [Dashboard](docs/generated/metamodel/entities/dashboard.md): [Profile](https://demo.datahubproject.io/dataset/urn:li:dataset:(urn:li:dataPlatform:datahub,Dashboard,PROD)/Schema?is_lineage_mode=false) [Documentation](https://demo.datahubproject.io/dataset/urn:li:dataset:(urn:li:dataPlatform:datahub,Dashboard,PROD)/Documentation?is_lineage_mode=false) -* [User (a.k.a CorpUser)](docs/generated/metamodel/entities/corpuser.md): [Profile](https://demo.datahubproject.io/dataset/urn:li:dataset:(urn:li:dataPlatform:datahub,Corpuser,PROD)/Schema?is_lineage_mode=false) [Documentation](https://demo.datahubproject.io/dataset/urn:li:dataset:(urn:li:dataPlatform:datahub,Corpuser,PROD)/Documentation?is_lineage_mode=false) -* [Pipeline (a.k.a DataFlow)](docs/generated/metamodel/entities/dataFlow.md): [Profile](https://demo.datahubproject.io/dataset/urn:li:dataset:(urn:li:dataPlatform:datahub,DataFlow,PROD)/Schema?is_lineage_mode=false) [Documentation](https://demo.datahubproject.io/dataset/urn:li:dataset:(urn:li:dataPlatform:datahub,DataFlow,PROD)/Documentation?is_lineage_mode=false) -* [Feature Table (a.k.a. MLFeatureTable)](docs/generated/metamodel/entities/mlFeatureTable.md): [Profile](https://demo.datahubproject.io/dataset/urn:li:dataset:(urn:li:dataPlatform:datahub,MlFeatureTable,PROD)/Schema?is_lineage_mode=false) [Documentation](https://demo.datahubproject.io/dataset/urn:li:dataset:(urn:li:dataPlatform:datahub,MlFeatureTable,PROD)/Documentation?is_lineage_mode=false) -* For the full list of entities in the metadata model, browse them [here](https://demo.datahubproject.io/browse/dataset/prod/datahub/entities) or use the **Metadata Modeling/Entities** section on the left. - -### Generating documentation for the Metadata Model - -- This website: Metadata model documentation for this website is generated using `./gradlew :docs-website:yarnBuild`, which delegates the model doc generation to the `modelDocGen` task in the `metadata-ingestion` module. -- Uploading documentation to a running DataHub Instance: The metadata model documentation can be generated and uploaded into a running DataHub instance using the command `./gradlew :metadata-ingestion:modelDocUpload`. **_NOTE_**: This will upload the model documentation to the DataHub instance running at the environment variable `$DATAHUB_SERVER` (http://localhost:8080 by default) - -## Querying the Metadata Graph - -DataHub’s modeling language allows you to optimize metadata persistence to align with query patterns. - -There are three supported ways to query the metadata graph: by primary key lookup, a search query, and via relationship traversal. - -> New to [PDL](https://linkedin.github.io/rest.li/pdl_schema) files? Don't fret. They are just a way to define a JSON document "schema" for Aspects in DataHub. All Data ingested to DataHub's Metadata Service is validated against a PDL schema, with each @Aspect corresponding to a single schema. Structurally, PDL is quite similar to [Protobuf](https://developers.google.com/protocol-buffers) and conveniently maps to JSON. 
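
To make the note above concrete, here is a rough Python sketch of what a single aspect looks like once it has been serialized to JSON. The field names loosely follow the common `Ownership` aspect referenced earlier, but the authoritative shape is whatever the corresponding PDL schema defines, so treat the dictionary below (and the urn in it) as purely illustrative.

```python
import json

# Illustrative JSON form of an "ownership"-style aspect. The exact field names
# come from the Ownership PDL schema; this dictionary is an approximation used
# only to show that an aspect maps cleanly onto a JSON document.
ownership_aspect = {
    "owners": [
        {
            "owner": "urn:li:corpuser:johnsmith",  # hypothetical CorpUser urn
            "type": "DATAOWNER",
        }
    ],
    "lastModified": {
        "time": 0,
        "actor": "urn:li:corpuser:johnsmith",
    },
}

# Because PDL maps conveniently to JSON, the aspect can be serialized directly.
print(json.dumps(ownership_aspect, indent=2))
```
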
- -### Querying an Entity - -#### Fetching Latest Entity Aspects (Snapshot) - -Querying an Entity by primary key means using the "entities" endpoint, passing in the -urn of the entity to retrieve. - -For example, to fetch a Chart entity, we can use the following `curl`: - -``` -curl --location --request GET 'http://localhost:8080/entities/urn%3Ali%3Achart%3Acustomers -``` - -This request will return a set of versioned aspects, each at the latest version. - -As you'll notice, we perform the lookup using the url-encoded *Urn* associated with an entity. -The response would be an "Entity" record containing the Entity Snapshot (which in turn contains the latest aspects associated with the Entity). - -#### Fetching Versioned Aspects - -DataHub also supports fetching individual pieces of metadata about an Entity, which we call aspects. To do so, -you'll provide both an Entity's primary key (urn) along with the aspect name and version that you'd like to retrieve. - -For example, to fetch the latest version of a Dataset's SchemaMetadata aspect, you would issue the following query: - -``` -curl 'http://localhost:8080/aspects/urn%3Ali%3Adataset%3A(urn%3Ali%3AdataPlatform%3Afoo%2Cbar%2CPROD)?aspect=schemaMetadata&version=0' - -{ - "version":0, - "aspect":{ - "com.linkedin.schema.SchemaMetadata":{ - "created":{ - "actor":"urn:li:corpuser:fbar", - "time":0 - }, - "platformSchema":{ - "com.linkedin.schema.KafkaSchema":{ - "documentSchema":"{\"type\":\"record\",\"name\":\"MetadataChangeEvent\",\"namespace\":\"com.linkedin.mxe\",\"doc\":\"Kafka event for proposing a metadata change for an entity.\",\"fields\":[{\"name\":\"auditHeader\",\"type\":{\"type\":\"record\",\"name\":\"KafkaAuditHeader\",\"namespace\":\"com.linkedin.avro2pegasus.events\",\"doc\":\"Header\"}}]}" - } - }, - "lastModified":{ - "actor":"urn:li:corpuser:fbar", - "time":0 - }, - "schemaName":"FooEvent", - "fields":[ - { - "fieldPath":"foo", - "description":"Bar", - "type":{ - "type":{ - "com.linkedin.schema.StringType":{ - - } - } - }, - "nativeDataType":"string" - } - ], - "version":0, - "hash":"", - "platform":"urn:li:dataPlatform:foo" - } - } -} -``` - -#### Fetching Timeseries Aspects - -DataHub supports an API for fetching a group of Timeseries aspects about an Entity. For example, you may want to use this API -to fetch recent profiling runs & statistics about a Dataset. To do so, you can issue a "get" request against the `/aspects` endpoint. - -For example, to fetch dataset profiles (ie. 
stats) for a Dataset, you would issue the following query: - -``` -curl -X POST 'http://localhost:8080/aspects?action=getTimeseriesAspectValues' \ ---data '{ - "urn": "urn:li:dataset:(urn:li:dataPlatform:redshift,global_dev.larxynx_carcinoma_data_2020,PROD)", - "entity": "dataset", - "aspect": "datasetProfile", - "startTimeMillis": 1625122800000, - "endTimeMillis": 1627455600000 -}' - -{ - "value":{ - "limit":10000, - "aspectName":"datasetProfile", - "endTimeMillis":1627455600000, - "startTimeMillis":1625122800000, - "entityName":"dataset", - "values":[ - { - "aspect":{ - "value":"{\"timestampMillis\":1626912000000,\"fieldProfiles\":[{\"uniqueProportion\":1.0,\"sampleValues\":[\"123MMKK12\",\"13KDFMKML\",\"123NNJJJL\"],\"fieldPath\":\"id\",\"nullCount\":0,\"nullProportion\":0.0,\"uniqueCount\":3742},{\"uniqueProportion\":1.0,\"min\":\"1524406400000\",\"max\":\"1624406400000\",\"sampleValues\":[\"1640023230002\",\"1640343012207\",\"16303412330117\"],\"mean\":\"1555406400000\",\"fieldPath\":\"date\",\"nullCount\":0,\"nullProportion\":0.0,\"uniqueCount\":3742},{\"uniqueProportion\":0.037,\"min\":\"21\",\"median\":\"68\",\"max\":\"92\",\"sampleValues\":[\"45\",\"65\",\"81\"],\"mean\":\"65\",\"distinctValueFrequencies\":[{\"value\":\"12\",\"frequency\":103},{\"value\":\"54\",\"frequency\":12}],\"fieldPath\":\"patient_age\",\"nullCount\":0,\"nullProportion\":0.0,\"uniqueCount\":79},{\"uniqueProportion\":0.00820873786407767,\"sampleValues\":[\"male\",\"female\"],\"fieldPath\":\"patient_gender\",\"nullCount\":120,\"nullProportion\":0.03,\"uniqueCount\":2}],\"rowCount\":3742,\"columnCount\":4}", - "contentType":"application/json" - } - }, - ] - } -} -``` - -You'll notice that the aspect itself is serialized as escaped JSON. This is part of a shift toward a more generic set of READ / WRITE APIs -that permit serialization of aspects in different ways. By default, the content type will be JSON, and the aspect can be deserialized into a normal JSON object -in the language of your choice. Note that this will soon become the de-facto way to both write and read individual aspects. - - - -### Search Query - -A search query allows you to search for entities matching an arbitrary string. - -For example, to search for entities matching the term "customers", we can use the following CURL: - -``` -curl --location --request POST 'http://localhost:8080/entities?action=search' \ ---header 'X-RestLi-Protocol-Version: 2.0.0' \ ---header 'Content-Type: application/json' \ ---data-raw '{ - "input": "\"customers\"", - "entity": "chart", - "start": 0, - "count": 10 -}' -``` - -The notable parameters are `input` and `entity`. `input` specifies the query we are issuing and `entity` specifies the Entity Type we want to search over. This is the common name of the Entity as defined in the @Entity definition. The response contains a list of Urns, that can be used to fetch the full entity. - -### Relationship Query - -A relationship query allows you to find Entity connected to a particular source Entity via an edge of a particular type. - -For example, to find the owners of a particular Chart, we can use the following CURL: - -``` -curl --location --request GET --header 'X-RestLi-Protocol-Version: 2.0.0' 'http://localhost:8080/relationships?direction=OUTGOING&urn=urn%3Ali%3Achart%3Acustomers&types=List(OwnedBy)' -``` - -The notable parameters are `direction`, `urn` and `types`. The response contains *Urns* associated with all entities connected -to the primary entity (urn:li:chart:customer) by an relationship named "OwnedBy". 
That is, it permits fetching the owners of a given -chart. - -### Special Aspects - -There are a few special aspects worth mentioning: - -1. Key aspects: Contain the properties that uniquely identify an Entity. -2. Browse Paths aspect: Represents a hierarchical path associated with an Entity. - -#### Key aspects - -As introduced above, Key aspects are structs / records that contain the fields that uniquely identify an Entity. There are -some constraints about the fields that can be present in Key aspects: - -- All fields must be of STRING or ENUM type -- All fields must be REQUIRED - -Keys can be created from and turned into *Urns*, which represent the stringified version of the Key record. -The algorithm used to do the conversion is straightforward: the fields of the Key aspect are substituted into a -string template based on their index (order of definition) using the following template: - -```aidl -// Case 1: # key fields == 1 -urn:li::key-field-1 - -// Case 2: # key fields > 1 -urn:li::(key-field-1, key-field-2, ... key-field-n) -``` - -By convention, key aspects are defined under [metadata-models/src/main/pegasus/com/linkedin/metadata/key](https://github.com/datahub-project/datahub/tree/master/metadata-models/src/main/pegasus/com/linkedin/metadata/key). - -##### Example - -A CorpUser can be uniquely identified by a "username", which should typically correspond to an LDAP name. - -Thus, it's Key Aspect is defined as the following: - -```aidl -namespace com.linkedin.metadata.key - -/** - * Key for a CorpUser - */ -@Aspect = { - "name": "corpUserKey" -} -record CorpUserKey { - /** - * The name of the AD/LDAP user. - */ - username: string -} -``` - -and it's Entity Snapshot model is defined as - -```aidl -/** - * A metadata snapshot for a specific CorpUser entity. - */ -@Entity = { - "name": "corpuser", - "keyAspect": "corpUserKey" -} -record CorpUserSnapshot { - - /** - * URN for the entity the metadata snapshot is associated with. - */ - urn: CorpuserUrn - - /** - * The list of metadata aspects associated with the CorpUser. Depending on the use case, this can either be all, or a selection, of supported aspects. - */ - aspects: array[CorpUserAspect] -} -``` - -Using a combination of the information provided by these models, we are able to generate the Urn corresponding to a CorpUser as - -``` -urn:li:corpuser: -``` - -Imagine we have a CorpUser Entity with the username "johnsmith". In this world, the JSON version of the Key Aspect associated with the Entity would be - -```aidl -{ - "username": "johnsmith" -} -``` - -and its corresponding Urn would be - -```aidl -urn:li:corpuser:johnsmith -``` - -#### BrowsePaths aspect - -The BrowsePaths aspect allows you to define a custom "browse path" for an Entity. A browse path is a way to hierarchically organize -entities. They manifest within the "Explore" features on the UI, allowing users to navigate through trees of related entities of a given type. - -To support browsing a particular entity, add the "browsePaths" aspect to the entity in your `entity-registry.yml` file. - -```aidl -/// entity-registry.yml -entities: - - name: dataset - doc: Datasets represent logical or physical data assets stored or represented in various data platforms. Tables, Views, Streams are all instances of datasets. - keyAspect: datasetKey - aspects: - ... 
- - browsePaths -``` - -By declaring this aspect, you can produce custom browse paths as well as query for browse paths manually using a CURL like the following: - -```aidl -curl --location --request POST 'http://localhost:8080/entities?action=browse' \ ---header 'X-RestLi-Protocol-Version: 2.0.0' \ ---header 'Content-Type: application/json' \ ---data-raw '{ - "path": "/my/custom/browse/path", - "entity": "dataset", - "start": 0, - "limit": 10 -}' -``` - -Please note you must provide: -- The "/"-delimited root path for which to fetch results. -- An entity "type" using its common name ("dataset" in the example above). - -### Types of Aspect - -There are 2 "types" of Metadata Aspects. Both are modeled using PDL schemas, and both can be ingested in the same way. -However, they differ in what they represent and how they are handled by DataHub's Metadata Service. - -#### 1. Versioned Aspects - -Versioned Aspects each have a **numeric version** associated with them. When a field in an aspect changes, a new -version is automatically created and stored within DataHub's backend. In practice, all versioned aspects are stored inside a relational database -that can be backed up and restored. Versioned aspects power much of the UI experience you're used to, including Ownership, Descriptions, -Tags, Glossary Terms, and more. Examples include Ownership, Global Tags, and Glossary Terms. - -#### 2. Timeseries Aspects - -Timeseries Aspects each have a **timestamp** associated with them. They are useful for representing -time-ordered events about an Entity. For example, the results of profiling a Dataset, or a set of Data Quality checks that -run every day. It is important to note that Timeseries aspects are NOT persisted inside the relational store, and are instead -persisted only in the search index (e.g. elasticsearch) and the message queue (Kafka). This makes restoring timeseries aspects -in a disaster scenario a bit more challenge. Timeseries aspects can be queried by time range, which is what makes them most different from Versioned Aspects. -A timeseries aspect can be identified by the "timeseries" [type](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/dataset/DatasetProfile.pdl#L10) in its [@Aspect](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/dataset/DatasetProfile.pdl#L8) annotation. -Examples include [DatasetProfile](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/dataset/DatasetProfile.pdl) & [DatasetUsageStatistics](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/dataset/DatasetUsageStatistics.pdl). - -Timeseries aspects are aspects that have a timestampMillis field, and are meant for aspects that continuously change on a -timely basis e.g. data profiles, usage statistics, etc. - -Each timeseries aspect must be declared "type": "timeseries" and must -include [TimeseriesAspectBase](https://github.com/datahub-project/datahub/tree/master/metadata-models/src/main/pegasus/com/linkedin/timeseries/TimeseriesAspectBase.pdl) -, which contains a timestampMillis field. - -Timeseries aspect cannot have any fields that have the @Searchable or @Relationship annotation, as it goes through a -completely different flow. 
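
As a rough illustration of these two flows, the sketch below reads one versioned aspect and one timeseries aspect using the two REST endpoints shown earlier in this document. It assumes the `requests` library and a local Metadata Service at `http://localhost:8080`; the dataset urn is a placeholder taken from the ingestion example further down.

```python
import requests

GMS = "http://localhost:8080"  # assumed local Metadata Service
# URL-encoded placeholder urn, mirroring the curl examples above.
encoded_urn = "urn%3Ali%3Adataset%3A(urn%3Ali%3AdataPlatform%3Ahive%2CSampleHiveDataset%2CPROD)"

# Versioned aspect: addressed by urn + aspect name + numeric version
# (served from the relational store, as described above).
versioned = requests.get(
    f"{GMS}/aspects/{encoded_urn}",
    params={"aspect": "schemaMetadata", "version": 0},
)
print(versioned.json())

# Timeseries aspect: addressed by urn + aspect name + time range
# (served from the search index, hence the "completely different flow").
timeseries = requests.post(
    f"{GMS}/aspects?action=getTimeseriesAspectValues",
    json={
        "urn": "urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD)",
        "entity": "dataset",
        "aspect": "datasetProfile",
        "startTimeMillis": 1625122800000,
        "endTimeMillis": 1627455600000,
    },
)
print(timeseries.json())
```
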
- -Please refer -to [DatasetProfile](https://github.com/datahub-project/datahub/tree/master/metadata-models/src/main/pegasus/com/linkedin/dataset/DatasetProfile.pdl) -to see an example of a timeseries aspect. - -Because timeseries aspects are updated on a frequent basis, ingests of these aspects go straight to elastic search ( -instead of being stored in local DB). - -You can retrieve timeseries aspects using the "aspects?action=getTimeseriesAspectValues" end point. - -##### Aggregatable Timeseries aspects -Being able to perform SQL like *group by + aggregate* operations on the timeseries aspects is a very natural use-case for -this kind of data (dataset profiles, usage statistics etc.). This section describes how to define, ingest and perform an -aggregation query against a timeseries aspect. - -###### Defining a new aggregatable Timeseries aspect. - -The *@TimeseriesField* and the *@TimeseriesFieldCollection* are two new annotations that can be attached to a field of -a *Timeseries aspect* that allows it to be part of an aggregatable query. The kinds of aggregations allowed on these -annotated fields depends on the type of the field, as well as the kind of aggregation, as -described [here](#Performing-an-aggregation-on-a-Timeseries-aspect). - -* `@TimeseriesField = {}` - this annotation can be used with any type of non-collection type field of the aspect such as - primitive types and records (see the fields *stat*, *strStat* and *strArray* fields - of [TestEntityProfile.pdl](https://github.com/datahub-project/datahub/blob/master/test-models/src/main/pegasus/com/datahub/test/TestEntityProfile.pdl)). - -* The `@TimeseriesFieldCollection {"key":""}` annotation allows for -aggregation support on the items of a collection type (supported only for the array type collections for now), where the -value of `"key"` is the name of the field in the collection item type that will be used to specify the group-by clause ( -see *userCounts* and *fieldCounts* fields of [DatasetUsageStatistics.pdl](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/pegasus/com/linkedin/dataset/DatasetUsageStatistics.pdl)). - -In addition to defining the new aspect with appropriate Timeseries annotations, -the [entity-registry.yml](https://github.com/datahub-project/datahub/blob/master/metadata-models/src/main/resources/entity-registry.yml) -file needs to be updated as well. Just add the new aspect name under the list of aspects against the appropriate entity as shown below, such as `datasetUsageStatistics` for the aspect DatasetUsageStatistics. -```yaml -entities: - - name: dataset - keyAspect: datasetKey - aspects: - - datasetProfile - - datasetUsageStatistics -``` - -###### Ingesting a Timeseries aspect -The timeseries aspects can be ingested via the GMS REST endpoint `/aspects?action=ingestProposal` or via the python API. - -Example1: Via GMS REST API using curl. 
- -```shell -curl --location --request POST 'http://localhost:8080/aspects?action=ingestProposal' \ ---header 'X-RestLi-Protocol-Version: 2.0.0' \ ---header 'Content-Type: application/json' \ ---data-raw '{ - "proposal" : { - "entityType": "dataset", - "entityUrn" : "urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD)", - "changeType" : "UPSERT", - "aspectName" : "datasetUsageStatistics", - "aspect" : { - "value" : "{ \"timestampMillis\":1629840771000,\"uniqueUserCount\" : 10, \"totalSqlQueries\": 20, \"fieldCounts\": [ {\"fieldPath\": \"col1\", \"count\": 20}, {\"fieldPath\" : \"col2\", \"count\": 5} ]}", - "contentType": "application/json" - } - } -}' -``` -Example2: Via Python API to Kafka(or REST) -```python -from datahub.metadata.schema_classes import ( - ChangeTypeClass, - DatasetFieldUsageCountsClass, - DatasetUsageStatisticsClass, -) -from datahub.emitter.kafka_emitter import DatahubKafkaEmitter -from datahub.emitter.rest_emitter import DatahubRestEmitter - -usageStats = DatasetUsageStatisticsClass( - timestampMillis=1629840771000, - uniqueUserCount=10, - totalSqlQueries=20, - fieldCounts=[ - DatasetFieldUsageCountsClass( - fieldPath="col1", - count=10 - ) - ] - ) - -mcpw = MetadataChangeProposalWrapper( - entityType="dataset", - aspectName="datasetUsageStatistics", - changeType=ChangeTypeClass.UPSERT, - entityUrn="urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD)", - aspect=usageStats, -) - -# Instantiate appropriate emitter (kafk_emitter/rest_emitter) -my_emitter = DatahubKafkaEmitter("""""") -my_emitter.emit(mcpw) -``` - -###### Performing an aggregation on a Timeseries aspect. - -Aggreations on timeseries aspects can be performed by the GMS REST API for `/analytics?action=getTimeseriesStats` which -accepts the following params. -* `entityName` - The name of the entity the aspect is associated with. -* `aspectName` - The name of the aspect. -* `filter` - Any pre-filtering criteria before grouping and aggregations are performed. -* `metrics` - A list of aggregation specification. The `fieldPath` member of an aggregation specification refers to the - field name against which the aggregation needs to be performed, and the `aggregationType` specifies the kind of aggregation. -* `buckets` - A list of grouping bucket specifications. Each grouping bucket has a `key` field that refers to the field - to use for grouping. The `type` field specifies the kind of grouping bucket. - -We support three kinds of aggregations that can be specified in an aggregation query on the Timeseries annotated fields. -The values that `aggregationType` can take are: - -* `LATEST`: The latest value of the field in each bucket. Supported for any type of field. -* `SUM`: The cumulative sum of the field in each bucket. Supported only for integral types. -* `CARDINALITY`: The number of unique values or the cardinality of the set in each bucket. Supported for string and - record types. - -We support two types of grouping for defining the buckets to perform aggregations against: - -* `DATE_GROUPING_BUCKET`: Allows for creating time-based buckets such as by second, minute, hour, day, week, month, - quarter, year etc. Should be used in conjunction with a timestamp field whose value is in milliseconds since *epoch*. - The `timeWindowSize` param specifies the date histogram bucket width. -* `STRING_GROUPING_BUCKET`: Allows for creating buckets grouped by the unique values of a field. Should always be used in - conjunction with a string type field. 
- -The API returns a generic SQL like table as the `table` member of the output that contains the results of -the `group-by/aggregate` query, in addition to echoing the input params. - -* `columnNames`: the names of the table columns. The group-by `key` names appear in the same order as they are specified - in the request. Aggregation specifications follow the grouping fields in the same order as specified in the request, - and will be named `_`. -* `columnTypes`: the data types of the columns. -* `rows`: the data values, each row corresponding to the respective bucket(s). - -Example: Latest unique user count for each day. -```shell -# QUERY -curl --location --request POST 'http://localhost:8080/analytics?action=getTimeseriesStats' \ ---header 'X-RestLi-Protocol-Version: 2.0.0' \ ---header 'Content-Type: application/json' \ ---data-raw '{ - "entityName": "dataset", - "aspectName": "datasetUsageStatistics", - "filter": { - "criteria": [] - }, - "metrics": [ - { - "fieldPath": "uniqueUserCount", - "aggregationType": "LATEST" - } - ], - "buckets": [ - { - "key": "timestampMillis", - "type": "DATE_GROUPING_BUCKET", - "timeWindowSize": { - "multiple": 1, - "unit": "DAY" - } - } - ] -}' - -# SAMPLE RESPOSNE -{ - "value": { - "filter": { - "criteria": [] - }, - "aspectName": "datasetUsageStatistics", - "entityName": "dataset", - "groupingBuckets": [ - { - "type": "DATE_GROUPING_BUCKET", - "timeWindowSize": { - "multiple": 1, - "unit": "DAY" - }, - "key": "timestampMillis" - } - ], - "aggregationSpecs": [ - { - "fieldPath": "uniqueUserCount", - "aggregationType": "LATEST" - } - ], - "table": { - "columnNames": [ - "timestampMillis", - "latest_uniqueUserCount" - ], - "rows": [ - [ - "1631491200000", - "1" - ] - ], - "columnTypes": [ - "long", - "int" - ] - } - } -} -``` -For more examples on the complex types of group-by/aggregations, refer to the tests in the group `getAggregatedStats` of [ElasticSearchTimeseriesAspectServiceTest.java](https://github.com/datahub-project/datahub/blob/master/metadata-io/src/test/java/com/linkedin/metadata/timeseries/elastic/ElasticSearchTimeseriesAspectServiceTest.java). 
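
For completeness, the same aggregation query can also be issued from Python. The sketch below mirrors the curl request above and simply re-shapes the returned `table` into a list of row dictionaries; it assumes the `requests` library and a Metadata Service running at `http://localhost:8080`.

```python
import requests

GMS = "http://localhost:8080"  # assumed local Metadata Service

# Same query as the curl example above: latest unique user count per day.
payload = {
    "entityName": "dataset",
    "aspectName": "datasetUsageStatistics",
    "filter": {"criteria": []},
    "metrics": [{"fieldPath": "uniqueUserCount", "aggregationType": "LATEST"}],
    "buckets": [
        {
            "key": "timestampMillis",
            "type": "DATE_GROUPING_BUCKET",
            "timeWindowSize": {"multiple": 1, "unit": "DAY"},
        }
    ],
}

resp = requests.post(
    f"{GMS}/analytics?action=getTimeseriesStats",
    headers={"X-RestLi-Protocol-Version": "2.0.0"},
    json=payload,
)
resp.raise_for_status()

# The response embeds a generic SQL-like table: columnNames, columnTypes, rows.
table = resp.json()["value"]["table"]
for row in table["rows"]:
    print(dict(zip(table["columnNames"], row)))
```
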
- - - diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/hrf.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/hrf.py deleted file mode 100644 index 242d790eb1b83e75cf6b7eaa7a35c674099311ad..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/hrf.py +++ /dev/null @@ -1,59 +0,0 @@ -# dataset settings -dataset_type = 'HRFDataset' -data_root = 'data/HRF' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -img_scale = (2336, 3504) -crop_size = (256, 256) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=40000, - dataset=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/training', - ann_dir='annotations/training', - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline)) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/evaluation/bbox_overlaps.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/evaluation/bbox_overlaps.py deleted file mode 100644 index 93559ea0f25369d552a5365312fa32b9ffec9226..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/evaluation/bbox_overlaps.py +++ /dev/null @@ -1,48 +0,0 @@ -import numpy as np - - -def bbox_overlaps(bboxes1, bboxes2, mode='iou', eps=1e-6): - """Calculate the ious between each bbox of bboxes1 and bboxes2. 
- - Args: - bboxes1(ndarray): shape (n, 4) - bboxes2(ndarray): shape (k, 4) - mode(str): iou (intersection over union) or iof (intersection - over foreground) - - Returns: - ious(ndarray): shape (n, k) - """ - - assert mode in ['iou', 'iof'] - - bboxes1 = bboxes1.astype(np.float32) - bboxes2 = bboxes2.astype(np.float32) - rows = bboxes1.shape[0] - cols = bboxes2.shape[0] - ious = np.zeros((rows, cols), dtype=np.float32) - if rows * cols == 0: - return ious - exchange = False - if bboxes1.shape[0] > bboxes2.shape[0]: - bboxes1, bboxes2 = bboxes2, bboxes1 - ious = np.zeros((cols, rows), dtype=np.float32) - exchange = True - area1 = (bboxes1[:, 2] - bboxes1[:, 0]) * (bboxes1[:, 3] - bboxes1[:, 1]) - area2 = (bboxes2[:, 2] - bboxes2[:, 0]) * (bboxes2[:, 3] - bboxes2[:, 1]) - for i in range(bboxes1.shape[0]): - x_start = np.maximum(bboxes1[i, 0], bboxes2[:, 0]) - y_start = np.maximum(bboxes1[i, 1], bboxes2[:, 1]) - x_end = np.minimum(bboxes1[i, 2], bboxes2[:, 2]) - y_end = np.minimum(bboxes1[i, 3], bboxes2[:, 3]) - overlap = np.maximum(x_end - x_start, 0) * np.maximum( - y_end - y_start, 0) - if mode == 'iou': - union = area1[i] + area2 - overlap - else: - union = area1[i] if not exchange else area2 - union = np.maximum(union, eps) - ious[i, :] = overlap / union - if exchange: - ious = ious.T - return ious diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/xml_style.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/xml_style.py deleted file mode 100644 index 71069488b0f6da3b37e588228f44460ce5f00679..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/xml_style.py +++ /dev/null @@ -1,170 +0,0 @@ -import os.path as osp -import xml.etree.ElementTree as ET - -import mmcv -import numpy as np -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class XMLDataset(CustomDataset): - """XML dataset for detection. - - Args: - min_size (int | float, optional): The minimum size of bounding - boxes in the images. If the size of a bounding box is less than - ``min_size``, it would be add to ignored field. - """ - - def __init__(self, min_size=None, **kwargs): - assert self.CLASSES or kwargs.get( - 'classes', None), 'CLASSES in `XMLDataset` can not be None.' - super(XMLDataset, self).__init__(**kwargs) - self.cat2label = {cat: i for i, cat in enumerate(self.CLASSES)} - self.min_size = min_size - - def load_annotations(self, ann_file): - """Load annotation from XML style ann_file. - - Args: - ann_file (str): Path of XML file. - - Returns: - list[dict]: Annotation info from XML file. 
- """ - - data_infos = [] - img_ids = mmcv.list_from_file(ann_file) - for img_id in img_ids: - filename = f'JPEGImages/{img_id}.jpg' - xml_path = osp.join(self.img_prefix, 'Annotations', - f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - size = root.find('size') - if size is not None: - width = int(size.find('width').text) - height = int(size.find('height').text) - else: - img_path = osp.join(self.img_prefix, 'JPEGImages', - '{}.jpg'.format(img_id)) - img = Image.open(img_path) - width, height = img.size - data_infos.append( - dict(id=img_id, filename=filename, width=width, height=height)) - - return data_infos - - def _filter_imgs(self, min_size=32): - """Filter images too small or without annotation.""" - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if min(img_info['width'], img_info['height']) < min_size: - continue - if self.filter_empty_gt: - img_id = img_info['id'] - xml_path = osp.join(self.img_prefix, 'Annotations', - f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - for obj in root.findall('object'): - name = obj.find('name').text - if name in self.CLASSES: - valid_inds.append(i) - break - else: - valid_inds.append(i) - return valid_inds - - def get_ann_info(self, idx): - """Get annotation from XML file by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - img_id = self.data_infos[idx]['id'] - xml_path = osp.join(self.img_prefix, 'Annotations', f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - bboxes = [] - labels = [] - bboxes_ignore = [] - labels_ignore = [] - for obj in root.findall('object'): - name = obj.find('name').text - if name not in self.CLASSES: - continue - label = self.cat2label[name] - difficult = obj.find('difficult') - difficult = 0 if difficult is None else int(difficult.text) - bnd_box = obj.find('bndbox') - # TODO: check whether it is necessary to use int - # Coordinates may be float type - bbox = [ - int(float(bnd_box.find('xmin').text)), - int(float(bnd_box.find('ymin').text)), - int(float(bnd_box.find('xmax').text)), - int(float(bnd_box.find('ymax').text)) - ] - ignore = False - if self.min_size: - assert not self.test_mode - w = bbox[2] - bbox[0] - h = bbox[3] - bbox[1] - if w < self.min_size or h < self.min_size: - ignore = True - if difficult or ignore: - bboxes_ignore.append(bbox) - labels_ignore.append(label) - else: - bboxes.append(bbox) - labels.append(label) - if not bboxes: - bboxes = np.zeros((0, 4)) - labels = np.zeros((0, )) - else: - bboxes = np.array(bboxes, ndmin=2) - 1 - labels = np.array(labels) - if not bboxes_ignore: - bboxes_ignore = np.zeros((0, 4)) - labels_ignore = np.zeros((0, )) - else: - bboxes_ignore = np.array(bboxes_ignore, ndmin=2) - 1 - labels_ignore = np.array(labels_ignore) - ann = dict( - bboxes=bboxes.astype(np.float32), - labels=labels.astype(np.int64), - bboxes_ignore=bboxes_ignore.astype(np.float32), - labels_ignore=labels_ignore.astype(np.int64)) - return ann - - def get_cat_ids(self, idx): - """Get category ids in XML file by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. 
- """ - - cat_ids = [] - img_id = self.data_infos[idx]['id'] - xml_path = osp.join(self.img_prefix, 'Annotations', f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - for obj in root.findall('object'): - name = obj.find('name').text - if name not in self.CLASSES: - continue - label = self.cat2label[name] - cat_ids.append(label) - - return cat_ids diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/base_dense_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/base_dense_head.py deleted file mode 100644 index de11e4a2197b1dfe241ce7a66daa1907a8fc5661..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/base_dense_head.py +++ /dev/null @@ -1,59 +0,0 @@ -from abc import ABCMeta, abstractmethod - -import torch.nn as nn - - -class BaseDenseHead(nn.Module, metaclass=ABCMeta): - """Base class for DenseHeads.""" - - def __init__(self): - super(BaseDenseHead, self).__init__() - - @abstractmethod - def loss(self, **kwargs): - """Compute losses of the head.""" - pass - - @abstractmethod - def get_bboxes(self, **kwargs): - """Transform network output for a batch into bbox predictions.""" - pass - - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """ - Args: - x (list[Tensor]): Features from FPN. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - - Returns: - tuple: - losses: (dict[str, Tensor]): A dictionary of loss components. - proposal_list (list[Tensor]): Proposals of each image. - """ - outs = self(x) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - if proposal_cfg is None: - return losses - else: - proposal_list = self.get_bboxes(*outs, img_metas, cfg=proposal_cfg) - return losses, proposal_list diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/__init__.py deleted file mode 100644 index 7246c897430f0cc7ce12719ad8608824fc734446..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/__init__.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .alexnet import AlexNet -# yapf: disable -from .bricks import (ACTIVATION_LAYERS, CONV_LAYERS, NORM_LAYERS, - PADDING_LAYERS, PLUGIN_LAYERS, UPSAMPLE_LAYERS, - ContextBlock, Conv2d, Conv3d, ConvAWS2d, ConvModule, - ConvTranspose2d, ConvTranspose3d, ConvWS2d, - DepthwiseSeparableConvModule, GeneralizedAttention, - HSigmoid, HSwish, Linear, MaxPool2d, MaxPool3d, - NonLocal1d, NonLocal2d, NonLocal3d, Scale, Swish, - build_activation_layer, build_conv_layer, - build_norm_layer, build_padding_layer, build_plugin_layer, - build_upsample_layer, conv_ws_2d, is_norm) -from .builder import MODELS, build_model_from_cfg -# yapf: enable -from .resnet import ResNet, make_res_layer -from .utils import (INITIALIZERS, Caffe2XavierInit, ConstantInit, KaimingInit, - NormalInit, PretrainedInit, TruncNormalInit, UniformInit, - XavierInit, bias_init_with_prob, caffe2_xavier_init, - constant_init, fuse_conv_bn, get_model_complexity_info, - initialize, kaiming_init, normal_init, trunc_normal_init, - uniform_init, xavier_init) -from .vgg import VGG, make_vgg_layer - -__all__ = [ - 'AlexNet', 'VGG', 'make_vgg_layer', 'ResNet', 'make_res_layer', - 'constant_init', 'xavier_init', 'normal_init', 'trunc_normal_init', - 'uniform_init', 'kaiming_init', 'caffe2_xavier_init', - 'bias_init_with_prob', 'ConvModule', 'build_activation_layer', - 'build_conv_layer', 'build_norm_layer', 'build_padding_layer', - 'build_upsample_layer', 'build_plugin_layer', 'is_norm', 'NonLocal1d', - 'NonLocal2d', 'NonLocal3d', 'ContextBlock', 'HSigmoid', 'Swish', 'HSwish', - 'GeneralizedAttention', 'ACTIVATION_LAYERS', 'CONV_LAYERS', 'NORM_LAYERS', - 'PADDING_LAYERS', 'UPSAMPLE_LAYERS', 'PLUGIN_LAYERS', 'Scale', - 'get_model_complexity_info', 'conv_ws_2d', 'ConvAWS2d', 'ConvWS2d', - 'fuse_conv_bn', 'DepthwiseSeparableConvModule', 'Linear', 'Conv2d', - 'ConvTranspose2d', 'MaxPool2d', 'ConvTranspose3d', 'MaxPool3d', 'Conv3d', - 'initialize', 'INITIALIZERS', 'ConstantInit', 'XavierInit', 'NormalInit', - 'TruncNormalInit', 'UniformInit', 'KaimingInit', 'PretrainedInit', - 'Caffe2XavierInit', 'MODELS', 'build_model_from_cfg' -] diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/parallel/distributed_deprecated.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/parallel/distributed_deprecated.py deleted file mode 100644 index 676937a2085d4da20fa87923041a200fca6214eb..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/parallel/distributed_deprecated.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -import torch.distributed as dist -import torch.nn as nn -from torch._utils import (_flatten_dense_tensors, _take_tensors, - _unflatten_dense_tensors) - -from annotator.uniformer.mmcv.utils import TORCH_VERSION, digit_version -from .registry import MODULE_WRAPPERS -from .scatter_gather import scatter_kwargs - - -@MODULE_WRAPPERS.register_module() -class MMDistributedDataParallel(nn.Module): - - def __init__(self, - module, - dim=0, - broadcast_buffers=True, - bucket_cap_mb=25): - super(MMDistributedDataParallel, self).__init__() - self.module = module - self.dim = dim - self.broadcast_buffers = broadcast_buffers - - self.broadcast_bucket_size = bucket_cap_mb * 1024 * 1024 - self._sync_params() - - def _dist_broadcast_coalesced(self, tensors, buffer_size): - for tensors in _take_tensors(tensors, buffer_size): - flat_tensors = _flatten_dense_tensors(tensors) - dist.broadcast(flat_tensors, 0) - for tensor, synced in zip( - tensors, _unflatten_dense_tensors(flat_tensors, tensors)): - tensor.copy_(synced) - - def _sync_params(self): - module_states = list(self.module.state_dict().values()) - if len(module_states) > 0: - self._dist_broadcast_coalesced(module_states, - self.broadcast_bucket_size) - if self.broadcast_buffers: - if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) < digit_version('1.0')): - buffers = [b.data for b in self.module._all_buffers()] - else: - buffers = [b.data for b in self.module.buffers()] - if len(buffers) > 0: - self._dist_broadcast_coalesced(buffers, - self.broadcast_bucket_size) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def forward(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - return self.module(*inputs[0], **kwargs[0]) - - def train_step(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.train_step(*inputs[0], **kwargs[0]) - return output - - def val_step(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.val_step(*inputs[0], **kwargs[0]) - return output diff --git a/spaces/akhaliq/Music_Source_Separation/bytesep/models/conditional_unet.py b/spaces/akhaliq/Music_Source_Separation/bytesep/models/conditional_unet.py deleted file mode 100644 index 1e925c11308b04ba195db83b08c2718930b1b4c6..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Music_Source_Separation/bytesep/models/conditional_unet.py +++ /dev/null @@ -1,496 +0,0 @@ -import math -from typing import List - -import numpy as np -import matplotlib.pyplot as plt -import pytorch_lightning as pl -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.optim as optim -from torch.optim.lr_scheduler import LambdaLR -from torchlibrosa.stft import STFT, ISTFT, magphase - -from bytesep.models.pytorch_modules import ( - Base, - init_bn, - init_embedding, - init_layer, - act, - Subband, -) - - -class ConvBlock(nn.Module): - def __init__( - self, - in_channels, - out_channels, - condition_size, - kernel_size, - activation, - momentum, - ): - super(ConvBlock, self).__init__() - - self.activation = activation - padding = (kernel_size[0] // 2, kernel_size[1] // 2) - - self.conv1 = nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=(1, 1), - dilation=(1, 1), - padding=padding, - bias=False, - ) - - self.bn1 = 
nn.BatchNorm2d(out_channels, momentum=momentum) - - self.conv2 = nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=kernel_size, - stride=(1, 1), - dilation=(1, 1), - padding=padding, - bias=False, - ) - - self.bn2 = nn.BatchNorm2d(out_channels, momentum=momentum) - - self.beta1 = nn.Linear(condition_size, out_channels, bias=True) - self.beta2 = nn.Linear(condition_size, out_channels, bias=True) - - self.init_weights() - - def init_weights(self): - init_layer(self.conv1) - init_layer(self.conv2) - init_bn(self.bn1) - init_bn(self.bn2) - init_embedding(self.beta1) - init_embedding(self.beta2) - - def forward(self, x, condition): - - b1 = self.beta1(condition)[:, :, None, None] - b2 = self.beta2(condition)[:, :, None, None] - - x = act(self.bn1(self.conv1(x)) + b1, self.activation) - x = act(self.bn2(self.conv2(x)) + b2, self.activation) - return x - - -class EncoderBlock(nn.Module): - def __init__( - self, - in_channels, - out_channels, - condition_size, - kernel_size, - downsample, - activation, - momentum, - ): - super(EncoderBlock, self).__init__() - - self.conv_block = ConvBlock( - in_channels, out_channels, condition_size, kernel_size, activation, momentum - ) - self.downsample = downsample - - def forward(self, x, condition): - encoder = self.conv_block(x, condition) - encoder_pool = F.avg_pool2d(encoder, kernel_size=self.downsample) - return encoder_pool, encoder - - -class DecoderBlock(nn.Module): - def __init__( - self, - in_channels, - out_channels, - condition_size, - kernel_size, - upsample, - activation, - momentum, - ): - super(DecoderBlock, self).__init__() - self.kernel_size = kernel_size - self.stride = upsample - self.activation = activation - - self.conv1 = torch.nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=self.stride, - stride=self.stride, - padding=(0, 0), - bias=False, - dilation=(1, 1), - ) - - self.bn1 = nn.BatchNorm2d(out_channels, momentum=momentum) - - self.conv_block2 = ConvBlock( - out_channels * 2, - out_channels, - condition_size, - kernel_size, - activation, - momentum, - ) - - self.beta1 = nn.Linear(condition_size, out_channels, bias=True) - - self.init_weights() - - def init_weights(self): - init_layer(self.conv1) - init_bn(self.bn1) - init_embedding(self.beta1) - - def forward(self, input_tensor, concat_tensor, condition): - b1 = self.beta1(condition)[:, :, None, None] - x = act(self.bn1(self.conv1(input_tensor)) + b1, self.activation) - x = torch.cat((x, concat_tensor), dim=1) - x = self.conv_block2(x, condition) - return x - - -class ConditionalUNet(nn.Module, Base): - def __init__(self, input_channels, target_sources_num): - super(ConditionalUNet, self).__init__() - - self.input_channels = input_channels - condition_size = target_sources_num - self.output_sources_num = 1 - - window_size = 2048 - hop_size = 441 - center = True - pad_mode = "reflect" - window = "hann" - activation = "relu" - momentum = 0.01 - - self.subbands_num = 4 - self.K = 3 # outputs: |M|, cos∠M, sin∠M - - self.downsample_ratio = 2 ** 6 # This number equals 2^{#encoder_blcoks} - - self.stft = STFT( - n_fft=window_size, - hop_length=hop_size, - win_length=window_size, - window=window, - center=center, - pad_mode=pad_mode, - freeze_parameters=True, - ) - - self.istft = ISTFT( - n_fft=window_size, - hop_length=hop_size, - win_length=window_size, - window=window, - center=center, - pad_mode=pad_mode, - freeze_parameters=True, - ) - - self.bn0 = nn.BatchNorm2d(window_size // 2 + 1, momentum=momentum) - - 
self.subband = Subband(subbands_num=self.subbands_num) - - self.encoder_block1 = EncoderBlock( - in_channels=input_channels * self.subbands_num, - out_channels=32, - condition_size=condition_size, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block2 = EncoderBlock( - in_channels=32, - out_channels=64, - condition_size=condition_size, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block3 = EncoderBlock( - in_channels=64, - out_channels=128, - condition_size=condition_size, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block4 = EncoderBlock( - in_channels=128, - out_channels=256, - condition_size=condition_size, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block5 = EncoderBlock( - in_channels=256, - out_channels=384, - condition_size=condition_size, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.encoder_block6 = EncoderBlock( - in_channels=384, - out_channels=384, - condition_size=condition_size, - kernel_size=(3, 3), - downsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.conv_block7 = ConvBlock( - in_channels=384, - out_channels=384, - condition_size=condition_size, - kernel_size=(3, 3), - activation=activation, - momentum=momentum, - ) - self.decoder_block1 = DecoderBlock( - in_channels=384, - out_channels=384, - condition_size=condition_size, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block2 = DecoderBlock( - in_channels=384, - out_channels=384, - condition_size=condition_size, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block3 = DecoderBlock( - in_channels=384, - out_channels=256, - condition_size=condition_size, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block4 = DecoderBlock( - in_channels=256, - out_channels=128, - condition_size=condition_size, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block5 = DecoderBlock( - in_channels=128, - out_channels=64, - condition_size=condition_size, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - self.decoder_block6 = DecoderBlock( - in_channels=64, - out_channels=32, - condition_size=condition_size, - kernel_size=(3, 3), - upsample=(2, 2), - activation=activation, - momentum=momentum, - ) - - self.after_conv_block1 = ConvBlock( - in_channels=32, - out_channels=32, - condition_size=condition_size, - kernel_size=(3, 3), - activation=activation, - momentum=momentum, - ) - - self.after_conv2 = nn.Conv2d( - in_channels=32, - out_channels=input_channels - * self.subbands_num - * self.output_sources_num - * self.K, - kernel_size=(1, 1), - stride=(1, 1), - padding=(0, 0), - bias=True, - ) - - self.init_weights() - - def init_weights(self): - init_bn(self.bn0) - init_layer(self.after_conv2) - - def feature_maps_to_wav(self, x, sp, sin_in, cos_in, audio_length): - - batch_size, _, time_steps, freq_bins = x.shape - - x = x.reshape( - batch_size, - self.output_sources_num, - self.input_channels, - self.K, - time_steps, - freq_bins, - ) - # x: (batch_size, output_sources_num, input_channles, K, time_steps, freq_bins) - - mask_mag = torch.sigmoid(x[:, :, :, 
0, :, :]) - _mask_real = torch.tanh(x[:, :, :, 1, :, :]) - _mask_imag = torch.tanh(x[:, :, :, 2, :, :]) - _, mask_cos, mask_sin = magphase(_mask_real, _mask_imag) - # mask_cos, mask_sin: (batch_size, output_sources_num, input_channles, time_steps, freq_bins) - - # Y = |Y|cos∠Y + j|Y|sin∠Y - # = |Y|cos(∠X + ∠M) + j|Y|sin(∠X + ∠M) - # = |Y|(cos∠X cos∠M - sin∠X sin∠M) + j|Y|(sin∠X cos∠M + cos∠X sin∠M) - out_cos = ( - cos_in[:, None, :, :, :] * mask_cos - sin_in[:, None, :, :, :] * mask_sin - ) - out_sin = ( - sin_in[:, None, :, :, :] * mask_cos + cos_in[:, None, :, :, :] * mask_sin - ) - # out_cos, out_sin: (batch_size, output_sources_num, input_channles, time_steps, freq_bins) - - # Calculate |Y|. - out_mag = F.relu_(sp[:, None, :, :, :] * mask_mag) - # out_mag: (batch_size, output_sources_num, input_channles, time_steps, freq_bins) - - # Calculate Y_{real} and Y_{imag} for ISTFT. - out_real = out_mag * out_cos - out_imag = out_mag * out_sin - # out_real, out_imag: (batch_size, output_sources_num, input_channles, time_steps, freq_bins) - - # Reformat shape to (n, 1, time_steps, freq_bins) for ISTFT. - shape = ( - batch_size * self.output_sources_num * self.input_channels, - 1, - time_steps, - freq_bins, - ) - out_real = out_real.reshape(shape) - out_imag = out_imag.reshape(shape) - - # ISTFT. - wav_out = self.istft(out_real, out_imag, audio_length) - # (batch_size * output_sources_num * input_channels, segments_num) - - # Reshape. - wav_out = wav_out.reshape( - batch_size, self.output_sources_num * self.input_channels, audio_length - ) - # (batch_size, output_sources_num * input_channels, segments_num) - - return wav_out - - def forward(self, input_dict): - """ - Args: - input: (batch_size, segment_samples, channels_num) - - Outputs: - output_dict: { - 'wav': (batch_size, segment_samples, channels_num), - 'sp': (batch_size, channels_num, time_steps, freq_bins)} - """ - - mixture = input_dict['waveform'] - condition = input_dict['condition'] - - sp, cos_in, sin_in = self.wav_to_spectrogram_phase(mixture) - """(batch_size, channels_num, time_steps, freq_bins)""" - - # Batch normalization - x = sp.transpose(1, 3) - x = self.bn0(x) - x = x.transpose(1, 3) - """(batch_size, chanenls, time_steps, freq_bins)""" - - # Pad spectrogram to be evenly divided by downsample ratio. 
- origin_len = x.shape[2] - pad_len = ( - int(np.ceil(x.shape[2] / self.downsample_ratio)) * self.downsample_ratio - - origin_len - ) - x = F.pad(x, pad=(0, 0, 0, pad_len)) - """(batch_size, channels, padded_time_steps, freq_bins)""" - - # Let frequency bins be evenly divided by 2, e.g., 513 -> 512 - x = x[..., 0 : x.shape[-1] - 1] # (bs, channels, T, F) - - x = self.subband.analysis(x) - - # UNet - (x1_pool, x1) = self.encoder_block1( - x, condition - ) # x1_pool: (bs, 32, T / 2, F / 2) - (x2_pool, x2) = self.encoder_block2( - x1_pool, condition - ) # x2_pool: (bs, 64, T / 4, F / 4) - (x3_pool, x3) = self.encoder_block3( - x2_pool, condition - ) # x3_pool: (bs, 128, T / 8, F / 8) - (x4_pool, x4) = self.encoder_block4( - x3_pool, condition - ) # x4_pool: (bs, 256, T / 16, F / 16) - (x5_pool, x5) = self.encoder_block5( - x4_pool, condition - ) # x5_pool: (bs, 512, T / 32, F / 32) - (x6_pool, x6) = self.encoder_block6( - x5_pool, condition - ) # x6_pool: (bs, 1024, T / 64, F / 64) - x_center = self.conv_block7(x6_pool, condition) # (bs, 2048, T / 64, F / 64) - x7 = self.decoder_block1(x_center, x6, condition) # (bs, 1024, T / 32, F / 32) - x8 = self.decoder_block2(x7, x5, condition) # (bs, 512, T / 16, F / 16) - x9 = self.decoder_block3(x8, x4, condition) # (bs, 256, T / 8, F / 8) - x10 = self.decoder_block4(x9, x3, condition) # (bs, 128, T / 4, F / 4) - x11 = self.decoder_block5(x10, x2, condition) # (bs, 64, T / 2, F / 2) - x12 = self.decoder_block6(x11, x1, condition) # (bs, 32, T, F) - x = self.after_conv_block1(x12, condition) # (bs, 32, T, F) - x = self.after_conv2(x) - # (batch_size, input_channles * subbands_num * targets_num * k, T, F // subbands_num) - - x = self.subband.synthesis(x) - # (batch_size, input_channles * targets_num * K, T, F) - - # Recover shape - x = F.pad(x, pad=(0, 1)) # Pad frequency, e.g., 1024 -> 1025. - x = x[:, :, 0:origin_len, :] # (bs, feature_maps, T, F) - - audio_length = mixture.shape[2] - - separated_audio = self.feature_maps_to_wav(x, sp, sin_in, cos_in, audio_length) - # separated_audio: (batch_size, output_sources_num * input_channels, segments_num) - - output_dict = {'waveform': separated_audio} - - return output_dict diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/utils/split_data.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/utils/split_data.sh deleted file mode 100644 index baf97b6664c37b714213bafd0260bd7aa600b69a..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/utils/split_data.sh +++ /dev/null @@ -1,104 +0,0 @@ -#!/bin/bash - -# Split data direcoty into two data direcotries - -# Copyright 2019 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -# shellcheck disable=SC1091 -. ./path.sh || exit 1; - -shuffle=false -num_first=0 -num_second=0 - -# shellcheck disable=SC1091 -. utils/parse_options.sh || exit 1; - -if [ $# -ne 3 ]; then - echo "Usage: $0 ..." - echo "e.g.: $0 data/all data/train data/deveval" - echo "" - echo "Options:" - echo " --shuffle: Whether to perform shuffle (default=false)." - echo " --num_first: Number of utts in the first dist dir." - echo " If set to 0, it will be automatically decided (default=0)." - echo " --num_second: Number of utts in the second dist dir." - echo " If set to 0, it will be automatically decided (default=0)." 
- exit 1 -fi - -set -eu - -src_dir=$1 -first_dist_dir=$2 -second_dist_dir=$3 - -src_scp=${src_dir}/wav.scp -if [ -e "${src_dir}/segments" ]; then - has_segments=true - src_segments=${src_dir}/segments - num_src_utts=$(wc -l < "${src_segments}") -else - has_segments=false - num_src_utts=$(wc -l < "${src_scp}") -fi - -# check number of utts -if [ "${num_first}" -eq 0 ] && [ "${num_second}" -eq 0 ]; then - num_first=$((num_src_utts / 2 )) - num_second=$((num_src_utts - num_first)) -elif [ "${num_first}" -gt 0 ] && [ "${num_second}" -eq 0 ]; then - [ "${num_src_utts}" -le "${num_first}" ] && \ - echo "ERROR: num_first must be less than # utts in src. (${num_first} vs ${num_src_utts})" >&2 && \ - exit 1 - num_second=$((num_src_utts - num_first)) -elif [ "${num_first}" -eq 0 ] && [ "${num_second}" -gt 0 ]; then - [ "${num_src_utts}" -le "${num_second}" ] && \ - echo "ERROR: num_second must be less than # utts in src. (${num_second} vs ${num_src_utts})" >&2 && \ - exit 1 - num_first=$((num_src_utts - num_second)) -elif [ "${num_first}" -gt 0 ] && [ "${num_second}" -gt 0 ]; then - [ "${num_src_utts}" -ne "$((num_first + num_second))" ] && \ - echo "ERROR: num_first + num_second must be the same # utts in src. ($((num_first + num_second)) vs ${num_src_utts})" >&2 && \ - exit 1 -fi - -# check directory existence -[ ! -e "${first_dist_dir}" ] && mkdir -p "${first_dist_dir}" -[ ! -e "${second_dist_dir}" ] && mkdir -p "${second_dist_dir}" - -# split -if ! "${has_segments}"; then - if "${shuffle}"; then - sort -R "${src_scp}" > "${src_scp}.unsorted" - head -n "${num_first}" "${src_scp}.unsorted" | sort > "${first_dist_dir}/wav.scp" - tail -n "${num_second}" "${src_scp}.unsorted" | sort > "${second_dist_dir}/wav.scp" - rm "${src_scp}.unsorted" - else - head -n "${num_first}" "${src_scp}" | sort > "${first_dist_dir}/wav.scp" - tail -n "${num_second}" "${src_scp}" | sort > "${second_dist_dir}/wav.scp" - fi -else - # split segments at first - if "${shuffle}"; then - sort -R "${src_segments}" > "${src_segments}.unsorted" - head -n "${num_first}" "${src_segments}.unsorted" | sort > "${first_dist_dir}/segments" - tail -n "${num_second}" "${src_segments}.unsorted" | sort > "${second_dist_dir}/segments" - rm "${src_segments}.unsorted" - else - head -n "${num_first}" "${src_segments}" | sort > "${first_dist_dir}/segments" - tail -n "${num_second}" "${src_segments}" | sort > "${second_dist_dir}/segments" - fi - # split wav.scp - rm -rf "${first_dist_dir}/wav.scp" - awk '{print $2}' < "${first_dist_dir}/segments" | sort | uniq | while read -r wav_id; do - grep "^${wav_id} " < "${src_scp}" >> "${first_dist_dir}/wav.scp" - done - rm -rf "${second_dist_dir}/wav.scp" - awk '{print $2}' < "${second_dist_dir}/segments" | sort | uniq | while read -r wav_id; do - grep "^${wav_id} " < "${src_scp}" >> "${second_dist_dir}/wav.scp" - done -fi - -echo "Successfully split data directory." diff --git a/spaces/akhaliq/deeplab2/data/build_dvps_data.py b/spaces/akhaliq/deeplab2/data/build_dvps_data.py deleted file mode 100644 index 7057aae62cb23d8571e7c65f5bb3bf789a02b2f2..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/data/build_dvps_data.py +++ /dev/null @@ -1,264 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -r"""Converts Depth-aware Video Panoptic Segmentation (DVPS) data to sharded TFRecord file format with tf.train.Example protos. - -The expected directory structure of the DVPS dataset should be as follows: - - + DVPS_ROOT - + train | val - - ground-truth depth maps (*_depth.png) - - ground-truth panoptic maps (*_gtFine_instanceTrainIds.png) - - images (*_leftImg8bit.png) - + test - - images (*_leftImg8bit.png) - -The ground-truth panoptic map is encoded as the following in PNG format: - - panoptic ID = semantic ID * panoptic divisor (1000) + instance ID - - -The output Example proto contains the following fields: - - image/encoded: encoded image content. - image/filename: image filename. - image/format: image file format. - image/height: image height. - image/width: image width. - image/channels: image channels. - image/segmentation/class/encoded: encoded panoptic segmentation content. - image/segmentation/class/format: segmentation encoding format. - image/depth/encoded: encoded depth content. - image/depth/format: depth encoding format. - video/sequence_id: sequence ID of the frame. - video/frame_id: ID of the frame of the video sequence. - next_image/encoded: encoded next-frame image content. - next_image/segmentation/class/encoded: encoded panoptic segmentation content - of the next frame. - -The output panoptic segmentation map stored in the Example will be the raw bytes -of an int32 panoptic map, where each pixel is assigned to a panoptic ID: - - panoptic ID = semantic ID * panoptic divisor (1000) + instance ID - -where semantic ID will be the same with `category_id` for each segment, and -ignore label for pixels not belong to any segment. - -The depth map will be the raw bytes of an int32 depth map, where each pixel is: - - depth map = depth ground truth * 256 - -Example to run the scipt: - - python deeplab2/data/build_dvps_data.py \ - --dvps_root=${DVPS_ROOT} \ - --output_dir=${OUTPUT_DIR} -""" - -import math -import os - -from typing import Sequence, Tuple, Optional - -from absl import app -from absl import flags -from absl import logging -import numpy as np - -from PIL import Image - -import tensorflow as tf - -from deeplab2.data import data_utils - -FLAGS = flags.FLAGS - -flags.DEFINE_string('dvps_root', None, 'DVPS dataset root folder.') - -flags.DEFINE_string('output_dir', None, - 'Path to save converted TFRecord of TensorFlow examples.') - -_PANOPTIC_DEPTH_FORMAT = 'raw' -_NUM_SHARDS = 1000 -_TF_RECORD_PATTERN = '%s-%05d-of-%05d.tfrecord' -_IMAGE_SUFFIX = '_leftImg8bit.png' -_LABEL_SUFFIX = '_gtFine_instanceTrainIds.png' -_DEPTH_SUFFIX = '_depth.png' - - -def _get_image_info_from_path(image_path: str) -> Tuple[str, str]: - """Gets image info including sequence id and image id. - - Image path is in the format of '{sequence_id}_{image_id}_*.png', - where `sequence_id` refers to the id of the video sequence, and `image_id` is - the id of the image in the video sequence. - - Args: - image_path: Absolute path of the image. - - Returns: - sequence_id, and image_id as strings. 
- """ - image_path = os.path.basename(image_path) - return tuple(image_path.split('_')[:2]) - - -def _get_images(dvps_root: str, dataset_split: str) -> Sequence[str]: - """Gets files for the specified data type and dataset split. - - Args: - dvps_root: String, path to DVPS dataset root folder. - dataset_split: String, dataset split ('train', 'val', 'test'). - - Returns: - A list of sorted file names under dvps_root and dataset_split. - """ - search_files = os.path.join(dvps_root, dataset_split, '*' + _IMAGE_SUFFIX) - filenames = tf.io.gfile.glob(search_files) - return sorted(filenames) - - -def _decode_panoptic_or_depth_map(map_path: str) -> Optional[str]: - """Decodes the panoptic or depth map from encoded image file. - - Args: - map_path: Path to the panoptic or depth map image file. - - Returns: - Panoptic or depth map as an encoded int32 numpy array bytes or None if not - existing. - """ - if not tf.io.gfile.exists(map_path): - return None - with tf.io.gfile.GFile(map_path, 'rb') as f: - decoded_map = np.array(Image.open(f)).astype(np.int32) - return decoded_map.tobytes() - - -def _get_next_frame_path(image_path: str) -> Optional[str]: - """Gets next frame path. - - If not exists, return None. - - The files are named {sequence_id}_{frame_id}*. To get the path of the next - frame, this function keeps sequence_id and increase the frame_id by 1. It - finds all the files matching this pattern, and returns the corresponding - file path matching the input type. - - Args: - image_path: String, path to the image. - - Returns: - A string for the path of the next frame of the given image path or None if - the given image path is the last frame of the sequence. - """ - sequence_id, image_id = _get_image_info_from_path(image_path) - next_image_id = '{:06d}'.format(int(image_id) + 1) - next_image_name = sequence_id + '_' + next_image_id - next_image_path = None - for suffix in (_IMAGE_SUFFIX, _LABEL_SUFFIX): - if image_path.endswith(suffix): - next_image_path = os.path.join( - os.path.dirname(image_path), next_image_name + suffix) - if not tf.io.gfile.exists(next_image_path): - return None - return next_image_path - - -def _create_tfexample(image_path: str, panoptic_map_path: str, - depth_map_path: str) -> Optional[tf.train.Example]: - """Creates a TF example for each image. - - Args: - image_path: Path to the image. - panoptic_map_path: Path to the panoptic map (as an image file). - depth_map_path: Path to the depth map (as an image file). - - Returns: - TF example proto. - """ - with tf.io.gfile.GFile(image_path, 'rb') as f: - image_data = f.read() - label_data = _decode_panoptic_or_depth_map(panoptic_map_path) - depth_data = _decode_panoptic_or_depth_map(depth_map_path) - image_name = os.path.basename(image_path) - image_format = image_name.split('.')[1].lower() - sequence_id, frame_id = _get_image_info_from_path(image_path) - next_image_data = None - next_label_data = None - # Next image. - next_image_path = _get_next_frame_path(image_path) - # If there is no next image, no examples will be created. - if next_image_path is None: - return None - with tf.io.gfile.GFile(next_image_path, 'rb') as f: - next_image_data = f.read() - # Next panoptic map. 
- next_panoptic_map_path = _get_next_frame_path(panoptic_map_path) - next_label_data = _decode_panoptic_or_depth_map(next_panoptic_map_path) - return data_utils.create_video_and_depth_tfexample( - image_data, - image_format, - image_name, - label_format=_PANOPTIC_DEPTH_FORMAT, - sequence_id=sequence_id, - image_id=frame_id, - label_data=label_data, - next_image_data=next_image_data, - next_label_data=next_label_data, - depth_data=depth_data, - depth_format=_PANOPTIC_DEPTH_FORMAT) - - -def _convert_dataset(dvps_root: str, dataset_split: str, output_dir: str): - """Converts the specified dataset split to TFRecord format. - - Args: - dvps_root: String, path to DVPS dataset root folder. - dataset_split: String, the dataset split (e.g., train, val, test). - output_dir: String, directory to write output TFRecords to. - """ - image_files = _get_images(dvps_root, dataset_split) - num_images = len(image_files) - - num_per_shard = int(math.ceil(len(image_files) / _NUM_SHARDS)) - - for shard_id in range(_NUM_SHARDS): - shard_filename = _TF_RECORD_PATTERN % (dataset_split, shard_id, _NUM_SHARDS) - output_filename = os.path.join(output_dir, shard_filename) - with tf.io.TFRecordWriter(output_filename) as tfrecord_writer: - start_idx = shard_id * num_per_shard - end_idx = min((shard_id + 1) * num_per_shard, num_images) - for i in range(start_idx, end_idx): - image_path = image_files[i] - panoptic_map_path = image_path.replace(_IMAGE_SUFFIX, _LABEL_SUFFIX) - depth_map_path = image_path.replace(_IMAGE_SUFFIX, _DEPTH_SUFFIX) - example = _create_tfexample(image_path, panoptic_map_path, - depth_map_path) - if example is not None: - tfrecord_writer.write(example.SerializeToString()) - - -def main(argv: Sequence[str]) -> None: - if len(argv) > 1: - raise app.UsageError('Too many command-line arguments.') - tf.io.gfile.makedirs(FLAGS.output_dir) - for dataset_split in ('train', 'val', 'test'): - logging.info('Starts to processing DVPS dataset split %s.', dataset_split) - _convert_dataset(FLAGS.dvps_root, dataset_split, FLAGS.output_dir) - - -if __name__ == '__main__': - app.run(main) diff --git a/spaces/akhaliq/deeplab2/data/data_utils_test.py b/spaces/akhaliq/deeplab2/data/data_utils_test.py deleted file mode 100644 index e87ba80eaa2f7099bff65f84d725dbbdcd99f161..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/data/data_utils_test.py +++ /dev/null @@ -1,94 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Tests for data_utils.""" - -import io -import numpy as np -from PIL import Image -import tensorflow as tf - -from deeplab2.data import data_utils - - -def _encode_png_image(image): - """Helper method to encode input image in PNG format.""" - buffer = io.BytesIO() - Image.fromarray(image).save(buffer, format='png') - return buffer.getvalue() - - -class DataUtilsTest(tf.test.TestCase): - - def _create_test_image(self, height, width): - rng = np.random.RandomState(319281498) - return rng.randint(0, 255, size=(height, width, 3), dtype=np.uint8) - - def test_encode_and_decode(self): - """Checks decode created tf.Example for semantic segmentation.""" - test_image_height = 20 - test_image_width = 15 - filename = 'dummy' - - image = self._create_test_image(test_image_height, test_image_width) - # Take the last channel as dummy label. - label = image[..., 0] - - example = data_utils.create_tfexample( - image_data=_encode_png_image(image), - image_format='png', filename=filename, - label_data=_encode_png_image(label), label_format='png') - - # Parse created example, expect getting identical results. - parser = data_utils.SegmentationDecoder(is_panoptic_dataset=False) - parsed_tensors = parser(example.SerializeToString()) - - self.assertIn('image', parsed_tensors) - self.assertIn('image_name', parsed_tensors) - self.assertIn('label', parsed_tensors) - self.assertEqual(filename, parsed_tensors['image_name']) - np.testing.assert_array_equal(image, parsed_tensors['image'].numpy()) - # Decoded label is a 3-D array with last dimension of 1. - decoded_label = parsed_tensors['label'].numpy() - np.testing.assert_array_equal(label, decoded_label[..., 0]) - - def test_encode_and_decode_panoptic(self): - test_image_height = 31 - test_image_width = 17 - filename = 'dummy' - - image = self._create_test_image(test_image_height, test_image_width) - # Create dummy panoptic label in np.int32 dtype. - label = np.dot(image.astype(np.int32), [1, 256, 256 * 256]).astype(np.int32) - example = data_utils.create_tfexample( - image_data=_encode_png_image(image), - image_format='png', filename=filename, - label_data=label.tostring(), label_format='raw') - - parser = data_utils.SegmentationDecoder(is_panoptic_dataset=True) - parsed_tensors = parser(example.SerializeToString()) - - self.assertIn('image', parsed_tensors) - self.assertIn('image_name', parsed_tensors) - self.assertIn('label', parsed_tensors) - self.assertEqual(filename, parsed_tensors['image_name']) - np.testing.assert_array_equal(image, parsed_tensors['image'].numpy()) - # Decoded label is a 3-D array with last dimension of 1. - decoded_label = parsed_tensors['label'].numpy() - np.testing.assert_array_equal(label, decoded_label[..., 0]) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/terminal256.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/terminal256.py deleted file mode 100644 index b5eab1400563dadbd4f5f7deb7c12c1c8c23e066..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/terminal256.py +++ /dev/null @@ -1,338 +0,0 @@ -""" - pygments.formatters.terminal256 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for 256-color terminal output with ANSI sequences. 
- - RGB-to-XTERM color conversion routines adapted from xterm256-conv - tool (http://frexx.de/xterm-256-notes/data/xterm256-conv2.tar.bz2) - by Wolfgang Frisch. - - Formatter version 1. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -# TODO: -# - Options to map style's bold/underline/italic/border attributes -# to some ANSI attrbutes (something like 'italic=underline') -# - An option to output "style RGB to xterm RGB/index" conversion table -# - An option to indicate that we are running in "reverse background" -# xterm. This means that default colors are white-on-black, not -# black-on-while, so colors like "white background" need to be converted -# to "white background, black foreground", etc... - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.console import codes -from pip._vendor.pygments.style import ansicolors - - -__all__ = ['Terminal256Formatter', 'TerminalTrueColorFormatter'] - - -class EscapeSequence: - def __init__(self, fg=None, bg=None, bold=False, underline=False, italic=False): - self.fg = fg - self.bg = bg - self.bold = bold - self.underline = underline - self.italic = italic - - def escape(self, attrs): - if len(attrs): - return "\x1b[" + ";".join(attrs) + "m" - return "" - - def color_string(self): - attrs = [] - if self.fg is not None: - if self.fg in ansicolors: - esc = codes[self.fg.replace('ansi','')] - if ';01m' in esc: - self.bold = True - # extract fg color code. - attrs.append(esc[2:4]) - else: - attrs.extend(("38", "5", "%i" % self.fg)) - if self.bg is not None: - if self.bg in ansicolors: - esc = codes[self.bg.replace('ansi','')] - # extract fg color code, add 10 for bg. - attrs.append(str(int(esc[2:4])+10)) - else: - attrs.extend(("48", "5", "%i" % self.bg)) - if self.bold: - attrs.append("01") - if self.underline: - attrs.append("04") - if self.italic: - attrs.append("03") - return self.escape(attrs) - - def true_color_string(self): - attrs = [] - if self.fg: - attrs.extend(("38", "2", str(self.fg[0]), str(self.fg[1]), str(self.fg[2]))) - if self.bg: - attrs.extend(("48", "2", str(self.bg[0]), str(self.bg[1]), str(self.bg[2]))) - if self.bold: - attrs.append("01") - if self.underline: - attrs.append("04") - if self.italic: - attrs.append("03") - return self.escape(attrs) - - def reset_string(self): - attrs = [] - if self.fg is not None: - attrs.append("39") - if self.bg is not None: - attrs.append("49") - if self.bold or self.underline or self.italic: - attrs.append("00") - return self.escape(attrs) - - -class Terminal256Formatter(Formatter): - """ - Format tokens with ANSI color sequences, for output in a 256-color - terminal or console. Like in `TerminalFormatter` color sequences - are terminated at newlines, so that paging the output works correctly. - - The formatter takes colors from a style defined by the `style` option - and converts them to nearest ANSI 256-color escape sequences. Bold and - underline attributes from the style are preserved (and displayed). - - .. versionadded:: 0.9 - - .. versionchanged:: 2.2 - If the used style defines foreground colors in the form ``#ansi*``, then - `Terminal256Formatter` will map these to non extended foreground color. - See :ref:`AnsiTerminalStyle` for more information. - - .. versionchanged:: 2.4 - The ANSI color names have been updated with names that are easier to - understand and align with colornames of other projects and terminals. - See :ref:`this table ` for more information. 
- - - Options accepted: - - `style` - The style to use, can be a string or a Style subclass (default: - ``'default'``). - - `linenos` - Set to ``True`` to have line numbers on the terminal output as well - (default: ``False`` = no line numbers). - """ - name = 'Terminal256' - aliases = ['terminal256', 'console256', '256'] - filenames = [] - - def __init__(self, **options): - Formatter.__init__(self, **options) - - self.xterm_colors = [] - self.best_match = {} - self.style_string = {} - - self.usebold = 'nobold' not in options - self.useunderline = 'nounderline' not in options - self.useitalic = 'noitalic' not in options - - self._build_color_table() # build an RGB-to-256 color conversion table - self._setup_styles() # convert selected style's colors to term. colors - - self.linenos = options.get('linenos', False) - self._lineno = 0 - - def _build_color_table(self): - # colors 0..15: 16 basic colors - - self.xterm_colors.append((0x00, 0x00, 0x00)) # 0 - self.xterm_colors.append((0xcd, 0x00, 0x00)) # 1 - self.xterm_colors.append((0x00, 0xcd, 0x00)) # 2 - self.xterm_colors.append((0xcd, 0xcd, 0x00)) # 3 - self.xterm_colors.append((0x00, 0x00, 0xee)) # 4 - self.xterm_colors.append((0xcd, 0x00, 0xcd)) # 5 - self.xterm_colors.append((0x00, 0xcd, 0xcd)) # 6 - self.xterm_colors.append((0xe5, 0xe5, 0xe5)) # 7 - self.xterm_colors.append((0x7f, 0x7f, 0x7f)) # 8 - self.xterm_colors.append((0xff, 0x00, 0x00)) # 9 - self.xterm_colors.append((0x00, 0xff, 0x00)) # 10 - self.xterm_colors.append((0xff, 0xff, 0x00)) # 11 - self.xterm_colors.append((0x5c, 0x5c, 0xff)) # 12 - self.xterm_colors.append((0xff, 0x00, 0xff)) # 13 - self.xterm_colors.append((0x00, 0xff, 0xff)) # 14 - self.xterm_colors.append((0xff, 0xff, 0xff)) # 15 - - # colors 16..232: the 6x6x6 color cube - - valuerange = (0x00, 0x5f, 0x87, 0xaf, 0xd7, 0xff) - - for i in range(217): - r = valuerange[(i // 36) % 6] - g = valuerange[(i // 6) % 6] - b = valuerange[i % 6] - self.xterm_colors.append((r, g, b)) - - # colors 233..253: grayscale - - for i in range(1, 22): - v = 8 + i * 10 - self.xterm_colors.append((v, v, v)) - - def _closest_color(self, r, g, b): - distance = 257*257*3 # "infinity" (>distance from #000000 to #ffffff) - match = 0 - - for i in range(0, 254): - values = self.xterm_colors[i] - - rd = r - values[0] - gd = g - values[1] - bd = b - values[2] - d = rd*rd + gd*gd + bd*bd - - if d < distance: - match = i - distance = d - return match - - def _color_index(self, color): - index = self.best_match.get(color, None) - if color in ansicolors: - # strip the `ansi/#ansi` part and look up code - index = color - self.best_match[color] = index - if index is None: - try: - rgb = int(str(color), 16) - except ValueError: - rgb = 0 - - r = (rgb >> 16) & 0xff - g = (rgb >> 8) & 0xff - b = rgb & 0xff - index = self._closest_color(r, g, b) - self.best_match[color] = index - return index - - def _setup_styles(self): - for ttype, ndef in self.style: - escape = EscapeSequence() - # get foreground from ansicolor if set - if ndef['ansicolor']: - escape.fg = self._color_index(ndef['ansicolor']) - elif ndef['color']: - escape.fg = self._color_index(ndef['color']) - if ndef['bgansicolor']: - escape.bg = self._color_index(ndef['bgansicolor']) - elif ndef['bgcolor']: - escape.bg = self._color_index(ndef['bgcolor']) - if self.usebold and ndef['bold']: - escape.bold = True - if self.useunderline and ndef['underline']: - escape.underline = True - if self.useitalic and ndef['italic']: - escape.italic = True - self.style_string[str(ttype)] = 
(escape.color_string(), - escape.reset_string()) - - def _write_lineno(self, outfile): - self._lineno += 1 - outfile.write("%s%04d: " % (self._lineno != 1 and '\n' or '', self._lineno)) - - def format(self, tokensource, outfile): - return Formatter.format(self, tokensource, outfile) - - def format_unencoded(self, tokensource, outfile): - if self.linenos: - self._write_lineno(outfile) - - for ttype, value in tokensource: - not_found = True - while ttype and not_found: - try: - # outfile.write( "<" + str(ttype) + ">" ) - on, off = self.style_string[str(ttype)] - - # Like TerminalFormatter, add "reset colors" escape sequence - # on newline. - spl = value.split('\n') - for line in spl[:-1]: - if line: - outfile.write(on + line + off) - if self.linenos: - self._write_lineno(outfile) - else: - outfile.write('\n') - - if spl[-1]: - outfile.write(on + spl[-1] + off) - - not_found = False - # outfile.write( '#' + str(ttype) + '#' ) - - except KeyError: - # ottype = ttype - ttype = ttype.parent - # outfile.write( '!' + str(ottype) + '->' + str(ttype) + '!' ) - - if not_found: - outfile.write(value) - - if self.linenos: - outfile.write("\n") - - - -class TerminalTrueColorFormatter(Terminal256Formatter): - r""" - Format tokens with ANSI color sequences, for output in a true-color - terminal or console. Like in `TerminalFormatter` color sequences - are terminated at newlines, so that paging the output works correctly. - - .. versionadded:: 2.1 - - Options accepted: - - `style` - The style to use, can be a string or a Style subclass (default: - ``'default'``). - """ - name = 'TerminalTrueColor' - aliases = ['terminal16m', 'console16m', '16m'] - filenames = [] - - def _build_color_table(self): - pass - - def _color_tuple(self, color): - try: - rgb = int(str(color), 16) - except ValueError: - return None - r = (rgb >> 16) & 0xff - g = (rgb >> 8) & 0xff - b = rgb & 0xff - return (r, g, b) - - def _setup_styles(self): - for ttype, ndef in self.style: - escape = EscapeSequence() - if ndef['color']: - escape.fg = self._color_tuple(ndef['color']) - if ndef['bgcolor']: - escape.bg = self._color_tuple(ndef['bgcolor']) - if self.usebold and ndef['bold']: - escape.bold = True - if self.useunderline and ndef['underline']: - escape.underline = True - if self.useitalic and ndef['italic']: - escape.italic = True - self.style_string[str(ttype)] = (escape.true_color_string(), - escape.reset_string()) diff --git a/spaces/aliabid94/AutoGPT/autogpt/commands/web_selenium.py b/spaces/aliabid94/AutoGPT/autogpt/commands/web_selenium.py deleted file mode 100644 index 11bdfeb1f1630fc6ff6f55d68e8d7233281c5098..0000000000000000000000000000000000000000 --- a/spaces/aliabid94/AutoGPT/autogpt/commands/web_selenium.py +++ /dev/null @@ -1,154 +0,0 @@ -"""Selenium web scraping module.""" -from __future__ import annotations - -import logging -from pathlib import Path -from sys import platform - -from bs4 import BeautifulSoup -from selenium import webdriver -from selenium.webdriver.chrome.options import Options as ChromeOptions -from selenium.webdriver.common.by import By -from selenium.webdriver.firefox.options import Options as FirefoxOptions -from selenium.webdriver.remote.webdriver import WebDriver -from selenium.webdriver.safari.options import Options as SafariOptions -from selenium.webdriver.support import expected_conditions as EC -from selenium.webdriver.support.wait import WebDriverWait -from webdriver_manager.chrome import ChromeDriverManager -from webdriver_manager.firefox import GeckoDriverManager - -import 
autogpt.processing.text as summary -from autogpt.config import Config -from autogpt.processing.html import extract_hyperlinks, format_hyperlinks - -FILE_DIR = Path(__file__).parent.parent -CFG = Config() - - -def browse_website(url: str, question: str) -> tuple[str, WebDriver]: - """Browse a website and return the answer and links to the user - - Args: - url (str): The url of the website to browse - question (str): The question asked by the user - - Returns: - Tuple[str, WebDriver]: The answer and links to the user and the webdriver - """ - driver, text = scrape_text_with_selenium(url) - add_header(driver) - summary_text = summary.summarize_text(url, text, question, driver) - links = scrape_links_with_selenium(driver, url) - - # Limit links to 5 - if len(links) > 5: - links = links[:5] - close_browser(driver) - return f"Answer gathered from website: {summary_text} \n \n Links: {links}", driver - - -def scrape_text_with_selenium(url: str) -> tuple[WebDriver, str]: - """Scrape text from a website using selenium - - Args: - url (str): The url of the website to scrape - - Returns: - Tuple[WebDriver, str]: The webdriver and the text scraped from the website - """ - logging.getLogger("selenium").setLevel(logging.CRITICAL) - - options_available = { - "chrome": ChromeOptions, - "safari": SafariOptions, - "firefox": FirefoxOptions, - } - - options = options_available[CFG.selenium_web_browser]() - options.add_argument( - "user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.5615.49 Safari/537.36" - ) - - if CFG.selenium_web_browser == "firefox": - driver = webdriver.Firefox( - executable_path=GeckoDriverManager().install(), options=options - ) - elif CFG.selenium_web_browser == "safari": - # Requires a bit more setup on the users end - # See https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari - driver = webdriver.Safari(options=options) - else: - if platform == "linux" or platform == "linux2": - options.add_argument("--disable-dev-shm-usage") - options.add_argument("--remote-debugging-port=9222") - - options.add_argument("--no-sandbox") - if CFG.selenium_headless: - options.add_argument("--headless") - options.add_argument("--disable-gpu") - - driver = webdriver.Chrome( - executable_path=ChromeDriverManager().install(), options=options - ) - driver.get(url) - - WebDriverWait(driver, 10).until( - EC.presence_of_element_located((By.TAG_NAME, "body")) - ) - - # Get the HTML content directly from the browser's DOM - page_source = driver.execute_script("return document.body.outerHTML;") - soup = BeautifulSoup(page_source, "html.parser") - - for script in soup(["script", "style"]): - script.extract() - - text = soup.get_text() - lines = (line.strip() for line in text.splitlines()) - chunks = (phrase.strip() for line in lines for phrase in line.split(" ")) - text = "\n".join(chunk for chunk in chunks if chunk) - return driver, text - - -def scrape_links_with_selenium(driver: WebDriver, url: str) -> list[str]: - """Scrape links from a website using selenium - - Args: - driver (WebDriver): The webdriver to use to scrape the links - - Returns: - List[str]: The links scraped from the website - """ - page_source = driver.page_source - soup = BeautifulSoup(page_source, "html.parser") - - for script in soup(["script", "style"]): - script.extract() - - hyperlinks = extract_hyperlinks(soup, url) - - return format_hyperlinks(hyperlinks) - - -def close_browser(driver: WebDriver) -> None: - """Close the browser - - Args: - driver 
(WebDriver): The webdriver to close - - Returns: - None - """ - driver.quit() - - -def add_header(driver: WebDriver) -> None: - """Add a header to the website - - Args: - driver (WebDriver): The webdriver to use to add the header - - Returns: - None - """ - driver.execute_script(open(f"{FILE_DIR}/js/overlay.js", "r").read()) diff --git a/spaces/alitrack/ChatPDF/chatpdf.py b/spaces/alitrack/ChatPDF/chatpdf.py deleted file mode 100644 index d337b7d31b13ee551313b8e2e41f0e49046324d1..0000000000000000000000000000000000000000 --- a/spaces/alitrack/ChatPDF/chatpdf.py +++ /dev/null @@ -1,171 +0,0 @@ -# -*- coding: utf-8 -*- -""" -@author:XuMing(xuming624@qq.com) -@description: -""" -from similarities import Similarity -from textgen import ChatGlmModel, LlamaModel - -PROMPT_TEMPLATE = """\ -基于以下已知信息,简洁和专业的来回答用户的问题。 -如果无法从中得到答案,请说 "根据已知信息无法回答该问题" 或 "没有提供足够的相关信息",不允许在答案中添加编造成分,答案请使用中文。 - -已知内容: -{context_str} - -问题: -{query_str} -""" - - -class ChatPDF: - def __init__( - self, - sim_model_name_or_path: str = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", - gen_model_type: str = "chatglm", - gen_model_name_or_path: str = "THUDM/chatglm-6b-int4", - lora_model_name_or_path: str = None, - - ): - self.sim_model = Similarity(model_name_or_path=sim_model_name_or_path) - - if gen_model_type == "chatglm": - self.gen_model = ChatGlmModel(gen_model_type, gen_model_name_or_path, lora_name=lora_model_name_or_path) - elif gen_model_type == "llama": - self.gen_model = LlamaModel(gen_model_type, gen_model_name_or_path, lora_name=lora_model_name_or_path) - else: - raise ValueError('gen_model_type must be chatglm or llama.') - self.history = None - self.pdf_path = None - - def load_pdf_file(self, pdf_path: str): - """Load a PDF file.""" - if pdf_path.endswith('.pdf'): - corpus = self.extract_text_from_pdf(pdf_path) - elif pdf_path.endswith('.docx'): - corpus = self.extract_text_from_docx(pdf_path) - elif pdf_path.endswith('.md'): - corpus = self.extract_text_from_markdown(pdf_path) - else: - corpus = self.extract_text_from_txt(pdf_path) - self.sim_model.add_corpus(corpus) - self.pdf_path = pdf_path - - @staticmethod - def extract_text_from_pdf(file_path: str): - """Extract text content from a PDF file.""" - import PyPDF2 - contents = [] - with open(file_path, 'rb') as f: - pdf_reader = PyPDF2.PdfReader(f) - for page in pdf_reader.pages: - page_text = page.extract_text().strip() - raw_text = [text.strip() for text in page_text.splitlines() if text.strip()] - new_text = '' - for text in raw_text: - new_text += text - if text[-1] in ['.', '!', '?', '。', '!', '?', '…', ';', ';', ':', ':', '”', '’', ')', '】', '》', '」', - '』', '〕', '〉', '》', '〗', '〞', '〟', '»', '"', "'", ')', ']', '}']: - contents.append(new_text) - new_text = '' - if new_text: - contents.append(new_text) - return contents - - @staticmethod - def extract_text_from_txt(file_path: str): - """Extract text content from a TXT file.""" - contents = [] - with open(file_path, 'r', encoding='utf-8') as f: - contents = [text.strip() for text in f.readlines() if text.strip()] - return contents - - @staticmethod - def extract_text_from_docx(file_path: str): - """Extract text content from a DOCX file.""" - import docx - document = docx.Document(file_path) - contents = [paragraph.text.strip() for paragraph in document.paragraphs if paragraph.text.strip()] - return contents - - @staticmethod - def extract_text_from_markdown(file_path: str): - """Extract text content from a Markdown file.""" - import markdown - from bs4 import BeautifulSoup - with 
open(file_path, 'r', encoding='utf-8') as f: - markdown_text = f.read() - html = markdown.markdown(markdown_text) - soup = BeautifulSoup(html, 'html.parser') - contents = [text.strip() for text in soup.get_text().splitlines() if text.strip()] - return contents - - @staticmethod - def _add_source_numbers(lst): - """Add source numbers to a list of strings.""" - return [f'[{idx + 1}]\t "{item}"' for idx, item in enumerate(lst)] - - def _generate_answer(self, query_str, context_str, history=None, max_length=1024): - """Generate answer from query and context.""" - prompt = PROMPT_TEMPLATE.format(context_str=context_str, query_str=query_str) - response, out_history = self.gen_model.chat(prompt, history, max_length=max_length) - return response, out_history - - def query( - self, - query, - topn: int = 5, - max_length: int = 1024, - max_input_size: int = 1024, - use_history: bool = False - ): - """Query from corpus.""" - - sim_contents = self.sim_model.most_similar(query, topn=topn) - - reference_results = [] - for query_id, id_score_dict in sim_contents.items(): - for corpus_id, s in id_score_dict.items(): - reference_results.append(self.sim_model.corpus[corpus_id]) - if not reference_results: - return '没有提供足够的相关信息', reference_results - reference_results = self._add_source_numbers(reference_results) - - context_str = '\n'.join(reference_results)[:(max_input_size - len(PROMPT_TEMPLATE))] - - if use_history: - response, out_history = self._generate_answer(query, context_str, self.history, max_length=max_length) - self.history = out_history - else: - - response, out_history = self._generate_answer(query, context_str) - - return response, out_history, reference_results - - def save_index(self, index_path=None): - """Save model.""" - if index_path is None: - index_path = '.'.join(self.pdf_path.split('.')[:-1]) + '_index.json' - self.sim_model.save_index(index_path) - - def load_index(self, index_path=None): - """Load model.""" - if index_path is None: - index_path = '.'.join(self.pdf_path.split('.')[:-1]) + '_index.json' - self.sim_model.load_index(index_path) - - -if __name__ == "__main__": - import sys - - if len(sys.argv) > 2: - gen_model_name_or_path = sys.argv[1] - else: - print('Usage: python chatpdf.py ') - gen_model_name_or_path = "THUDM/chatglm-6b-int4" - m = ChatPDF(gen_model_name_or_path=gen_model_name_or_path) - m.load_pdf_file(pdf_path='sample.pdf') - response = m.query('自然语言中的非平行迁移是指什么?') - print(response[0]) - response = m.query('本文作者是谁?') - print(response[0]) \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test45/app.py b/spaces/allknowingroger/Image-Models-Test45/app.py deleted file mode 100644 index a63d0a8e64ea7d6c09f9894660b40606e405094c..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test45/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "Yntec/DucHaitenAIart-beta", - "dpwm/lora-trained-xl", - "Vsukiyaki/ShiratakiMix", - "Silvelter/Sathariel", - "digiplay/nk15_diffusers", - "digiplay/kencanmix_v1.6", - "digiplay/kencanmix_v1.5", - "jonaylor89/sd-johannes", - "EhsanElahi/avatar-creator", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], 
outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/anshu-man853/webscrapping/README.md b/spaces/anshu-man853/webscrapping/README.md deleted file mode 100644 index 4d3db4693f153cddc9361b2b66fa77bf39fb367d..0000000000000000000000000000000000000000 --- a/spaces/anshu-man853/webscrapping/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Webscrapping -emoji: ⚡ -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/apoorvumang/kgt5/app.py 
b/spaces/apoorvumang/kgt5/app.py deleted file mode 100644 index 0329a6e36c9ac4afd7a0e3f6c5601f55e089824d..0000000000000000000000000000000000000000 --- a/spaces/apoorvumang/kgt5/app.py +++ /dev/null @@ -1,114 +0,0 @@ -import gradio as gr -import torch -import numpy as np -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM - - -def getScores(ids, scores, pad_token_id): - """get sequence scores from model.generate output""" - scores = torch.stack(scores, dim=1) - log_probs = torch.log_softmax(scores, dim=2) - # remove start token - ids = ids[:,1:] - # gather needed probs - x = ids.unsqueeze(-1).expand(log_probs.shape) - needed_logits = torch.gather(log_probs, 2, x) - final_logits = needed_logits[:, :, 0] - padded_mask = (ids == pad_token_id) - final_logits[padded_mask] = 0 - final_scores = final_logits.sum(dim=-1) - return final_scores.cpu().detach().numpy() - -def topkSample(input, model, tokenizer, - num_samples=5, - num_beams=1, - max_output_length=30): - tokenized = tokenizer(input, return_tensors="pt") - out = model.generate(**tokenized, - do_sample=True, - num_return_sequences = num_samples, - num_beams = num_beams, - eos_token_id = tokenizer.eos_token_id, - pad_token_id = tokenizer.pad_token_id, - output_scores = True, - return_dict_in_generate=True, - max_length=max_output_length,) - out_tokens = out.sequences - out_str = tokenizer.batch_decode(out_tokens, skip_special_tokens=True) - out_scores = getScores(out_tokens, out.scores, tokenizer.pad_token_id) - - pair_list = [(x[0], x[1]) for x in zip(out_str, out_scores)] - sorted_pair_list = sorted(pair_list, key=lambda x:x[1], reverse=True) - return sorted_pair_list - -def greedyPredict(input, model, tokenizer): - input_ids = tokenizer([input], return_tensors="pt").input_ids - out_tokens = model.generate(input_ids) - out_str = tokenizer.batch_decode(out_tokens, skip_special_tokens=True) - return out_str[0] - -def predict_tail(entity, relation): - global model, tokenizer - input = entity + "| " + relation - out = topkSample(input, model, tokenizer, num_samples=25) - out_dict = {} - for k, v in out: - out_dict[k] = np.exp(v).item() - return out_dict - - -tokenizer = AutoTokenizer.from_pretrained("apoorvumang/kgt5-base-wikikg90mv2") -model = AutoModelForSeq2SeqLM.from_pretrained("apoorvumang/kgt5-base-wikikg90mv2") - - -ent_input = gr.inputs.Textbox(lines=1, default="Apoorv Umang Saxena") -rel_input = gr.inputs.Textbox(lines=1, default="country") -output = gr.outputs.Label() - -examples = [ -['Adrian Kochsiek', 'sex or gender'], -['Apoorv Umang Saxena', 'family name'], -['World War II', 'followed by'], -['Apoorv Umang Saxena', 'country'], -['Ippolito Boccolini', 'writing language'] , -['Roelant', 'writing system'] , -['The Accountant 2227', 'language of work or name'] , -['Microbial Infection and AMR in Hospitalized Patients With Covid 19', 'study type'] , -['Carla Fracci', 'manner of death'] , -['list of programs broadcast by Comet', 'is a list of'] , -['Loreta Podhradí', 'continent'] , -['Opistognathotrema', 'taxon rank'] , -['Museum Arbeitswelt Steyr', 'wheelchair accessibility'] , -['Heliotropium tytoides', 'subject has role'] , -['School bus crash rates on routine and nonroutine routes.', 'sponsor'] , -['Tachigalieae', 'taxon rank'] , -['Irena Salusová', 'place of detention'] , - -] -title = "Interactive demo: KGT5" -description = """Demo for Sequence-to-Sequence Knowledge Graph Completion and Question Answering (KGT5). 
This particular model is a T5-base model trained on the task of tail prediction on the WikiKG90Mv2 dataset and obtains 0.239 validation MRR on this task (leaderboard, see paper for details). - To use it, simply give an entity name and relation and click 'submit'. Up to 25 model predictions will show up in a few seconds. The model works best when the exact entity/relation names that it has been trained on are used. - It is sometimes able to generalize to unseen entities as well (see examples). -""" -#article = """ -#

-#    Sequence-to-Sequence Knowledge Graph Completion and Question Answering | Github Repo

    -#""" - -article = """ -Under the hood, this demo concatenates the entity and relation, feeds it to the model and then samples 25 sequences, which are then ranked according to their sequence probabilities. -
    -The text representations of the relations and entities can be downloaded from here: https://storage.googleapis.com/kgt5-wikikg90mv2/rel_alias_list.pickle and -https://storage.googleapis.com/kgt5-wikikg90mv2/ent_alias_list.pickle -
    -For more details see the Github repo or the hf model page. -""" - - -iface = gr.Interface(fn=predict_tail, - inputs=[ent_input, rel_input], - outputs=output, - title=title, - description=description, - article=article, - examples=examples,) -iface.launch() diff --git a/spaces/arch-123/bingo/src/components/settings.tsx b/spaces/arch-123/bingo/src/components/settings.tsx deleted file mode 100644 index 80b8a2d3b252b875f5b6f7dfc2f6e3ad9cdfb22a..0000000000000000000000000000000000000000 --- a/spaces/arch-123/bingo/src/components/settings.tsx +++ /dev/null @@ -1,157 +0,0 @@ -import { useEffect, useState } from 'react' -import { useAtom } from 'jotai' -import { Switch } from '@headlessui/react' -import { toast } from 'react-hot-toast' -import { hashAtom, voiceAtom } from '@/state' -import { - Dialog, - DialogContent, - DialogDescription, - DialogFooter, - DialogHeader, - DialogTitle -} from '@/components/ui/dialog' -import { Button } from './ui/button' -import { Input } from './ui/input' -import { ChunkKeys, parseCookies, extraCurlFromCookie, encodeHeadersToCookie, getCookie, setCookie } from '@/lib/utils' -import { ExternalLink } from './external-link' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - - -export function Settings() { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - const [loc, setLoc] = useAtom(hashAtom) - const [curlValue, setCurlValue] = useState(extraCurlFromCookie(parseCookies(document.cookie, ChunkKeys))) - const [imageOnly, setImageOnly] = useState(getCookie('IMAGE_ONLY') !== '0') - const [enableTTS, setEnableTTS] = useAtom(voiceAtom) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - - if (loc === 'settings') { - return ( - setLoc('')} modal> - - - 设置你的用户信息 - - 请使用 Edge 浏览器 - - 打开并登录 Bing - - ,然后再打开 - Challenge 接口 - 右键 》检查。打开开发者工具,在网络里面找到 Create 接口 》右键复制》复制为 cURL(bash),粘贴到此处,然后保存。 -
    - 图文示例: - 如何获取 BING_HEADER - - -
    - -
    - setCurlValue(e.target.value)} - /> -
    - 身份信息仅用于画图(推荐) - setImageOnly(checked)} - > - - -
    - - - - - - - -
    - ) - } else if (loc === 'voice') { - return ( - setLoc('')} modal> - - - 语音设置 - - 目前仅支持 PC 端 Edge 及 Chrome 浏览器 - - - -
    - 启用语音回答 - setEnableTTS(checked)} - > - - -
    - - - - -
    -
    - ) - } - return null -} diff --git a/spaces/argilla/argilla-streamlit-customs/my_app/utils/commons.py b/spaces/argilla/argilla-streamlit-customs/my_app/utils/commons.py deleted file mode 100644 index 5b79c78070a01dfccb7795c19957b8bd2f6bcd81..0000000000000000000000000000000000000000 --- a/spaces/argilla/argilla-streamlit-customs/my_app/utils/commons.py +++ /dev/null @@ -1,195 +0,0 @@ -import os - -import argilla as rg -import httpx -import streamlit as st -from argilla.client.api import ArgillaSingleton, active_client -from argilla.client.apis.datasets import __TASK_TO_SETTINGS__ -from huggingface_hub import HfApi - - -def argilla_login_flow(title: str) -> str: - """ - It tries to log in to Argilla - using the environment variables `ARGILLA_API_URL` and `ARGILLA_API_KEY`. If they - are not set, it shows a sidebar with two text inputs to manually enter the API URL - and API Key - - Args: - title: The title of the app. - - Returns: - The api_url is being returned. - """ - x = st.columns(3) - x[0].image("https://docs.argilla.io/en/latest/_static/images/logo-light-mode.svg", use_column_width=True) - - api_url, api_key = None, None - - if os.environ.get("ARGILLA_API_URL") and os.environ.get("ARGILLA_API_KEY"): - api_url = os.environ.get("ARGILLA_API_URL") - api_key = os.environ.get("ARGILLA_API_KEY") - rg.init( - api_url=api_url, - api_key=api_key, - ) - st.success( - f"Logged in at {os.environ.get('ARGILLA_API_URL')}, and workspace is" - f" {rg.get_workspace()}. Change `ARGILLA_API_URL` and `ARGILLA_API_KEY` to" - " use a different endpoint." - ) - else: - api_url = st.sidebar.text_input( - "API URL", "https://argilla-live-demo.hf.space", type="password" - ) - api_key = st.sidebar.text_input("API Key", value="team.apikey", type="password") - try: - rg.init( - api_url=api_url, - api_key=api_key, - ) - - st.success( - f"Logged in at {api_url}, and workspace is {rg.get_workspace()}. Set" - " `ARGILLA_API_URL` and `ARGILLA_API_KEY` as environment variables to" - " avoid this step." - ) - except Exception: - st.error( - "Invalid API URL or API Key. Use a correct manual input or, even" - " better, set `ARGILLA_API_URL` and `ARGILLA_API_KEY` as environment" - " variables to avoid this step." - ) - st.title(title) - return api_url, api_key - - - -def get_data_snapshot(dataset_name, workspace, query=None): - rg.set_workspace(workspace) - if query == "": - query = None - ds = rg.load(dataset_name, query=query, limit=5).to_pandas() - st.write(f"Sample of the `{workspace}/{dataset_name}`", ds) - - -def hf_login_flow(): - """ - It checks if the user has provided a Hugging Face API token in the environment variable - `HF_AUTH_TOKEN` or in the sidebar. 
If not, it displays an error message and stops the app - - Returns: - A tuple of the token and the api - """ - hf_auth_token = os.environ.get("HF_AUTH_TOKEN", "") - if not hf_auth_token: - hf_auth_token = st.sidebar.text_input( - "Hugging Face [User Access Tokens](https://huggingface.co/settings/tokens)", - os.environ.get("HF_AUTH_TOKEN", ""), type="password" - ) - if not hf_auth_token: - st.error( - "Please provide a Hugging Face [User Access" - " Tokens](https://huggingface.co/settings/tokens) in the sidebar or set" - " `HF_AUTH_TOKEN` as environment variable" - ) - st.stop() - api = HfApi(token=hf_auth_token) - return hf_auth_token, api - - -# def record_info(): -# with st.expander("Dataset Type Info"): -# if dataset_type == "TextClassification": -# st.write(rg.TextClassificationRecord.__doc__) -# elif dataset_type == "TokenClassification": -# st.write(rg.TokenClassificationRecord.__doc__) -# else: -# st.write(rg.Text2TextRecord.__doc__) - - -def get_dataset_list(api_url, api_key): - client = ArgillaSingleton.init(api_url, api_key)._client - url = "{}/api/workspaces".format(client.base_url) - response = httpx.get( - url=url, - headers=client.get_headers(), - cookies=client.get_cookies(), - timeout=client.get_timeout(), - ) - response.raise_for_status() - worskpace_names = [w["name"] for w in response.json()] - - datasets = [] - for name in worskpace_names: - url = "{}/api/datasets?workspace={}".format(client.base_url, name) - response = httpx.get( - url=url, - headers=client.get_headers(), - cookies=client.get_cookies(), - timeout=client.get_timeout(), - ) - response.raise_for_status() - datasets += response.json() - - dataset_overview = [] - for dataset in datasets: - metadata = dataset["metadata"].values() - if ( - metadata - and not isinstance(list(metadata)[0], str) - and not isinstance(list(metadata)[0], int) - ): - metadata = list(metadata)[0].get( - "labels", list(metadata)[0].get("entities") - ) - else: - metadata = None - dataset_overview.append( - { - "name": dataset["name"], - "task": dataset["task"], - "owner": dataset["owner"], - "id": dataset["id"], - "labels": metadata, - } - ) - # if metadata is None: - # # dataset_overview[-1]["labels"] = - # setting = get_dataset_settings(dataset["name"], dataset["task"]) - # if setting is not None: - # dataset_overview[-1]["labels"] = list(setting) - return dataset_overview - - -def whoami(): - client = active_client()._client - url = "{}/api/me".format(client.base_url) - response = httpx.get( - url=url, - headers=client.get_headers(), - cookies=client.get_cookies(), - timeout=client.get_timeout(), - ) - return {**response.json()} - - -def get_dataset_settings(dataset_name, dataset_task): - client = active_client()._client - url = f"{client.base_url}/api/datasets/{dataset_task}/{dataset_name}/settings" - response = httpx.get( - url=url, - headers=client.get_headers(), - cookies=client.get_cookies(), - timeout=client.get_timeout(), - ) - - try: - response.raise_for_status() - return ( - __TASK_TO_SETTINGS__.get(dataset_task) - .from_dict(response.json()) - .label_schema - ) - except Exception: - return None diff --git a/spaces/artificialguybr/video-dubbing/TTS/CODE_OF_CONDUCT.md b/spaces/artificialguybr/video-dubbing/TTS/CODE_OF_CONDUCT.md deleted file mode 100644 index b80639d63c29e902c547de347806651bcc9ad3b2..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,133 +0,0 @@ - -# Contributor Covenant Code of Conduct - -## Our Pledge - -We as members, 
contributors, and leaders pledge to make participation in our -community a harassment-free experience for everyone, regardless of age, body -size, visible or invisible disability, ethnicity, sex characteristics, gender -identity and expression, level of experience, education, socio-economic status, -nationality, personal appearance, race, caste, color, religion, or sexual identity -and orientation. - -We pledge to act and interact in ways that contribute to an open, welcoming, -diverse, inclusive, and healthy community. - -## Our Standards - -Examples of behavior that contributes to a positive environment for our -community include: - -* Demonstrating empathy and kindness toward other people -* Being respectful of differing opinions, viewpoints, and experiences -* Giving and gracefully accepting constructive feedback -* Accepting responsibility and apologizing to those affected by our mistakes, - and learning from the experience -* Focusing on what is best not just for us as individuals, but for the - overall community - -Examples of unacceptable behavior include: - -* The use of sexualized language or imagery, and sexual attention or - advances of any kind -* Trolling, insulting or derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or email - address, without their explicit permission -* Other conduct which could reasonably be considered inappropriate in a - professional setting - -## Enforcement Responsibilities - -Community leaders are responsible for clarifying and enforcing our standards of -acceptable behavior and will take appropriate and fair corrective action in -response to any behavior that they deem inappropriate, threatening, offensive, -or harmful. - -Community leaders have the right and responsibility to remove, edit, or reject -comments, commits, code, wiki edits, issues, and other contributions that are -not aligned to this Code of Conduct, and will communicate reasons for moderation -decisions when appropriate. - -## Scope - -This Code of Conduct applies within all community spaces, and also applies when -an individual is officially representing the community in public spaces. -Examples of representing our community include using an official e-mail address, -posting via an official social media account, or acting as an appointed -representative at an online or offline event. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported to the community leaders responsible for enforcement at -coc-report@coqui.ai. -All complaints will be reviewed and investigated promptly and fairly. - -All community leaders are obligated to respect the privacy and security of the -reporter of any incident. - -## Enforcement Guidelines - -Community leaders will follow these Community Impact Guidelines in determining -the consequences for any action they deem in violation of this Code of Conduct: - -### 1. Correction - -**Community Impact**: Use of inappropriate language or other behavior deemed -unprofessional or unwelcome in the community. - -**Consequence**: A private, written warning from community leaders, providing -clarity around the nature of the violation and an explanation of why the -behavior was inappropriate. A public apology may be requested. - -### 2. Warning - -**Community Impact**: A violation through a single incident or series -of actions. - -**Consequence**: A warning with consequences for continued behavior. 
No -interaction with the people involved, including unsolicited interaction with -those enforcing the Code of Conduct, for a specified period of time. This -includes avoiding interactions in community spaces as well as external channels -like social media. Violating these terms may lead to a temporary or -permanent ban. - -### 3. Temporary Ban - -**Community Impact**: A serious violation of community standards, including -sustained inappropriate behavior. - -**Consequence**: A temporary ban from any sort of interaction or public -communication with the community for a specified period of time. No public or -private interaction with the people involved, including unsolicited interaction -with those enforcing the Code of Conduct, is allowed during this period. -Violating these terms may lead to a permanent ban. - -### 4. Permanent Ban - -**Community Impact**: Demonstrating a pattern of violation of community -standards, including sustained inappropriate behavior, harassment of an -individual, or aggression toward or disparagement of classes of individuals. - -**Consequence**: A permanent ban from any sort of public interaction within -the community. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], -version 2.0, available at -[https://www.contributor-covenant.org/version/2/0/code_of_conduct.html][v2.0]. - -Community Impact Guidelines were inspired by -[Mozilla's code of conduct enforcement ladder][Mozilla CoC]. - -For answers to common questions about this code of conduct, see the FAQ at -[https://www.contributor-covenant.org/faq][FAQ]. Translations are available -at [https://www.contributor-covenant.org/translations][translations]. - -[homepage]: https://www.contributor-covenant.org -[v2.0]: https://www.contributor-covenant.org/version/2/0/code_of_conduct.html -[Mozilla CoC]: https://github.com/mozilla/diversity -[FAQ]: https://www.contributor-covenant.org/faq -[translations]: https://www.contributor-covenant.org/translations diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/source/models/xtts.md b/spaces/artificialguybr/video-dubbing/TTS/docs/source/models/xtts.md deleted file mode 100644 index 03e44af1707a97c05a1a535215e6e71904221425..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/docs/source/models/xtts.md +++ /dev/null @@ -1,251 +0,0 @@ -# ⓍTTS -ⓍTTS is a super cool Text-to-Speech model that lets you clone voices in different languages by using just a quick 3-second audio clip. Built on the 🐢Tortoise, -ⓍTTS has important model changes that make cross-language voice cloning and multi-lingual speech generation super easy. -There is no need for an excessive amount of training data that spans countless hours. - -This is the same model that powers [Coqui Studio](https://coqui.ai/), and [Coqui API](https://docs.coqui.ai/docs), however we apply -a few tricks to make it faster and support streaming inference. - -### Features -- Voice cloning. -- Cross-language voice cloning. -- Multi-lingual speech generation. -- 24khz sampling rate. -- Streaming inference with < 200ms latency. (See [Streaming inference](#streaming-inference)) -- Fine-tuning support. (See [Training](#training)) - -### Updates with v2 -- Improved voice cloning. -- Voices can be cloned with a single audio file or multiple audio files, without any effect on the runtime. -- 2 new languages: Hungarian and Korean. -- Across the board quality improvements. - -### Code -Current implementation only supports inference. 
- -### Languages -As of now, XTTS-v2 supports 16 languages: English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu) and Korean (ko). - -Stay tuned as we continue to add support for more languages. If you have any language requests, please feel free to reach out. - -### License -This model is licensed under [Coqui Public Model License](https://coqui.ai/cpml). - -### Contact -Come and join in our 🐸Community. We're active on [Discord](https://discord.gg/fBC58unbKE) and [Twitter](https://twitter.com/coqui_ai). -You can also mail us at info@coqui.ai. - -### Inference -#### 🐸TTS API - -##### Single reference -```python -from TTS.api import TTS -tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2", gpu=True) - -# generate speech by cloning a voice using default settings -tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.", - file_path="output.wav", - speaker_wav=["/path/to/target/speaker.wav"], - language="en") -``` - -##### Multiple references -```python -from TTS.api import TTS -tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2", gpu=True) - -# generate speech by cloning a voice using default settings -tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.", - file_path="output.wav", - speaker_wav=["/path/to/target/speaker.wav", "/path/to/target/speaker_2.wav", "/path/to/target/speaker_3.wav"], - language="en") -``` - -#### 🐸TTS Command line - -##### Single reference -```console - tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \ - --text "Bugün okula gitmek istemiyorum." \ - --speaker_wav /path/to/target/speaker.wav \ - --language_idx tr \ - --use_cuda true -``` - -##### Multiple references -```console - tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \ - --text "Bugün okula gitmek istemiyorum." \ - --speaker_wav /path/to/target/speaker.wav /path/to/target/speaker_2.wav /path/to/target/speaker_3.wav \ - --language_idx tr \ - --use_cuda true -``` -or for all wav files in a directory you can use: - -```console - tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \ - --text "Bugün okula gitmek istemiyorum." \ - --speaker_wav /path/to/target/*.wav \ - --language_idx tr \ - --use_cuda true -``` - - -#### model directly - -If you want to be able to run with `use_deepspeed=True` and enjoy the speedup, you need to install deepspeed first. 
- -```console -pip install deepspeed==0.8.3 -``` - -```python -import os -import torch -import torchaudio -from TTS.tts.configs.xtts_config import XttsConfig -from TTS.tts.models.xtts import Xtts - -print("Loading model...") -config = XttsConfig() -config.load_json("/path/to/xtts/config.json") -model = Xtts.init_from_config(config) -model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", use_deepspeed=True) -model.cuda() - -print("Computing speaker latents...") -gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(audio_path=["reference.wav"]) - -print("Inference...") -out = model.inference( - "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.", - "en", - gpt_cond_latent, - speaker_embedding, - temperature=0.7, # Add custom parameters here -) -torchaudio.save("xtts.wav", torch.tensor(out["wav"]).unsqueeze(0), 24000) -``` - - -#### streaming inference - -Here the goal is to stream the audio as it is being generated. This is useful for real-time applications. -Streaming inference is typically slower than regular inference, but it allows to get a first chunk of audio faster. - - -```python -import os -import time -import torch -import torchaudio -from TTS.tts.configs.xtts_config import XttsConfig -from TTS.tts.models.xtts import Xtts - -print("Loading model...") -config = XttsConfig() -config.load_json("/path/to/xtts/config.json") -model = Xtts.init_from_config(config) -model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", use_deepspeed=True) -model.cuda() - -print("Computing speaker latents...") -gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(audio_path=["reference.wav"]) - -print("Inference...") -t0 = time.time() -chunks = model.inference_stream( - "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.", - "en", - gpt_cond_latent, - speaker_embedding -) - -wav_chuncks = [] -for i, chunk in enumerate(chunks): - if i == 0: - print(f"Time to first chunck: {time.time() - t0}") - print(f"Received chunk {i} of audio length {chunk.shape[-1]}") - wav_chuncks.append(chunk) -wav = torch.cat(wav_chuncks, dim=0) -torchaudio.save("xtts_streaming.wav", wav.squeeze().unsqueeze(0).cpu(), 24000) -``` - - -### Training - -A recipe for `XTTS_v2` GPT encoder training using `LJSpeech` dataset is available at https://github.com/coqui-ai/TTS/tree/dev/recipes/ljspeech/xtts_v1/train_gpt_xtts.py - -You need to change the fields of the `BaseDatasetConfig` to match your dataset and then update `GPTArgs` and `GPTTrainerConfig` fields as you need. By default, it will use the same parameters that XTTS v1.1 model was trained with. To speed up the model convergence, as default, it will also download the XTTS v1.1 checkpoint and load it. - -After training you can do inference following the code bellow. 
- -```python -import os -import torch -import torchaudio -from TTS.tts.configs.xtts_config import XttsConfig -from TTS.tts.models.xtts import Xtts - -# Add here the xtts_config path -CONFIG_PATH = "recipes/ljspeech/xtts_v1/run/training/GPT_XTTS_LJSpeech_FT-October-23-2023_10+36AM-653f2e75/config.json" -# Add here the vocab file that you have used to train the model -TOKENIZER_PATH = "recipes/ljspeech/xtts_v1/run/training/XTTS_v2_original_model_files/vocab.json" -# Add here the checkpoint that you want to do inference with -XTTS_CHECKPOINT = "recipes/ljspeech/xtts_v1/run/training/GPT_XTTS_LJSpeech_FT/best_model.pth" -# Add here the speaker reference -SPEAKER_REFERENCE = "LjSpeech_reference.wav" - -# output wav path -OUTPUT_WAV_PATH = "xtts-ft.wav" - -print("Loading model...") -config = XttsConfig() -config.load_json(CONFIG_PATH) -model = Xtts.init_from_config(config) -model.load_checkpoint(config, checkpoint_path=XTTS_CHECKPOINT, vocab_path=TOKENIZER_PATH, use_deepspeed=False) -model.cuda() - -print("Computing speaker latents...") -gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(audio_path=[SPEAKER_REFERENCE]) - -print("Inference...") -out = model.inference( - "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.", - "en", - gpt_cond_latent, - speaker_embedding, - temperature=0.7, # Add custom parameters here -) -torchaudio.save(OUTPUT_WAV_PATH, torch.tensor(out["wav"]).unsqueeze(0), 24000) -``` - - -## References and Acknowledgements -- VallE: https://arxiv.org/abs/2301.02111 -- Tortoise Repo: https://github.com/neonbjb/tortoise-tts -- Faster implementation: https://github.com/152334H/tortoise-tts-fast -- Univnet: https://arxiv.org/abs/2106.07889 -- Latent Diffusion:https://arxiv.org/abs/2112.10752 -- DALL-E: https://arxiv.org/abs/2102.12092 -- Perceiver: https://arxiv.org/abs/2103.03206 - - -## XttsConfig -```{eval-rst} -.. autoclass:: TTS.tts.configs.xtts_config.XttsConfig - :members: -``` - -## XttsArgs -```{eval-rst} -.. autoclass:: TTS.tts.models.xtts.XttsArgs - :members: -``` - -## XTTS Model -```{eval-rst} -.. autoclass:: TTS.tts.models.xtts.XTTS - :members: -``` diff --git a/spaces/aseifert/writing-assistant/README.md b/spaces/aseifert/writing-assistant/README.md deleted file mode 100644 index aa6c9ceda53155f83b1282f1b4208b233b2d1681..0000000000000000000000000000000000000000 --- a/spaces/aseifert/writing-assistant/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Writing Assistant -emoji: 📊 -colorFrom: blue -colorTo: gray -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
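Taken together, the fields documented above form the YAML front-matter block at the top of a Space's README. A minimal sketch with purely illustrative values (not taken from any particular Space):

```yaml
---
title: My Demo        # display title for the Space
emoji: 🚀             # emoji-only character
colorFrom: blue       # thumbnail gradient start
colorTo: gray         # thumbnail gradient end
sdk: streamlit        # either gradio or streamlit
sdk_version: 1.25.0   # only applicable for the streamlit SDK; version is illustrative
app_file: app.py      # path to the main application file, relative to the repo root
pinned: false         # whether the Space stays on top of your list
---
```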
diff --git a/spaces/awacke1/04-AW-StorywriterwMem/README.md b/spaces/awacke1/04-AW-StorywriterwMem/README.md deleted file mode 100644 index dee5d8a05f3975a8b80215895c6f657f9e27feb8..0000000000000000000000000000000000000000 --- a/spaces/awacke1/04-AW-StorywriterwMem/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 04 AW StorywriterwMem -emoji: 📉 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/GradioVoicetoTexttoSentiment/app.py b/spaces/awacke1/GradioVoicetoTexttoSentiment/app.py deleted file mode 100644 index 6066bc8c5bd0572c67119cf0f910228e3ad05ab6..0000000000000000000000000000000000000000 --- a/spaces/awacke1/GradioVoicetoTexttoSentiment/app.py +++ /dev/null @@ -1,73 +0,0 @@ -from transformers import pipeline -import gradio as gr - -asr = pipeline("automatic-speech-recognition", "facebook/wav2vec2-base-960h") -classifier = pipeline("text-classification", "michellejieli/emotion_text_classifier") - -def transcribe(speech, state=""): - text = asr(speech)["text"] - state += text + " " - return text, state - -def speech_to_text(speech): - text = asr(speech)["text"] - return text - -def text_to_sentiment(text): - return classifier(text)[0]["label"] - - -demo = gr.Blocks() -with demo: - - microphone = gr.Audio(source="microphone", type="filepath") - audio_file = gr.Audio(type="filepath") - text = gr.Textbox() - label = gr.Label() - - b0 = gr.Button("Speech From Microphone") - b1 = gr.Button("Recognize Speech") - b2 = gr.Button("Classify Sentiment") - - #b0.click(transcribe, inputs=[microphone, "state"], outputs=[text, "state"], live=True) - b0.click(transcribe, inputs=[microphone], outputs=[text]) - b1.click(speech_to_text, inputs=audio_file, outputs=text) - b2.click(text_to_sentiment, inputs=text, outputs=label) - - gr.Markdown("""References: - -## Building an Asynchronous Real-Time Live Telemedicine System Using AI Pipelines for Smart Communities - -1. **Designing the Telemedicine System** - - Identify the needs and challenges of smart communities and design a telemedicine system that addresses these challenges. - - Choose a platform that allows for asynchronous real-time communication, such as video conferencing or chat-based messaging, to facilitate remote consultations with healthcare providers. - - Design the system to incorporate AI pipelines that can analyze patient data and provide decision support for healthcare providers. - -2. **Implementing the AI Pipelines** - - Identify the relevant AI algorithms and techniques that can be used to analyze patient data, such as machine learning or natural language processing. - - Integrate these AI pipelines into the telemedicine system to provide decision support for healthcare providers during consultations. - - Ensure that the AI algorithms are accurate and reliable by testing them on a large and diverse set of patient data. - -3. **Deploying the Telemedicine System** - - Deploy the telemedicine system in smart communities, ensuring that it is easily accessible and user-friendly for patients and healthcare providers. - - Train healthcare providers on how to use the system effectively and provide ongoing support and feedback to optimize its use. - - Continuously monitor and evaluate the system's performance, making improvements and updates as needed to ensure that it remains effective and efficient in meeting the needs of smart communities. 
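As a rough illustration of step 2 above, the two Hugging Face pipelines already defined at the top of this file (`asr` and `classifier`) can be chained into a single helper. This is a minimal sketch, not part of the original app; the name `speech_to_sentiment` is made up for illustration:

```python
# Minimal sketch: reuse the ASR and emotion-classification pipelines defined
# earlier in this file to attach a sentiment label to a transcribed note.
def speech_to_sentiment(audio_path: str):
    text = asr(audio_path)["text"]          # speech -> transcript
    label = classifier(text)[0]["label"]    # transcript -> emotion label
    return text, label
```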
- -**__Asynchronous Telemedicine:__ A Solution to Address Provider Shortages by Offering Remote Care Services.** -([Wikipedia](https://en.wikipedia.org/wiki/Telemedicine)) - - -# 2023's Top 7 Breakthroughs in Medical Technology -1. __Asynchronous Telemedicine:__ A Solution to Address Provider Shortages by Offering Remote Care Services. ([Wikipedia](https://en.wikipedia.org/wiki/Telemedicine)) -2. __Ambient and Emotion AI:__ Empowering Patients with Artificial Intelligence That Shows Empathy and Compassion. ([Wikipedia](https://en.wikipedia.org/wiki/Ambient_intelligence)) -3. __Skin Patch Technology:__ A Convenient Way to Measure Vital Signals such as Blood Pressure and Glucose Levels. ([Wikipedia](https://en.wikipedia.org/wiki/Skin_patch)) -4. __Affordable Vein Scanner:__ A Revolutionary Tool to View Veins Through the Skin. ([Wikipedia](https://en.wikipedia.org/wiki/Vein_matching)) -5. __Synthetic Medical Records:__ Creating Reliable Medical Records Using Generative Adversarial Networks. ([Wikipedia](https://en.wikipedia.org/wiki/Synthetic_data)) -6. __Blood Draw Devices for Clinical Trials:__ Facilitating Remote Participation in Trials with Innovative Technology. ([Wikipedia](https://en.wikipedia.org/wiki/Blood_sampling)) -7. __Smart TVs for Remote Care:__ Enhancing Remote Care Consultations with Video Chat and Recordings. ([Wikipedia](https://en.wikipedia.org/wiki/Smart_television)) - -Reference: [The Medical Futurist](https://www.youtube.com/watch?v=_9DpLD4S2AY&list=PLHgX2IExbFotoMt32SrT3Xynt5BXTGnEP&index=2) - - """) - -demo.launch() \ No newline at end of file diff --git a/spaces/awacke1/InContextLearning-PromptTargeting/backup.app.py b/spaces/awacke1/InContextLearning-PromptTargeting/backup.app.py deleted file mode 100644 index d5811b6ec7a1e176e20c3a5a17272f70bdde2d81..0000000000000000000000000000000000000000 --- a/spaces/awacke1/InContextLearning-PromptTargeting/backup.app.py +++ /dev/null @@ -1,64 +0,0 @@ -import streamlit as st -import graphviz as gv -import json -import os - -FILE_NAME = 'saved_data.json' - -# Function to create the Graphviz graph -def create_graph(): - g = gv.Digraph('G', engine='dot', format='png') - - # Create the first box - box1_label = 'Source Sequence: Question{s1,s2,s3}\nContext: {sx,sy,sz}\nAnswer: {s1,s2,s3}' - g.node('box1', label=box1_label, shape='box', style='rounded') - - # Create the second box - box2_label = 'Target Sequence: The answer to the question given the context is yes.' 
- g.node('box2', label=box2_label, shape='box', style='rounded') - - # Add the line connecting the two boxes - g.edge('box1', 'box2') - - return g - -def save_data(data): - with open(FILE_NAME, 'w') as f: - json.dump(data, f) - -def load_data(): - if not os.path.exists(FILE_NAME): - return {} - with open(FILE_NAME, 'r') as f: - return json.load(f) - -# Create the graph -graph = create_graph() - -# Streamlit app -st.title("In Context Learning - Prompt Targeting QA Pattern") -st.subheader("The Question / Answer pattern below can be used in concert with a LLM to do real time in context learning using general intelligence.") -st.graphviz_chart(graph) - -data = load_data() - -# Input fields -st.header("Enter your data") -question = st.text_input("Question:") -context = st.text_input("Context:") -answer = st.text_input("Answer:") -target_sequence = st.text_input("Target Sequence:") - -if st.button("Save"): - if question and context and answer and target_sequence: - data["question"] = question - data["context"] = context - data["answer"] = answer - data["target_sequence"] = target_sequence - save_data(data) - st.success("Data saved successfully.") - else: - st.error("Please fill in all fields.") - -st.header("Saved data") -st.write(data) diff --git a/spaces/ayaanzaveri/whisper-webui/tests/vad_test.py b/spaces/ayaanzaveri/whisper-webui/tests/vad_test.py deleted file mode 100644 index b465d8a380f9316a6830d9aac320c85f22aba0a0..0000000000000000000000000000000000000000 --- a/spaces/ayaanzaveri/whisper-webui/tests/vad_test.py +++ /dev/null @@ -1,66 +0,0 @@ -import pprint -import unittest -import numpy as np -import sys - -sys.path.append('../whisper-webui') - -from src.vad import AbstractTranscription, TranscriptionConfig, VadSileroTranscription - -class TestVad(unittest.TestCase): - def __init__(self, *args, **kwargs): - super(TestVad, self).__init__(*args, **kwargs) - self.transcribe_calls = [] - - def test_transcript(self): - mock = MockVadTranscription() - - self.transcribe_calls.clear() - result = mock.transcribe("mock", lambda segment : self.transcribe_segments(segment)) - - self.assertListEqual(self.transcribe_calls, [ - [30, 30], - [100, 100] - ]) - - self.assertListEqual(result['segments'], - [{'end': 50.0, 'start': 40.0, 'text': 'Hello world '}, - {'end': 120.0, 'start': 110.0, 'text': 'Hello world '}] - ) - - def transcribe_segments(self, segment): - self.transcribe_calls.append(segment.tolist()) - - # Dummy text - return { - 'text': "Hello world ", - 'segments': [ - { - "start": 10.0, - "end": 20.0, - "text": "Hello world " - } - ], - 'language': "" - } - -class MockVadTranscription(AbstractTranscription): - def __init__(self): - super().__init__() - - def get_audio_segment(self, str, start_time: str = None, duration: str = None): - start_time_seconds = float(start_time.removesuffix("s")) - duration_seconds = float(duration.removesuffix("s")) - - # For mocking, this just returns a simple numppy array - return np.array([start_time_seconds, duration_seconds], dtype=np.float64) - - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, duration: float): - result = [] - - result.append( { 'start': 30, 'end': 60 } ) - result.append( { 'start': 100, 'end': 200 } ) - return result - -if __name__ == '__main__': - unittest.main() \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/src/animation/PropertyBinding.js b/spaces/banana-projects/web3d/node_modules/three/src/animation/PropertyBinding.js deleted file mode 100644 
index 4e89bd3e2758f02d780634c78d8a77bb28c2dcdf..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/animation/PropertyBinding.js +++ /dev/null @@ -1,723 +0,0 @@ -/** - * - * A reference to a real property in the scene graph. - * - * - * @author Ben Houston / http://clara.io/ - * @author David Sarno / http://lighthaus.us/ - * @author tschw - */ - -// Characters [].:/ are reserved for track binding syntax. -var RESERVED_CHARS_RE = '\\[\\]\\.:\\/'; - -function Composite( targetGroup, path, optionalParsedPath ) { - - var parsedPath = optionalParsedPath || PropertyBinding.parseTrackName( path ); - - this._targetGroup = targetGroup; - this._bindings = targetGroup.subscribe_( path, parsedPath ); - -} - -Object.assign( Composite.prototype, { - - getValue: function ( array, offset ) { - - this.bind(); // bind all binding - - var firstValidIndex = this._targetGroup.nCachedObjects_, - binding = this._bindings[ firstValidIndex ]; - - // and only call .getValue on the first - if ( binding !== undefined ) binding.getValue( array, offset ); - - }, - - setValue: function ( array, offset ) { - - var bindings = this._bindings; - - for ( var i = this._targetGroup.nCachedObjects_, n = bindings.length; i !== n; ++ i ) { - - bindings[ i ].setValue( array, offset ); - - } - - }, - - bind: function () { - - var bindings = this._bindings; - - for ( var i = this._targetGroup.nCachedObjects_, n = bindings.length; i !== n; ++ i ) { - - bindings[ i ].bind(); - - } - - }, - - unbind: function () { - - var bindings = this._bindings; - - for ( var i = this._targetGroup.nCachedObjects_, n = bindings.length; i !== n; ++ i ) { - - bindings[ i ].unbind(); - - } - - } - -} ); - - -function PropertyBinding( rootNode, path, parsedPath ) { - - this.path = path; - this.parsedPath = parsedPath || PropertyBinding.parseTrackName( path ); - - this.node = PropertyBinding.findNode( rootNode, this.parsedPath.nodeName ) || rootNode; - - this.rootNode = rootNode; - -} - -Object.assign( PropertyBinding, { - - Composite: Composite, - - create: function ( root, path, parsedPath ) { - - if ( ! ( root && root.isAnimationObjectGroup ) ) { - - return new PropertyBinding( root, path, parsedPath ); - - } else { - - return new PropertyBinding.Composite( root, path, parsedPath ); - - } - - }, - - /** - * Replaces spaces with underscores and removes unsupported characters from - * node names, to ensure compatibility with parseTrackName(). - * - * @param {string} name Node name to be sanitized. - * @return {string} - */ - sanitizeNodeName: ( function () { - - var reservedRe = new RegExp( '[' + RESERVED_CHARS_RE + ']', 'g' ); - - return function sanitizeNodeName( name ) { - - return name.replace( /\s/g, '_' ).replace( reservedRe, '' ); - - }; - - }() ), - - parseTrackName: function () { - - // Attempts to allow node names from any language. ES5's `\w` regexp matches - // only latin characters, and the unicode \p{L} is not yet supported. So - // instead, we exclude reserved characters and match everything else. - var wordChar = '[^' + RESERVED_CHARS_RE + ']'; - var wordCharOrDot = '[^' + RESERVED_CHARS_RE.replace( '\\.', '' ) + ']'; - - // Parent directories, delimited by '/' or ':'. Currently unused, but must - // be matched to parse the rest of the track name. - var directoryRe = /((?:WC+[\/:])*)/.source.replace( 'WC', wordChar ); - - // Target node. May contain word characters (a-zA-Z0-9_) and '.' or '-'. 
- var nodeRe = /(WCOD+)?/.source.replace( 'WCOD', wordCharOrDot ); - - // Object on target node, and accessor. May not contain reserved - // characters. Accessor may contain any character except closing bracket. - var objectRe = /(?:\.(WC+)(?:\[(.+)\])?)?/.source.replace( 'WC', wordChar ); - - // Property and accessor. May not contain reserved characters. Accessor may - // contain any non-bracket characters. - var propertyRe = /\.(WC+)(?:\[(.+)\])?/.source.replace( 'WC', wordChar ); - - var trackRe = new RegExp( '' - + '^' - + directoryRe - + nodeRe - + objectRe - + propertyRe - + '$' - ); - - var supportedObjectNames = [ 'material', 'materials', 'bones' ]; - - return function parseTrackName( trackName ) { - - var matches = trackRe.exec( trackName ); - - if ( ! matches ) { - - throw new Error( 'PropertyBinding: Cannot parse trackName: ' + trackName ); - - } - - var results = { - // directoryName: matches[ 1 ], // (tschw) currently unused - nodeName: matches[ 2 ], - objectName: matches[ 3 ], - objectIndex: matches[ 4 ], - propertyName: matches[ 5 ], // required - propertyIndex: matches[ 6 ] - }; - - var lastDot = results.nodeName && results.nodeName.lastIndexOf( '.' ); - - if ( lastDot !== undefined && lastDot !== - 1 ) { - - var objectName = results.nodeName.substring( lastDot + 1 ); - - // Object names must be checked against a whitelist. Otherwise, there - // is no way to parse 'foo.bar.baz': 'baz' must be a property, but - // 'bar' could be the objectName, or part of a nodeName (which can - // include '.' characters). - if ( supportedObjectNames.indexOf( objectName ) !== - 1 ) { - - results.nodeName = results.nodeName.substring( 0, lastDot ); - results.objectName = objectName; - - } - - } - - if ( results.propertyName === null || results.propertyName.length === 0 ) { - - throw new Error( 'PropertyBinding: can not parse propertyName from trackName: ' + trackName ); - - } - - return results; - - }; - - }(), - - findNode: function ( root, nodeName ) { - - if ( ! nodeName || nodeName === "" || nodeName === "root" || nodeName === "." || nodeName === - 1 || nodeName === root.name || nodeName === root.uuid ) { - - return root; - - } - - // search into skeleton bones. - if ( root.skeleton ) { - - var bone = root.skeleton.getBoneByName( nodeName ); - - if ( bone !== undefined ) { - - return bone; - - } - - } - - // search into node subtree. 
- if ( root.children ) { - - var searchNodeSubtree = function ( children ) { - - for ( var i = 0; i < children.length; i ++ ) { - - var childNode = children[ i ]; - - if ( childNode.name === nodeName || childNode.uuid === nodeName ) { - - return childNode; - - } - - var result = searchNodeSubtree( childNode.children ); - - if ( result ) return result; - - } - - return null; - - }; - - var subTreeNode = searchNodeSubtree( root.children ); - - if ( subTreeNode ) { - - return subTreeNode; - - } - - } - - return null; - - } - -} ); - -Object.assign( PropertyBinding.prototype, { // prototype, continued - - // these are used to "bind" a nonexistent property - _getValue_unavailable: function () {}, - _setValue_unavailable: function () {}, - - BindingType: { - Direct: 0, - EntireArray: 1, - ArrayElement: 2, - HasFromToArray: 3 - }, - - Versioning: { - None: 0, - NeedsUpdate: 1, - MatrixWorldNeedsUpdate: 2 - }, - - GetterByBindingType: [ - - function getValue_direct( buffer, offset ) { - - buffer[ offset ] = this.node[ this.propertyName ]; - - }, - - function getValue_array( buffer, offset ) { - - var source = this.resolvedProperty; - - for ( var i = 0, n = source.length; i !== n; ++ i ) { - - buffer[ offset ++ ] = source[ i ]; - - } - - }, - - function getValue_arrayElement( buffer, offset ) { - - buffer[ offset ] = this.resolvedProperty[ this.propertyIndex ]; - - }, - - function getValue_toArray( buffer, offset ) { - - this.resolvedProperty.toArray( buffer, offset ); - - } - - ], - - SetterByBindingTypeAndVersioning: [ - - [ - // Direct - - function setValue_direct( buffer, offset ) { - - this.targetObject[ this.propertyName ] = buffer[ offset ]; - - }, - - function setValue_direct_setNeedsUpdate( buffer, offset ) { - - this.targetObject[ this.propertyName ] = buffer[ offset ]; - this.targetObject.needsUpdate = true; - - }, - - function setValue_direct_setMatrixWorldNeedsUpdate( buffer, offset ) { - - this.targetObject[ this.propertyName ] = buffer[ offset ]; - this.targetObject.matrixWorldNeedsUpdate = true; - - } - - ], [ - - // EntireArray - - function setValue_array( buffer, offset ) { - - var dest = this.resolvedProperty; - - for ( var i = 0, n = dest.length; i !== n; ++ i ) { - - dest[ i ] = buffer[ offset ++ ]; - - } - - }, - - function setValue_array_setNeedsUpdate( buffer, offset ) { - - var dest = this.resolvedProperty; - - for ( var i = 0, n = dest.length; i !== n; ++ i ) { - - dest[ i ] = buffer[ offset ++ ]; - - } - - this.targetObject.needsUpdate = true; - - }, - - function setValue_array_setMatrixWorldNeedsUpdate( buffer, offset ) { - - var dest = this.resolvedProperty; - - for ( var i = 0, n = dest.length; i !== n; ++ i ) { - - dest[ i ] = buffer[ offset ++ ]; - - } - - this.targetObject.matrixWorldNeedsUpdate = true; - - } - - ], [ - - // ArrayElement - - function setValue_arrayElement( buffer, offset ) { - - this.resolvedProperty[ this.propertyIndex ] = buffer[ offset ]; - - }, - - function setValue_arrayElement_setNeedsUpdate( buffer, offset ) { - - this.resolvedProperty[ this.propertyIndex ] = buffer[ offset ]; - this.targetObject.needsUpdate = true; - - }, - - function setValue_arrayElement_setMatrixWorldNeedsUpdate( buffer, offset ) { - - this.resolvedProperty[ this.propertyIndex ] = buffer[ offset ]; - this.targetObject.matrixWorldNeedsUpdate = true; - - } - - ], [ - - // HasToFromArray - - function setValue_fromArray( buffer, offset ) { - - this.resolvedProperty.fromArray( buffer, offset ); - - }, - - function setValue_fromArray_setNeedsUpdate( buffer, offset ) { - - 
this.resolvedProperty.fromArray( buffer, offset ); - this.targetObject.needsUpdate = true; - - }, - - function setValue_fromArray_setMatrixWorldNeedsUpdate( buffer, offset ) { - - this.resolvedProperty.fromArray( buffer, offset ); - this.targetObject.matrixWorldNeedsUpdate = true; - - } - - ] - - ], - - getValue: function getValue_unbound( targetArray, offset ) { - - this.bind(); - this.getValue( targetArray, offset ); - - // Note: This class uses a State pattern on a per-method basis: - // 'bind' sets 'this.getValue' / 'setValue' and shadows the - // prototype version of these methods with one that represents - // the bound state. When the property is not found, the methods - // become no-ops. - - }, - - setValue: function getValue_unbound( sourceArray, offset ) { - - this.bind(); - this.setValue( sourceArray, offset ); - - }, - - // create getter / setter pair for a property in the scene graph - bind: function () { - - var targetObject = this.node, - parsedPath = this.parsedPath, - - objectName = parsedPath.objectName, - propertyName = parsedPath.propertyName, - propertyIndex = parsedPath.propertyIndex; - - if ( ! targetObject ) { - - targetObject = PropertyBinding.findNode( this.rootNode, parsedPath.nodeName ) || this.rootNode; - - this.node = targetObject; - - } - - // set fail state so we can just 'return' on error - this.getValue = this._getValue_unavailable; - this.setValue = this._setValue_unavailable; - - // ensure there is a value node - if ( ! targetObject ) { - - console.error( 'THREE.PropertyBinding: Trying to update node for track: ' + this.path + ' but it wasn\'t found.' ); - return; - - } - - if ( objectName ) { - - var objectIndex = parsedPath.objectIndex; - - // special cases were we need to reach deeper into the hierarchy to get the face materials.... - switch ( objectName ) { - - case 'materials': - - if ( ! targetObject.material ) { - - console.error( 'THREE.PropertyBinding: Can not bind to material as node does not have a material.', this ); - return; - - } - - if ( ! targetObject.material.materials ) { - - console.error( 'THREE.PropertyBinding: Can not bind to material.materials as node.material does not have a materials array.', this ); - return; - - } - - targetObject = targetObject.material.materials; - - break; - - case 'bones': - - if ( ! targetObject.skeleton ) { - - console.error( 'THREE.PropertyBinding: Can not bind to bones as node does not have a skeleton.', this ); - return; - - } - - // potential future optimization: skip this if propertyIndex is already an integer - // and convert the integer string to a true integer. - - targetObject = targetObject.skeleton.bones; - - // support resolving morphTarget names into indices. 
- for ( var i = 0; i < targetObject.length; i ++ ) { - - if ( targetObject[ i ].name === objectIndex ) { - - objectIndex = i; - break; - - } - - } - - break; - - default: - - if ( targetObject[ objectName ] === undefined ) { - - console.error( 'THREE.PropertyBinding: Can not bind to objectName of node undefined.', this ); - return; - - } - - targetObject = targetObject[ objectName ]; - - } - - - if ( objectIndex !== undefined ) { - - if ( targetObject[ objectIndex ] === undefined ) { - - console.error( 'THREE.PropertyBinding: Trying to bind to objectIndex of objectName, but is undefined.', this, targetObject ); - return; - - } - - targetObject = targetObject[ objectIndex ]; - - } - - } - - // resolve property - var nodeProperty = targetObject[ propertyName ]; - - if ( nodeProperty === undefined ) { - - var nodeName = parsedPath.nodeName; - - console.error( 'THREE.PropertyBinding: Trying to update property for track: ' + nodeName + - '.' + propertyName + ' but it wasn\'t found.', targetObject ); - return; - - } - - // determine versioning scheme - var versioning = this.Versioning.None; - - this.targetObject = targetObject; - - if ( targetObject.needsUpdate !== undefined ) { // material - - versioning = this.Versioning.NeedsUpdate; - - } else if ( targetObject.matrixWorldNeedsUpdate !== undefined ) { // node transform - - versioning = this.Versioning.MatrixWorldNeedsUpdate; - - } - - // determine how the property gets bound - var bindingType = this.BindingType.Direct; - - if ( propertyIndex !== undefined ) { - - // access a sub element of the property array (only primitives are supported right now) - - if ( propertyName === "morphTargetInfluences" ) { - - // potential optimization, skip this if propertyIndex is already an integer, and convert the integer string to a true integer. - - // support resolving morphTarget names into indices. - if ( ! targetObject.geometry ) { - - console.error( 'THREE.PropertyBinding: Can not bind to morphTargetInfluences because node does not have a geometry.', this ); - return; - - } - - if ( targetObject.geometry.isBufferGeometry ) { - - if ( ! targetObject.geometry.morphAttributes ) { - - console.error( 'THREE.PropertyBinding: Can not bind to morphTargetInfluences because node does not have a geometry.morphAttributes.', this ); - return; - - } - - for ( var i = 0; i < this.node.geometry.morphAttributes.position.length; i ++ ) { - - if ( targetObject.geometry.morphAttributes.position[ i ].name === propertyIndex ) { - - propertyIndex = i; - break; - - } - - } - - - } else { - - if ( ! 
targetObject.geometry.morphTargets ) { - - console.error( 'THREE.PropertyBinding: Can not bind to morphTargetInfluences because node does not have a geometry.morphTargets.', this ); - return; - - } - - for ( var i = 0; i < this.node.geometry.morphTargets.length; i ++ ) { - - if ( targetObject.geometry.morphTargets[ i ].name === propertyIndex ) { - - propertyIndex = i; - break; - - } - - } - - } - - } - - bindingType = this.BindingType.ArrayElement; - - this.resolvedProperty = nodeProperty; - this.propertyIndex = propertyIndex; - - } else if ( nodeProperty.fromArray !== undefined && nodeProperty.toArray !== undefined ) { - - // must use copy for Object3D.Euler/Quaternion - - bindingType = this.BindingType.HasFromToArray; - - this.resolvedProperty = nodeProperty; - - } else if ( Array.isArray( nodeProperty ) ) { - - bindingType = this.BindingType.EntireArray; - - this.resolvedProperty = nodeProperty; - - } else { - - this.propertyName = propertyName; - - } - - // select getter / setter - this.getValue = this.GetterByBindingType[ bindingType ]; - this.setValue = this.SetterByBindingTypeAndVersioning[ bindingType ][ versioning ]; - - }, - - unbind: function () { - - this.node = null; - - // back to the prototype version of getValue / setValue - // note: avoiding to mutate the shape of 'this' via 'delete' - this.getValue = this._getValue_unbound; - this.setValue = this._setValue_unbound; - - } - -} ); - -//!\ DECLARE ALIAS AFTER assign prototype ! -Object.assign( PropertyBinding.prototype, { - - // initial state of these methods that calls 'bind' - _getValue_unbound: PropertyBinding.prototype.getValue, - _setValue_unbound: PropertyBinding.prototype.setValue, - -} ); - -export { PropertyBinding }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/geometries/TextGeometry.js b/spaces/banana-projects/web3d/node_modules/three/src/geometries/TextGeometry.js deleted file mode 100644 index bd8ba8a7607b6145a8de9218d2919c1e42d29a48..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/geometries/TextGeometry.js +++ /dev/null @@ -1,81 +0,0 @@ -/** - * @author zz85 / http://www.lab4games.net/zz85/blog - * @author alteredq / http://alteredqualia.com/ - * - * Text = 3D Text - * - * parameters = { - * font: , // font - * - * size: , // size of the text - * height: , // thickness to extrude text - * curveSegments: , // number of points on the curves - * - * bevelEnabled: , // turn on bevel - * bevelThickness: , // how deep into text bevel goes - * bevelSize: // how far from text outline is bevel - * } - */ - -import { Geometry } from '../core/Geometry.js'; -import { ExtrudeBufferGeometry } from './ExtrudeGeometry.js'; - -// TextGeometry - -function TextGeometry( text, parameters ) { - - Geometry.call( this ); - - this.type = 'TextGeometry'; - - this.parameters = { - text: text, - parameters: parameters - }; - - this.fromBufferGeometry( new TextBufferGeometry( text, parameters ) ); - this.mergeVertices(); - -} - -TextGeometry.prototype = Object.create( Geometry.prototype ); -TextGeometry.prototype.constructor = TextGeometry; - -// TextBufferGeometry - -function TextBufferGeometry( text, parameters ) { - - parameters = parameters || {}; - - var font = parameters.font; - - if ( ! ( font && font.isFont ) ) { - - console.error( 'THREE.TextGeometry: font parameter is not an instance of THREE.Font.' 
); - return new Geometry(); - - } - - var shapes = font.generateShapes( text, parameters.size ); - - // translate parameters to ExtrudeGeometry API - - parameters.depth = parameters.height !== undefined ? parameters.height : 50; - - // defaults - - if ( parameters.bevelThickness === undefined ) parameters.bevelThickness = 10; - if ( parameters.bevelSize === undefined ) parameters.bevelSize = 8; - if ( parameters.bevelEnabled === undefined ) parameters.bevelEnabled = false; - - ExtrudeBufferGeometry.call( this, shapes, parameters ); - - this.type = 'TextBufferGeometry'; - -} - -TextBufferGeometry.prototype = Object.create( ExtrudeBufferGeometry.prototype ); -TextBufferGeometry.prototype.constructor = TextBufferGeometry; - - -export { TextGeometry, TextBufferGeometry }; diff --git a/spaces/bguberfain/Detic/detic/data/datasets/coco_zeroshot.py b/spaces/bguberfain/Detic/detic/data/datasets/coco_zeroshot.py deleted file mode 100644 index aee895de41db95e379874fa6e1badd95c5fe6742..0000000000000000000000000000000000000000 --- a/spaces/bguberfain/Detic/detic/data/datasets/coco_zeroshot.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import os - -from detectron2.data.datasets.register_coco import register_coco_instances -from detectron2.data.datasets.builtin_meta import _get_builtin_metadata -from .lvis_v1 import custom_register_lvis_instances - -categories_seen = [ - {'id': 1, 'name': 'person'}, - {'id': 2, 'name': 'bicycle'}, - {'id': 3, 'name': 'car'}, - {'id': 4, 'name': 'motorcycle'}, - {'id': 7, 'name': 'train'}, - {'id': 8, 'name': 'truck'}, - {'id': 9, 'name': 'boat'}, - {'id': 15, 'name': 'bench'}, - {'id': 16, 'name': 'bird'}, - {'id': 19, 'name': 'horse'}, - {'id': 20, 'name': 'sheep'}, - {'id': 23, 'name': 'bear'}, - {'id': 24, 'name': 'zebra'}, - {'id': 25, 'name': 'giraffe'}, - {'id': 27, 'name': 'backpack'}, - {'id': 31, 'name': 'handbag'}, - {'id': 33, 'name': 'suitcase'}, - {'id': 34, 'name': 'frisbee'}, - {'id': 35, 'name': 'skis'}, - {'id': 38, 'name': 'kite'}, - {'id': 42, 'name': 'surfboard'}, - {'id': 44, 'name': 'bottle'}, - {'id': 48, 'name': 'fork'}, - {'id': 50, 'name': 'spoon'}, - {'id': 51, 'name': 'bowl'}, - {'id': 52, 'name': 'banana'}, - {'id': 53, 'name': 'apple'}, - {'id': 54, 'name': 'sandwich'}, - {'id': 55, 'name': 'orange'}, - {'id': 56, 'name': 'broccoli'}, - {'id': 57, 'name': 'carrot'}, - {'id': 59, 'name': 'pizza'}, - {'id': 60, 'name': 'donut'}, - {'id': 62, 'name': 'chair'}, - {'id': 65, 'name': 'bed'}, - {'id': 70, 'name': 'toilet'}, - {'id': 72, 'name': 'tv'}, - {'id': 73, 'name': 'laptop'}, - {'id': 74, 'name': 'mouse'}, - {'id': 75, 'name': 'remote'}, - {'id': 78, 'name': 'microwave'}, - {'id': 79, 'name': 'oven'}, - {'id': 80, 'name': 'toaster'}, - {'id': 82, 'name': 'refrigerator'}, - {'id': 84, 'name': 'book'}, - {'id': 85, 'name': 'clock'}, - {'id': 86, 'name': 'vase'}, - {'id': 90, 'name': 'toothbrush'}, -] - -categories_unseen = [ - {'id': 5, 'name': 'airplane'}, - {'id': 6, 'name': 'bus'}, - {'id': 17, 'name': 'cat'}, - {'id': 18, 'name': 'dog'}, - {'id': 21, 'name': 'cow'}, - {'id': 22, 'name': 'elephant'}, - {'id': 28, 'name': 'umbrella'}, - {'id': 32, 'name': 'tie'}, - {'id': 36, 'name': 'snowboard'}, - {'id': 41, 'name': 'skateboard'}, - {'id': 47, 'name': 'cup'}, - {'id': 49, 'name': 'knife'}, - {'id': 61, 'name': 'cake'}, - {'id': 63, 'name': 'couch'}, - {'id': 76, 'name': 'keyboard'}, - {'id': 81, 'name': 'sink'}, - {'id': 87, 'name': 'scissors'}, -] - -def _get_metadata(cat): - if cat == 'all': - 
return _get_builtin_metadata('coco') - elif cat == 'seen': - id_to_name = {x['id']: x['name'] for x in categories_seen} - else: - assert cat == 'unseen' - id_to_name = {x['id']: x['name'] for x in categories_unseen} - - thing_dataset_id_to_contiguous_id = { - x: i for i, x in enumerate(sorted(id_to_name))} - thing_classes = [id_to_name[k] for k in sorted(id_to_name)] - return { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes} - -_PREDEFINED_SPLITS_COCO = { - "coco_zeroshot_train": ("coco/train2017", "coco/zero-shot/instances_train2017_seen_2.json", 'seen'), - "coco_zeroshot_val": ("coco/val2017", "coco/zero-shot/instances_val2017_unseen_2.json", 'unseen'), - "coco_not_zeroshot_val": ("coco/val2017", "coco/zero-shot/instances_val2017_seen_2.json", 'seen'), - "coco_generalized_zeroshot_val": ("coco/val2017", "coco/zero-shot/instances_val2017_all_2_oriorder.json", 'all'), - "coco_zeroshot_train_oriorder": ("coco/train2017", "coco/zero-shot/instances_train2017_seen_2_oriorder.json", 'all'), -} - -for key, (image_root, json_file, cat) in _PREDEFINED_SPLITS_COCO.items(): - register_coco_instances( - key, - _get_metadata(cat), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) - -_CUSTOM_SPLITS_COCO = { - "cc3m_coco_train_tags": ("cc3m/training/", "cc3m/coco_train_image_info_tags.json"), - "coco_caption_train_tags": ("coco/train2017/", "coco/annotations/captions_train2017_tags_allcaps.json"),} - -for key, (image_root, json_file) in _CUSTOM_SPLITS_COCO.items(): - custom_register_lvis_instances( - key, - _get_builtin_metadata('coco'), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) \ No newline at end of file diff --git a/spaces/bingbing520/ChatGPT/modules/webui_locale.py b/spaces/bingbing520/ChatGPT/modules/webui_locale.py deleted file mode 100644 index 1ce4d97b9b41cbb2d9be3fdadc4c85f6ef897604..0000000000000000000000000000000000000000 --- a/spaces/bingbing520/ChatGPT/modules/webui_locale.py +++ /dev/null @@ -1,26 +0,0 @@ -import os -import locale -import commentjson as json - -class I18nAuto: - def __init__(self): - if os.path.exists("config.json"): - with open("config.json", "r", encoding='utf-8') as f: - config = json.load(f) - else: - config = {} - lang_config = config.get("language", "auto") - language = os.environ.get("LANGUAGE", lang_config) - if language == "auto": - language = locale.getdefaultlocale()[0] # get the language code of the system (ex. zh_CN) - self.language_map = {} - self.file_is_exists = os.path.isfile(f"./locale/{language}.json") - if self.file_is_exists: - with open(f"./locale/{language}.json", "r", encoding="utf-8") as f: - self.language_map.update(json.load(f)) - - def __call__(self, key): - if self.file_is_exists and key in self.language_map: - return self.language_map[key] - else: - return key diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/_explorers.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/_explorers.py deleted file mode 100644 index 0bf4ca57b63f5f9308bd1178ddbde5d8f06748e5..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/_explorers.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import treetable as tt - -from .._base_explorers import BaseExplorer - - -class DiffusionExplorer(BaseExplorer): - eval_metrics = ["sisnr", "visqol"] - - def stages(self): - return ["train", "valid", "valid_ema", "evaluate", "evaluate_ema"] - - def get_grid_meta(self): - """Returns the list of Meta information to display for each XP/job. - """ - return [ - tt.leaf("index", align=">"), - tt.leaf("name", wrap=140), - tt.leaf("state"), - tt.leaf("sig", align=">"), - ] - - def get_grid_metrics(self): - """Return the metrics that should be displayed in the tracking table. - """ - return [ - tt.group( - "train", - [ - tt.leaf("epoch"), - tt.leaf("loss", ".3%"), - ], - align=">", - ), - tt.group( - "valid", - [ - tt.leaf("loss", ".3%"), - # tt.leaf("loss_0", ".3%"), - ], - align=">", - ), - tt.group( - "valid_ema", - [ - tt.leaf("loss", ".3%"), - # tt.leaf("loss_0", ".3%"), - ], - align=">", - ), - tt.group( - "evaluate", [tt.leaf("rvm", ".4f"), tt.leaf("rvm_0", ".4f"), - tt.leaf("rvm_1", ".4f"), tt.leaf("rvm_2", ".4f"), - tt.leaf("rvm_3", ".4f"), ], align=">" - ), - tt.group( - "evaluate_ema", [tt.leaf("rvm", ".4f"), tt.leaf("rvm_0", ".4f"), - tt.leaf("rvm_1", ".4f"), tt.leaf("rvm_2", ".4f"), - tt.leaf("rvm_3", ".4f")], align=">" - ), - ] diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/mmdet_wrapper.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/mmdet_wrapper.py deleted file mode 100644 index 293b3e9faf34c48456cd3fff37b966af9042fe4e..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/mmdet_wrapper.py +++ /dev/null @@ -1,273 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import itertools -import logging -import numpy as np -from collections import OrderedDict -from collections.abc import Mapping -from typing import Dict, List, Optional, Tuple, Union -import torch -from omegaconf import DictConfig, OmegaConf -from torch import Tensor, nn - -from detectron2.layers import ShapeSpec -from detectron2.structures import BitMasks, Boxes, ImageList, Instances -from detectron2.utils.events import get_event_storage - -from .backbone import Backbone - -logger = logging.getLogger(__name__) - - -def _to_container(cfg): - """ - mmdet will assert the type of dict/list. - So convert omegaconf objects to dict/list. - """ - if isinstance(cfg, DictConfig): - cfg = OmegaConf.to_container(cfg, resolve=True) - from mmcv.utils import ConfigDict - - return ConfigDict(cfg) - - -class MMDetBackbone(Backbone): - """ - Wrapper of mmdetection backbones to use in detectron2. - - mmdet backbones produce list/tuple of tensors, while detectron2 backbones - produce a dict of tensors. This class wraps the given backbone to produce - output in detectron2's convention, so it can be used in place of detectron2 - backbones. - """ - - def __init__( - self, - backbone: Union[nn.Module, Mapping], - neck: Union[nn.Module, Mapping, None] = None, - *, - output_shapes: List[ShapeSpec], - output_names: Optional[List[str]] = None, - ): - """ - Args: - backbone: either a backbone module or a mmdet config dict that defines a - backbone. The backbone takes a 4D image tensor and returns a - sequence of tensors. - neck: either a backbone module or a mmdet config dict that defines a - neck. The neck takes outputs of backbone and returns a - sequence of tensors. If None, no neck is used. 
- output_shapes: shape for every output of the backbone (or neck, if given). - stride and channels are often needed. - output_names: names for every output of the backbone (or neck, if given). - By default, will use "out0", "out1", ... - """ - super().__init__() - if isinstance(backbone, Mapping): - from mmdet.models import build_backbone - - backbone = build_backbone(_to_container(backbone)) - self.backbone = backbone - - if isinstance(neck, Mapping): - from mmdet.models import build_neck - - neck = build_neck(_to_container(neck)) - self.neck = neck - - # "Neck" weights, if any, are part of neck itself. This is the interface - # of mmdet so we follow it. Reference: - # https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/detectors/two_stage.py - logger.info("Initializing mmdet backbone weights...") - self.backbone.init_weights() - # train() in mmdet modules is non-trivial, and has to be explicitly - # called. Reference: - # https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/backbones/resnet.py - self.backbone.train() - if self.neck is not None: - logger.info("Initializing mmdet neck weights ...") - if isinstance(self.neck, nn.Sequential): - for m in self.neck: - m.init_weights() - else: - self.neck.init_weights() - self.neck.train() - - self._output_shapes = output_shapes - if not output_names: - output_names = [f"out{i}" for i in range(len(output_shapes))] - self._output_names = output_names - - def forward(self, x) -> Dict[str, Tensor]: - outs = self.backbone(x) - if self.neck is not None: - outs = self.neck(outs) - assert isinstance( - outs, (list, tuple) - ), "mmdet backbone should return a list/tuple of tensors!" - if len(outs) != len(self._output_shapes): - raise ValueError( - "Length of output_shapes does not match outputs from the mmdet backbone: " - f"{len(outs)} != {len(self._output_shapes)}" - ) - return {k: v for k, v in zip(self._output_names, outs)} - - def output_shape(self) -> Dict[str, ShapeSpec]: - return {k: v for k, v in zip(self._output_names, self._output_shapes)} - - -class MMDetDetector(nn.Module): - """ - Wrapper of a mmdetection detector model, for detection and instance segmentation. - Input/output formats of this class follow detectron2's convention, so a - mmdetection model can be trained and evaluated in detectron2. - """ - - def __init__( - self, - detector: Union[nn.Module, Mapping], - *, - # Default is 32 regardless of model: - # https://github.com/open-mmlab/mmdetection/tree/master/configs/_base_/datasets - size_divisibility=32, - pixel_mean: Tuple[float], - pixel_std: Tuple[float], - ): - """ - Args: - detector: a mmdet detector, or a mmdet config dict that defines a detector. - size_divisibility: pad input images to multiple of this number - pixel_mean: per-channel mean to normalize input image - pixel_std: per-channel stddev to normalize input image - """ - super().__init__() - if isinstance(detector, Mapping): - from mmdet.models import build_detector - - detector = build_detector(_to_container(detector)) - self.detector = detector - self.detector.init_weights() - self.size_divisibility = size_divisibility - - self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False) - self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False) - assert ( - self.pixel_mean.shape == self.pixel_std.shape - ), f"{self.pixel_mean} and {self.pixel_std} have different shapes!" 
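For illustration, a minimal sketch of how the MMDetBackbone wrapper above could be instantiated from mmdet-style config mappings; the ResNet/FPN settings, channel counts, strides, and output names below are illustrative assumptions rather than values taken from this repository, and mmcv/mmdet must be installed for the internal build_backbone/build_neck calls to resolve them.

from detectron2.layers import ShapeSpec

# Hypothetical mmdet config dicts; the exact keys depend on the installed mmdet version.
backbone_cfg = dict(
    type="ResNet", depth=50, num_stages=4,
    out_indices=(0, 1, 2, 3), frozen_stages=1, norm_eval=True,
)
neck_cfg = dict(
    type="FPN", in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5,
)

# The wrapper builds the mmdet modules internally and re-keys their tuple output
# into a {name: tensor} dict, so it can stand in for a native detectron2 backbone.
backbone = MMDetBackbone(
    backbone=backbone_cfg,
    neck=neck_cfg,
    output_shapes=[ShapeSpec(channels=256, stride=s) for s in (4, 8, 16, 32, 64)],
    output_names=["p2", "p3", "p4", "p5", "p6"],
)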
- - def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]): - images = [x["image"].to(self.device) for x in batched_inputs] - images = [(x - self.pixel_mean) / self.pixel_std for x in images] - images = ImageList.from_tensors(images, size_divisibility=self.size_divisibility).tensor - metas = [] - rescale = {"height" in x for x in batched_inputs} - if len(rescale) != 1: - raise ValueError("Some inputs have original height/width, but some don't!") - rescale = list(rescale)[0] - output_shapes = [] - for input in batched_inputs: - meta = {} - c, h, w = input["image"].shape - meta["img_shape"] = meta["ori_shape"] = (h, w, c) - if rescale: - scale_factor = np.array( - [w / input["width"], h / input["height"]] * 2, dtype="float32" - ) - ori_shape = (input["height"], input["width"]) - output_shapes.append(ori_shape) - meta["ori_shape"] = ori_shape + (c,) - else: - scale_factor = 1.0 - output_shapes.append((h, w)) - meta["scale_factor"] = scale_factor - meta["flip"] = False - padh, padw = images.shape[-2:] - meta["pad_shape"] = (padh, padw, c) - metas.append(meta) - - if self.training: - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - if gt_instances[0].has("gt_masks"): - from mmdet.core import PolygonMasks as mm_PolygonMasks, BitmapMasks as mm_BitMasks - - def convert_mask(m, shape): - # mmdet mask format - if isinstance(m, BitMasks): - return mm_BitMasks(m.tensor.cpu().numpy(), shape[0], shape[1]) - else: - return mm_PolygonMasks(m.polygons, shape[0], shape[1]) - - gt_masks = [convert_mask(x.gt_masks, x.image_size) for x in gt_instances] - losses_and_metrics = self.detector.forward_train( - images, - metas, - [x.gt_boxes.tensor for x in gt_instances], - [x.gt_classes for x in gt_instances], - gt_masks=gt_masks, - ) - else: - losses_and_metrics = self.detector.forward_train( - images, - metas, - [x.gt_boxes.tensor for x in gt_instances], - [x.gt_classes for x in gt_instances], - ) - return _parse_losses(losses_and_metrics) - else: - results = self.detector.simple_test(images, metas, rescale=rescale) - results = [ - {"instances": _convert_mmdet_result(r, shape)} - for r, shape in zip(results, output_shapes) - ] - return results - - @property - def device(self): - return self.pixel_mean.device - - -# Reference: show_result() in -# https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/detectors/base.py -def _convert_mmdet_result(result, shape: Tuple[int, int]) -> Instances: - if isinstance(result, tuple): - bbox_result, segm_result = result - if isinstance(segm_result, tuple): - segm_result = segm_result[0] - else: - bbox_result, segm_result = result, None - - bboxes = torch.from_numpy(np.vstack(bbox_result)) # Nx5 - bboxes, scores = bboxes[:, :4], bboxes[:, -1] - labels = [ - torch.full((bbox.shape[0],), i, dtype=torch.int32) for i, bbox in enumerate(bbox_result) - ] - labels = torch.cat(labels) - inst = Instances(shape) - inst.pred_boxes = Boxes(bboxes) - inst.scores = scores - inst.pred_classes = labels - - if segm_result is not None and len(labels) > 0: - segm_result = list(itertools.chain(*segm_result)) - segm_result = [torch.from_numpy(x) if isinstance(x, np.ndarray) else x for x in segm_result] - segm_result = torch.stack(segm_result, dim=0) - inst.pred_masks = segm_result - return inst - - -# reference: https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/detectors/base.py -def _parse_losses(losses: Dict[str, Tensor]) -> Dict[str, Tensor]: - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if 
isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError(f"{loss_name} is not a tensor or list of tensors") - - if "loss" not in loss_name: - # put metrics to storage; don't return them - storage = get_event_storage() - value = log_vars.pop(loss_name).cpu().item() - storage.put_scalar(loss_name, value) - return log_vars diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/cse/embedder.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/cse/embedder.py deleted file mode 100644 index 7f52b06032ed97b2d652931646f0855ef342ada9..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/cse/embedder.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -import logging -import numpy as np -import pickle -from enum import Enum -from typing import Optional -import torch -from torch import nn - -from detectron2.config import CfgNode -from detectron2.utils.file_io import PathManager - -from .vertex_direct_embedder import VertexDirectEmbedder -from .vertex_feature_embedder import VertexFeatureEmbedder - - -class EmbedderType(Enum): - """ - Embedder type which defines how vertices are mapped into the embedding space: - - "vertex_direct": direct vertex embedding - - "vertex_feature": embedding vertex features - """ - - VERTEX_DIRECT = "vertex_direct" - VERTEX_FEATURE = "vertex_feature" - - -def create_embedder(embedder_spec: CfgNode, embedder_dim: int) -> nn.Module: - """ - Create an embedder based on the provided configuration - - Args: - embedder_spec (CfgNode): embedder configuration - embedder_dim (int): embedding space dimensionality - Return: - An embedder instance for the specified configuration - Raises ValueError, in case of unexpected embedder type - """ - embedder_type = EmbedderType(embedder_spec.TYPE) - if embedder_type == EmbedderType.VERTEX_DIRECT: - embedder = VertexDirectEmbedder( - num_vertices=embedder_spec.NUM_VERTICES, - embed_dim=embedder_dim, - ) - if embedder_spec.INIT_FILE != "": - embedder.load(embedder_spec.INIT_FILE) - elif embedder_type == EmbedderType.VERTEX_FEATURE: - embedder = VertexFeatureEmbedder( - num_vertices=embedder_spec.NUM_VERTICES, - feature_dim=embedder_spec.FEATURE_DIM, - embed_dim=embedder_dim, - train_features=embedder_spec.FEATURES_TRAINABLE, - ) - if embedder_spec.INIT_FILE != "": - embedder.load(embedder_spec.INIT_FILE) - else: - raise ValueError(f"Unexpected embedder type {embedder_type}") - - if not embedder_spec.IS_TRAINABLE: - embedder.requires_grad_(False) - - return embedder - - -class Embedder(nn.Module): - """ - Embedder module that serves as a container for embedders to use with different - meshes. Extends Module to automatically save / load state dict. - """ - - DEFAULT_MODEL_CHECKPOINT_PREFIX = "roi_heads.embedder." - - def __init__(self, cfg: CfgNode): - """ - Initialize mesh embedders. An embedder for mesh `i` is stored in a submodule - "embedder_{i}". 
- - Args: - cfg (CfgNode): configuration options - """ - super(Embedder, self).__init__() - self.mesh_names = set() - embedder_dim = cfg.MODEL.ROI_DENSEPOSE_HEAD.CSE.EMBED_SIZE - logger = logging.getLogger(__name__) - for mesh_name, embedder_spec in cfg.MODEL.ROI_DENSEPOSE_HEAD.CSE.EMBEDDERS.items(): - logger.info(f"Adding embedder embedder_{mesh_name} with spec {embedder_spec}") - self.add_module(f"embedder_{mesh_name}", create_embedder(embedder_spec, embedder_dim)) - self.mesh_names.add(mesh_name) - if cfg.MODEL.WEIGHTS != "": - self.load_from_model_checkpoint(cfg.MODEL.WEIGHTS) - - def load_from_model_checkpoint(self, fpath: str, prefix: Optional[str] = None): - if prefix is None: - prefix = Embedder.DEFAULT_MODEL_CHECKPOINT_PREFIX - state_dict = None - if fpath.endswith(".pkl"): - with PathManager.open(fpath, "rb") as hFile: - state_dict = pickle.load(hFile, encoding="latin1") # pyre-ignore[6] - else: - with PathManager.open(fpath, "rb") as hFile: - # pyre-fixme[6]: For 1st param expected `Union[PathLike[typing.Any], - # IO[bytes], str, BinaryIO]` but got `Union[IO[bytes], IO[str]]`. - state_dict = torch.load(hFile, map_location=torch.device("cpu")) - if state_dict is not None and "model" in state_dict: - state_dict_local = {} - for key in state_dict["model"]: - if key.startswith(prefix): - v_key = state_dict["model"][key] - if isinstance(v_key, np.ndarray): - v_key = torch.from_numpy(v_key) - state_dict_local[key[len(prefix) :]] = v_key - # non-strict loading to finetune on different meshes - self.load_state_dict(state_dict_local, strict=False) - - def forward(self, mesh_name: str) -> torch.Tensor: - """ - Produce vertex embeddings for the specific mesh; vertex embeddings are - a tensor of shape [N, D] where: - N = number of vertices - D = number of dimensions in the embedding space - Args: - mesh_name (str): name of a mesh for which to obtain vertex embeddings - Return: - Vertex embeddings, a tensor of shape [N, D] - """ - return getattr(self, f"embedder_{mesh_name}")() - - def has_embeddings(self, mesh_name: str) -> bool: - return hasattr(self, f"embedder_{mesh_name}") diff --git a/spaces/bzd4576/sovits-sin/transforms.py b/spaces/bzd4576/sovits-sin/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/bzd4576/sovits-sin/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - 
inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = 
derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/cahya/persona-chatbot/app/app.py b/spaces/cahya/persona-chatbot/app/app.py deleted file mode 100644 index 193abb5a72789fd816e9f933efa831e8aa01a25d..0000000000000000000000000000000000000000 --- a/spaces/cahya/persona-chatbot/app/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import streamlit as st -import codecs -import streamlit.components.v1 as stc -import pathlib - - -def stc_chatbot(root_dir, width=700, height=600): - html_file = root_dir/"app/chatbot.html" - css_file = root_dir/"app/css/chatbot.css" - js_file = root_dir/"app/js/chatbot.js" - if css_file.exists() and js_file.exists(): - html = codecs.open(html_file, "r").read() - css = codecs.open(css_file, "r").read() - js = codecs.open(js_file, "r").read() - html = html.replace('', "") - html = html.replace('', "") - stc.html(html, width=width, height=height, scrolling=False) - - -st.header("English and Indonesian Persona Chatbot") - -description = f"This application uses the Indonesian GPT2 model; we finetuned it with the original English persona " \ - f"dataset and its Indonesian translation. It gives the chatbot the capability to understand and talk " \ - f"in both languages (code-switching). The finetuning is based on the Huggingface's Conversational AI " \ - f"with Transfer Learning." 
-st.markdown(description) -root_dir = pathlib.Path(".") -stc_chatbot(root_dir) diff --git a/spaces/caixyz/ok/README.md b/spaces/caixyz/ok/README.md deleted file mode 100644 index 0340229a9e9530d2a4f3d7c4cc145c046df349f4..0000000000000000000000000000000000000000 --- a/spaces/caixyz/ok/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Ok -emoji: 🏃 -colorFrom: green -colorTo: pink -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/open_clip/model.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/open_clip/model.py deleted file mode 100644 index f41e6d6d0b0bbecacb90744928a516b75d218214..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/open_clip/model.py +++ /dev/null @@ -1,936 +0,0 @@ -""" CLAP Model - -Adapted from CLIP: https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. -Adapted to the Audio Task. -""" - -from collections import OrderedDict -from dataclasses import dataclass -from email.mime import audio -from typing import Tuple, Union, Callable, Optional - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn - -from .timm_model import TimmModel -import logging -from .utils import freeze_batch_norm_2d - -from .pann_model import create_pann_model -from .htsat import create_htsat_model -from transformers import BertModel, RobertaModel, BartModel -from transformers.tokenization_utils_base import BatchEncoding - - -class MLPLayers(nn.Module): - def __init__(self, units=[512, 512, 512], nonlin=nn.ReLU(), dropout=0.1): - super(MLPLayers, self).__init__() - self.nonlin = nonlin - self.dropout = dropout - - sequence = [] - for u0, u1 in zip(units[:-1], units[1:]): - sequence.append(nn.Linear(u0, u1)) - sequence.append(self.nonlin) - sequence.append(nn.Dropout(self.dropout)) - sequence = sequence[:-2] - - self.sequential = nn.Sequential(*sequence) - - def forward(self, X): - X = self.sequential(X) - return X - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1): - super().__init__() - - # all conv layers have stride 1. 
an avgpool is performed after the second convolution when stride > 1 - self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - - self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - - self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity() - - self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - - self.relu = nn.ReLU(inplace=True) - self.downsample = None - self.stride = stride - - if stride > 1 or inplanes != planes * Bottleneck.expansion: - # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1 - self.downsample = nn.Sequential( - OrderedDict( - [ - ("-1", nn.AvgPool2d(stride)), - ( - "0", - nn.Conv2d( - inplanes, - planes * self.expansion, - 1, - stride=1, - bias=False, - ), - ), - ("1", nn.BatchNorm2d(planes * self.expansion)), - ] - ) - ) - - def forward(self, x: torch.Tensor): - identity = x - - out = self.relu(self.bn1(self.conv1(x))) - out = self.relu(self.bn2(self.conv2(out))) - out = self.avgpool(out) - out = self.bn3(self.conv3(out)) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - return out - - -class AttentionPool2d(nn.Module): - def __init__( - self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None - ): - super().__init__() - self.positional_embedding = nn.Parameter( - torch.randn(spacial_dim**2 + 1, embed_dim) / embed_dim**0.5 - ) - self.k_proj = nn.Linear(embed_dim, embed_dim) - self.q_proj = nn.Linear(embed_dim, embed_dim) - self.v_proj = nn.Linear(embed_dim, embed_dim) - self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim) - self.num_heads = num_heads - - def forward(self, x): - x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute( - 2, 0, 1 - ) # NCHW -> (HW)NC - x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC - x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC - x, _ = F.multi_head_attention_forward( - query=x, - key=x, - value=x, - embed_dim_to_check=x.shape[-1], - num_heads=self.num_heads, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - in_proj_weight=None, - in_proj_bias=torch.cat( - [self.q_proj.bias, self.k_proj.bias, self.v_proj.bias] - ), - bias_k=None, - bias_v=None, - add_zero_attn=False, - dropout_p=0, - out_proj_weight=self.c_proj.weight, - out_proj_bias=self.c_proj.bias, - use_separate_proj_weight=True, - training=self.training, - need_weights=False, - ) - - return x[0] - - -class ModifiedResNet(nn.Module): - """ - A ResNet class that is similar to torchvision's but contains the following changes: - - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool. 
- - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1 - - The final pooling layer is a QKV attention instead of an average pool - """ - - def __init__(self, layers, output_dim, heads, image_size=224, width=64): - super().__init__() - self.output_dim = output_dim - self.image_size = image_size - - # the 3-layer stem - self.conv1 = nn.Conv2d( - 3, width // 2, kernel_size=3, stride=2, padding=1, bias=False - ) - self.bn1 = nn.BatchNorm2d(width // 2) - self.conv2 = nn.Conv2d( - width // 2, width // 2, kernel_size=3, padding=1, bias=False - ) - self.bn2 = nn.BatchNorm2d(width // 2) - self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False) - self.bn3 = nn.BatchNorm2d(width) - self.avgpool = nn.AvgPool2d(2) - self.relu = nn.ReLU(inplace=True) - - # residual layers - self._inplanes = width # this is a *mutable* variable used during construction - self.layer1 = self._make_layer(width, layers[0]) - self.layer2 = self._make_layer(width * 2, layers[1], stride=2) - self.layer3 = self._make_layer(width * 4, layers[2], stride=2) - self.layer4 = self._make_layer(width * 8, layers[3], stride=2) - - embed_dim = width * 32 # the ResNet feature dimension - self.attnpool = AttentionPool2d(image_size // 32, embed_dim, heads, output_dim) - - self.init_parameters() - - def _make_layer(self, planes, blocks, stride=1): - layers = [Bottleneck(self._inplanes, planes, stride)] - - self._inplanes = planes * Bottleneck.expansion - for _ in range(1, blocks): - layers.append(Bottleneck(self._inplanes, planes)) - - return nn.Sequential(*layers) - - def init_parameters(self): - if self.attnpool is not None: - std = self.attnpool.c_proj.in_features**-0.5 - nn.init.normal_(self.attnpool.q_proj.weight, std=std) - nn.init.normal_(self.attnpool.k_proj.weight, std=std) - nn.init.normal_(self.attnpool.v_proj.weight, std=std) - nn.init.normal_(self.attnpool.c_proj.weight, std=std) - - for resnet_block in [self.layer1, self.layer2, self.layer3, self.layer4]: - for name, param in resnet_block.named_parameters(): - if name.endswith("bn3.weight"): - nn.init.zeros_(param) - - def lock(self, unlocked_groups=0, freeze_bn_stats=False): - assert ( - unlocked_groups == 0 - ), "partial locking not currently supported for this model" - for param in self.parameters(): - param.requires_grad = False - if freeze_bn_stats: - freeze_batch_norm_2d(self) - - def stem(self, x): - for conv, bn in [ - (self.conv1, self.bn1), - (self.conv2, self.bn2), - (self.conv3, self.bn3), - ]: - x = self.relu(bn(conv(x))) - x = self.avgpool(x) - return x - - def forward(self, x): - x = self.stem(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - x = self.attnpool(x) - - return x - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps) - return x.to(orig_type) - - -class QuickGELU(nn.Module): - # NOTE This is slower than nn.GELU or nn.SiLU and uses more GPU memory - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, d_model: int, n_head: int, act_layer: Callable = nn.GELU): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential( - OrderedDict( - [ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", act_layer()), 
- ("c_proj", nn.Linear(d_model * 4, d_model)), - ] - ) - ) - self.ln_2 = LayerNorm(d_model) - - def attention(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None): - return self.attn(x, x, x, need_weights=False, attn_mask=attn_mask)[0] - - def forward(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None): - x = x + self.attention(self.ln_1(x), attn_mask=attn_mask) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__( - self, width: int, layers: int, heads: int, act_layer: Callable = nn.GELU - ): - super().__init__() - self.width = width - self.layers = layers - self.resblocks = nn.ModuleList( - [ - ResidualAttentionBlock(width, heads, act_layer=act_layer) - for _ in range(layers) - ] - ) - - def forward(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None): - for r in self.resblocks: - x = r(x, attn_mask=attn_mask) - return x - - -class VisualTransformer(nn.Module): - def __init__( - self, - image_size: int, - patch_size: int, - width: int, - layers: int, - heads: int, - output_dim: int, - act_layer: Callable = nn.GELU, - ): - super().__init__() - self.image_size = image_size - self.output_dim = output_dim - self.conv1 = nn.Conv2d( - in_channels=3, - out_channels=width, - kernel_size=patch_size, - stride=patch_size, - bias=False, - ) - - scale = width**-0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter( - scale * torch.randn((image_size // patch_size) ** 2 + 1, width) - ) - self.ln_pre = LayerNorm(width) - - self.text_branch = Transformer(width, layers, heads, act_layer=act_layer) - - self.ln_post = LayerNorm(width) - self.proj = nn.Parameter(scale * torch.randn(width, output_dim)) - - def lock(self, unlocked_groups=0, freeze_bn_stats=False): - assert ( - unlocked_groups == 0 - ), "partial locking not currently supported for this model" - for param in self.parameters(): - param.requires_grad = False - - def forward(self, x: torch.Tensor): - x = self.conv1(x) # shape = [*, width, grid, grid] - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - x = torch.cat( - [ - self.class_embedding.to(x.dtype) - + torch.zeros( - x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device - ), - x, - ], - dim=1, - ) # shape = [*, grid ** 2 + 1, width] - x = x + self.positional_embedding.to(x.dtype) - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - x = self.text_branch(x) - x = x.permute(1, 0, 2) # LND -> NLD - - x = self.ln_post(x[:, 0, :]) - - if self.proj is not None: - x = x @ self.proj - - return x - - -@dataclass -class CLAPVisionCfg: - layers: Union[Tuple[int, int, int, int], int] = 12 - width: int = 768 - patch_size: int = 16 - image_size: Union[Tuple[int, int], int] = 224 - timm_model_name: str = ( - None # a valid model name overrides layers, width, patch_size - ) - timm_model_pretrained: bool = ( - False # use (imagenet) pretrained weights for named model - ) - timm_pool: str = ( - "avg" # feature pooling for timm model ('abs_attn', 'rot_attn', 'avg', '') - ) - timm_proj: str = ( - "linear" # linear projection for timm model output ('linear', 'mlp', '') - ) - - -# Audio Config Class -@dataclass -class CLAPAudioCfp: - model_type: str = "PANN" - model_name: str = "Cnn14" - sample_rate: int = 48000 - # Param - audio_length: int = 1024 - window_size: int = 1024 - hop_size: int = 1024 - fmin: int = 50 - fmax: int = 14000 - class_num: int = 527 - mel_bins: int = 64 - 
clip_samples: int = 480000 - - -@dataclass -class CLAPTextCfg: - context_length: int - vocab_size: int - width: int - heads: int - layers: int - model_type: str - - -class CLAP(nn.Module): - def __init__( - self, - embed_dim: int, - audio_cfg: CLAPAudioCfp, - text_cfg: CLAPTextCfg, - quick_gelu: bool = False, - enable_fusion: bool = False, - fusion_type: str = "None", - joint_embed_shape: int = 512, - mlp_act: str = "relu", - ): - super().__init__() - if isinstance(audio_cfg, dict): - audio_cfg = CLAPAudioCfp(**audio_cfg) - if isinstance(text_cfg, dict): - text_cfg = CLAPTextCfg(**text_cfg) - - self.audio_cfg = audio_cfg - self.text_cfg = text_cfg - self.enable_fusion = enable_fusion - self.fusion_type = fusion_type - self.joint_embed_shape = joint_embed_shape - self.mlp_act = mlp_act - - self.context_length = text_cfg.context_length - - # OpenAI models are pretrained w/ QuickGELU but native nn.GELU is both faster and more - # memory efficient in recent PyTorch releases (>= 1.10). - # NOTE: timm models always use native GELU regardless of quick_gelu flag. - act_layer = QuickGELU if quick_gelu else nn.GELU - - if mlp_act == "relu": - mlp_act_layer = nn.ReLU() - elif mlp_act == "gelu": - mlp_act_layer = nn.GELU() - else: - raise NotImplementedError - - # audio branch - # audio branch parameters - if audio_cfg.model_type == "PANN": - self.audio_branch = create_pann_model(audio_cfg, enable_fusion, fusion_type) - elif audio_cfg.model_type == "HTSAT": - self.audio_branch = create_htsat_model( - audio_cfg, enable_fusion, fusion_type - ) - else: - logging.error(f"Model config for {audio_cfg.model_type} not found") - raise RuntimeError(f"Model config for {audio_cfg.model_type} not found.") - - # text branch - # text branch parameters - if text_cfg.model_type == "transformer": - self.text_branch = Transformer( - width=text_cfg.width, - layers=text_cfg.layers, - heads=text_cfg.heads, - act_layer=act_layer, - ) - self.vocab_size = text_cfg.vocab_size - self.token_embedding = nn.Embedding(text_cfg.vocab_size, text_cfg.width) - self.positional_embedding = nn.Parameter( - torch.empty(self.context_length, text_cfg.width) - ) - self.ln_final = LayerNorm(text_cfg.width) - self.text_transform = MLPLayers( - units=[ - self.joint_embed_shape, - self.joint_embed_shape, - self.joint_embed_shape, - ], - dropout=0.1, - ) - self.text_projection = nn.Sequential( - nn.Linear(text_cfg.width, self.joint_embed_shape), - mlp_act_layer, - nn.Linear(self.joint_embed_shape, self.joint_embed_shape), - ) - elif text_cfg.model_type == "bert": - self.text_branch = BertModel.from_pretrained("bert-base-uncased") - self.text_transform = MLPLayers( - units=[ - self.joint_embed_shape, - self.joint_embed_shape, - self.joint_embed_shape, - ], - dropout=0.1, - ) - self.text_projection = nn.Sequential( - nn.Linear(768, self.joint_embed_shape), - mlp_act_layer, - nn.Linear(self.joint_embed_shape, self.joint_embed_shape), - ) - elif text_cfg.model_type == "roberta": - self.text_branch = RobertaModel.from_pretrained("roberta-base") - self.text_transform = MLPLayers( - units=[ - self.joint_embed_shape, - self.joint_embed_shape, - self.joint_embed_shape, - ], - dropout=0.1, - ) - self.text_projection = nn.Sequential( - nn.Linear(768, self.joint_embed_shape), - mlp_act_layer, - nn.Linear(self.joint_embed_shape, self.joint_embed_shape), - ) - elif text_cfg.model_type == "bart": - self.text_branch = BartModel.from_pretrained("facebook/bart-base") - self.text_transform = MLPLayers( - units=[ - self.joint_embed_shape, - self.joint_embed_shape, - 
self.joint_embed_shape, - ], - dropout=0.1, - ) - self.text_projection = nn.Sequential( - nn.Linear(768, self.joint_embed_shape), - mlp_act_layer, - nn.Linear(self.joint_embed_shape, self.joint_embed_shape), - ) - else: - logging.error(f"Model config for {text_cfg.model_type} not found") - raise RuntimeError(f"Model config for {text_cfg.model_type} not found.") - self.text_branch_type = text_cfg.model_type - # text branch parameters - - # audio branch parameters - self.audio_transform = MLPLayers( - units=[ - self.joint_embed_shape, - self.joint_embed_shape, - self.joint_embed_shape, - ], - dropout=0.1, - ) - - # below here is text branch parameters - - # ============================================================================================================ - self.audio_projection = nn.Sequential( - nn.Linear(embed_dim, self.joint_embed_shape), - mlp_act_layer, - nn.Linear(self.joint_embed_shape, self.joint_embed_shape), - ) - - self.logit_scale_a = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - self.logit_scale_t = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - self.register_buffer("attn_mask", self.build_attention_mask(), persistent=False) - - self.init_text_branch_parameters() - - def init_text_branch_parameters(self): - if self.text_branch_type == "transformer": - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.positional_embedding, std=0.01) - proj_std = (self.text_branch.width**-0.5) * ( - (2 * self.text_branch.layers) ** -0.5 - ) - attn_std = self.text_branch.width**-0.5 - fc_std = (2 * self.text_branch.width) ** -0.5 - for block in self.text_branch.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - if self.text_branch_type == "bert" or self.text_branch_type == "roberta": - width = self.text_branch.embeddings.word_embeddings.weight.shape[-1] - elif self.text_branch_type == "bart": - width = self.text_branch.shared.weight.shape[-1] - else: - width = self.text_branch.width - nn.init.constant_(self.logit_scale_a, np.log(1 / 0.07)) - nn.init.constant_(self.logit_scale_t, np.log(1 / 0.07)) - - # deprecated - # if hasattr(self.visual, 'init_parameters'): - # self.visual.init_parameters() - - # if self.text_projection is not None: - # nn.init.normal_(self.text_projection, std=width**-0.5) - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - def encode_audio(self, audio, device): - return self.audio_branch( - audio, mixup_lambda=None, device=device - ) # mix lambda needs to add - - # def list_of_dict_of_tensor2dict_of_tensor(self, x, device): - # tmp = {} - # for k in x[0].keys(): - # tmp[k] = [] - # for i in range(len(x)): - # tmp[k].append(x[i][k][:77]) - # for k in x[0].keys(): - # tmp[k] = torch.tensor(tmp[k]).to(device=device, non_blocking=True) - # return tmp - - def encode_text(self, text, device): - if self.text_branch_type == "transformer": - text = text.to(device=device, non_blocking=True) - x = self.token_embedding(text) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding - x = x.permute(1, 0, 2) # NLD -> LND - x = self.text_branch(x, 
attn_mask=self.attn_mask) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = self.text_projection(x[torch.arange(x.shape[0]), text.argmax(dim=-1)]) - elif self.text_branch_type == "bert": - # text = self.list_of_dict_of_tensor2dict_of_tensor(text, device) - # text = BatchEncoding(text) - x = self.text_branch( - input_ids=text["input_ids"].to(device=device, non_blocking=True), - attention_mask=text["attention_mask"].to( - device=device, non_blocking=True - ), - token_type_ids=text["token_type_ids"].to( - device=device, non_blocking=True - ), - )["pooler_output"] - x = self.text_projection(x) - elif self.text_branch_type == "roberta": - x = self.text_branch( - input_ids=text["input_ids"].to(device=device, non_blocking=True), - attention_mask=text["attention_mask"].to( - device=device, non_blocking=True - ), - )["pooler_output"] - x = self.text_projection(x) - elif self.text_branch_type == "bart": - x = torch.mean( - self.text_branch( - input_ids=text["input_ids"].to(device=device, non_blocking=True), - attention_mask=text["attention_mask"].to( - device=device, non_blocking=True - ), - )["encoder_last_hidden_state"], - axis=1, - ) - x = self.text_projection(x) - else: - logging.error(f"Model type {self.text_branch_type} not found") - raise RuntimeError(f"Model type {self.text_branch_type} not found.") - return x - - def forward(self, audio, text, device=None): - """Forward audio and text into the CLAP - - Parameters - ---------- - audio: torch.Tensor (batch_size, audio_length) - the time-domain audio input / the batch of mel_spec and longer list. - text: torch.Tensor () // need to add - the text token input - """ - if device is None: - if audio is not None: - device = audio.device - elif text is not None: - device = text.device - if audio is None and text is None: - # a hack to get the logit scale - return self.logit_scale_a.exp(), self.logit_scale_t.exp() - elif audio is None: - return self.encode_text(text, device=device) - elif text is None: - return self.audio_projection( - self.encode_audio(audio, device=device)["embedding"] - ) - audio_features = self.audio_projection( - self.encode_audio(audio, device=device)["embedding"] - ) - audio_features = F.normalize(audio_features, dim=-1) - - text_features = self.encode_text(text, device=device) - # print("text_features", text_features) - # print("text_features.shape", text_features.shape) - # print("text_features.type", type(text_features)) - text_features = F.normalize(text_features, dim=-1) - - audio_features_mlp = self.audio_transform(audio_features) - text_features_mlp = self.text_transform(text_features) - # Four outputs: audio features (basic & MLP), text features (basic & MLP) - return ( - audio_features, - text_features, - audio_features_mlp, - text_features_mlp, - self.logit_scale_a.exp(), - self.logit_scale_t.exp(), - ) - - def get_logit_scale(self): - return self.logit_scale_a.exp(), self.logit_scale_t.exp() - - def get_text_embedding(self, data): - """Get the text embedding from the model - - Parameters - ---------- - data: torch.Tensor - a tensor of text embedding - - Returns - ---------- - text_embed: torch.Tensor - a tensor of text_embeds (N, D) - - """ - device = next(self.parameters()).device - for k in data: - data[k] = data[k].to(device) - if(len(data[k].size()) < 2): - data[k] = data[k].unsqueeze(0) - text_embeds = self.encode_text(data, device=device) - 
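        # Unit-normalize the projected text embedding so that comparisons against the
        # (also unit-normalized) audio embeddings reduce to cosine similarity.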
text_embeds = F.normalize(text_embeds, dim=-1) - - return text_embeds - - def get_audio_embedding(self, data): - """Get the audio embedding from the model - - Parameters - ---------- - data: a list of dict - the audio input dict list from 'get_audio_feature' method - - Returns - ---------- - audio_embed: torch.Tensor - a tensor of audio_embeds (N, D) - - """ - device = next(self.parameters()).device - input_dict = {} - keys = data[0].keys() - for k in keys: - input_dict[k] = torch.cat([d[k].unsqueeze(0) for d in data], dim=0).to( - device - ) - - audio_embeds = self.audio_projection( - self.encode_audio(input_dict, device=device)["embedding"] - ) - audio_embeds = F.normalize(audio_embeds, dim=-1) - - return audio_embeds - - def audio_infer(self, audio, hopsize=None, device=None): - """Forward one audio and produce the audio embedding - - Parameters - ---------- - audio: (audio_length) - the time-domain audio input, notice that it must be only one input - hopsize: int - the overlap hopsize as the sliding window - - Returns - ---------- - output_dict: { - key: [n, (embedding_shape)] if "HTS-AT" - or - key: [(embedding_shape)] if "PANN" - } - the list of key values of the audio branch - - """ - - assert not self.training, "the inference mode must be run at eval stage" - output_dict = {} - # PANN - if self.audio_cfg.model_type == "PANN": - audio_input = audio.unsqueeze(dim=0) - output_dict[key] = self.encode_audio(audio_input, device=device)[ - key - ].squeeze(dim=0) - elif self.audio_cfg.model_type == "HTSAT": - # repeat - audio_len = len(audio) - k = self.audio_cfg.clip_samples // audio_len - if k > 1: - audio = audio.repeat(k) - audio_len = len(audio) - - if hopsize is None: - hopsize = min(hopsize, audio_len) - - if audio_len > self.audio_cfg.clip_samples: - audio_input = [ - audio[pos : pos + self.audio_cfg.clip_samples].clone() - for pos in range( - 0, audio_len - self.audio_cfg.clip_samples, hopsize - ) - ] - audio_input.append(audio[-self.audio_cfg.clip_samples :].clone()) - audio_input = torch.stack(audio_input) - output_dict[key] = self.encode_audio(audio_input, device=device)[key] - else: - audio_input = audio.unsqueeze(dim=0) - output_dict[key] = self.encode_audio(audio_input, device=device)[ - key - ].squeeze(dim=0) - - return output_dict - - -def convert_weights_to_fp16(model: nn.Module): - """Convert applicable model parameters to fp16""" - - def _convert_weights_to_fp16(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - if isinstance(l, nn.MultiheadAttention): - for attr in [ - *[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], - "in_proj_bias", - "bias_k", - "bias_v", - ]: - tensor = getattr(l, attr) - if tensor is not None: - tensor.data = tensor.data.half() - - for name in ["text_projection", "proj"]: - if hasattr(l, name): - attr = getattr(l, name) - if attr is not None: - attr.data = attr.data.half() - - model.apply(_convert_weights_to_fp16) - - -# Ignore the state dict of the vision part -def build_model_from_openai_state_dict( - state_dict: dict, model_cfg, enable_fusion: bool = False, fusion_type: str = "None" -): - - embed_dim = model_cfg["embed_dim"] - audio_cfg = model_cfg["audio_cfg"] - text_cfg = model_cfg["text_cfg"] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - 
transformer_layers = len( - set( - k.split(".")[2] - for k in state_dict - if k.startswith(f"transformer.resblocks") - ) - ) - - audio_cfg = CLAPAudioCfp(**audio_cfg) - text_cfg = CLAPTextCfg(**text_cfg) - - model = CLAP( - embed_dim, - audio_cfg=audio_cfg, - text_cfg=text_cfg, - quick_gelu=True, # OpenAI models were trained with QuickGELU - enable_fusion=enable_fusion, - fusion_type=fusion_type, - ) - state_dict["logit_scale_a"] = state_dict["logit_scale"] - state_dict["logit_scale_t"] = state_dict["logit_scale"] - pop_keys = list(state_dict.keys())[::] - # pop the visual branch saved weights - for key in pop_keys: - if key.startswith("visual."): - state_dict.pop(key, None) - - for key in ["logit_scale", "input_resolution", "context_length", "vocab_size"]: - state_dict.pop(key, None) - - # not use fp16 - # convert_weights_to_fp16(model) - model.load_state_dict(state_dict, strict=False) - return model.eval() - - -def trace_model(model, batch_size=256, device=torch.device("cpu")): - model.eval() - audio_length = model.audio_cfg.audio_length - example_audio = torch.ones((batch_size, audio_length), device=device) - example_text = torch.zeros( - (batch_size, model.context_length), dtype=torch.int, device=device - ) - model = torch.jit.trace_module( - model, - inputs=dict( - forward=(example_audio, example_text), - encode_text=(example_text,), - encode_image=(example_audio,), - ), - ) - model.audio_cfg.audio_length = audio_length # Question: what does this do? - return model diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/training/main.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/training/main.py deleted file mode 100644 index 3b563a5d001be7adfbe779dee7ad8ac49aadc50d..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/training/main.py +++ /dev/null @@ -1,596 +0,0 @@ -from inspect import getargs -import logging -import os -import random -from datetime import datetime -import bisect -import copy -import numpy as np -import torch -import torch.backends.cudnn as cudnn -from torch import optim -from torch.cuda.amp import GradScaler -import faulthandler -import pathlib - -try: - import wandb -except ImportError: - wandb = None - -try: - import torch.utils.tensorboard as tensorboard -except ImportError: - tensorboard = None - -try: - import horovod.torch as hvd -except ImportError: - hvd = None - -from open_clip import create_model_and_transforms, trace_model, create_model -from training.data import get_data -from training.distributed import is_master, init_distributed_device, world_info_from_env -from training.logger import setup_logging -from training.params import parse_args -from training.scheduler import cosine_lr -from training.train import train_one_epoch, evaluate -from open_clip.utils import dataset_split, get_optimizer - - -def maintain_ckpts(args, startidx, all_idx_len): - for i in reversed(range(startidx, all_idx_len)): - if os.path.exists(os.path.join(args.checkpoint_path, f"epoch_top_{i}.pt")): - os.rename( - os.path.join(args.checkpoint_path, f"epoch_top_{i}.pt"), - os.path.join(args.checkpoint_path, f"epoch_top_{i+1}.pt"), - ) - if os.path.exists( - os.path.join(args.checkpoint_path, f"epoch_top_{all_idx_len}.pt") - ): - os.remove(os.path.join(args.checkpoint_path, f"epoch_top_{all_idx_len}.pt")) - return - - -def update_top_k_performance( - new_metrics_inputs, current_top_k_ckpt_metrics, args, ckpt, bignumbetter=True -): - """ - Record the top-k performance 
of the current epoch. - current_top_k_metrics is a dictionary of the form: {1: top_1_ckpt_measure, 2: top_2_ckpt_measure, ...} - """ - if isinstance(new_metrics_inputs, (list, tuple)): - new_metrics_inputs = np.mean(new_metrics_inputs) - return update_top_k_performance( - new_metrics_inputs, - current_top_k_ckpt_metrics, - args=args, - ckpt=ckpt, - bignumbetter=bignumbetter, - ) - elif isinstance(new_metrics_inputs, dict): - new_metrics_inputs = np.mean(list(new_metrics_inputs.values())) - return update_top_k_performance( - new_metrics_inputs, - current_top_k_ckpt_metrics, - args=args, - ckpt=ckpt, - bignumbetter=bignumbetter, - ) - elif isinstance(new_metrics_inputs, (float, int)): - update_flag = {k: False for k in current_top_k_ckpt_metrics.keys()} - sorted_keys = sorted(current_top_k_ckpt_metrics.keys()) - sorted_values = sorted( - current_top_k_ckpt_metrics.values(), reverse=bignumbetter - ) - sorted_values_ = copy.deepcopy(sorted_values) - sorted_values.append(new_metrics_inputs) - sorted_values = sorted(sorted_values, reverse=bignumbetter) - sorted_values = sorted_values[:-1] - - if sorted_values == sorted_values_: - return current_top_k_ckpt_metrics, new_metrics_inputs - else: - for i in range(len(sorted_keys)): - if current_top_k_ckpt_metrics[sorted_keys[i]] != sorted_values[i]: - current_top_k_ckpt_metrics[sorted_keys[i]] = sorted_values[i] - update_flag[sorted_keys[i]] = True - for i in range(len(update_flag)): - if update_flag[i]: - maintain_ckpts(args, i, len(sorted_keys)) - torch.save( - ckpt, - os.path.join(args.checkpoint_path, f"epoch_top_{i}.pt"), - ) - break - return current_top_k_ckpt_metrics, new_metrics_inputs - - -# def updateifNone(a, b): -# a = b if None else a -# return a - - -def is_pretrained_params(n): - return ( - n.startswith("transformer") - or n in ["positional_embedding", "text_projection"] - or n.startswith("token_embedding") - or n.startswith("ln_final") - or n.startswith("logit_scale_t") - ) - - -def random_seed(seed=42, rank=0): - torch.manual_seed(seed + rank) - np.random.seed(seed + rank) - random.seed(seed + rank) - - -def main(): - args = parse_args() - # sanitize model name for filesystem / uri use, easier if we don't use / in name as a rule? - args.amodel = args.amodel.replace("/", "-") - # download sizes.json file - - # (yusong): the below two lines are for debug - # print("setting up faulthandler") - # faulthandler.register(10) - - random.seed(args.seed) - torch.manual_seed(args.seed) - torch.cuda.manual_seed(args.seed) - torch.cuda.manual_seed_all(args.seed) - np.random.seed(args.seed) - if args.tmodel == "bert" or args.tmodel == "roberta" or args.tmodel == "bart": - assert ( - args.pretrained == "" or args.pretrained is None - ), "bert/roberta/bart text encoder does not support pretrained models." 
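# --- Illustrative sketch (standalone, not taken from the file above): the
# update_top_k_performance()/maintain_ckpts() helpers defined earlier keep a dict
# {rank: metric} in which rank 0 always holds the best value seen so far, and rename
# the matching epoch_top_{i}.pt checkpoints whenever a new metric pushes the others
# down. A simplified version of just the ranking update, assuming "bigger is better":

def update_top_k(top_k, new_value):
    """Insert new_value into the descending top-k metric dict, if it qualifies."""
    values = sorted(top_k.values(), reverse=True)
    for rank, old in enumerate(values):
        if new_value > old:
            # shift the lower-ranked entries down one slot and drop the last one
            values = values[:rank] + [new_value] + values[rank:-1]
            break
    return dict(enumerate(values))

top_k = {0: 0.0, 1: 0.0, 2: 0.0}
for metric in (0.2, 0.5, 0.4, 0.6):
    top_k = update_top_k(top_k, metric)
print(top_k)  # {0: 0.6, 1: 0.5, 2: 0.4}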
- - # get the name of the experiments - if args.name is None: - args.name = "-".join( - [ - datetime.now().strftime("%Y_%m_%d-%H_%M_%S"), - f"model_{args.amodel}", - f"lr_{args.lr}", - f"b_{args.batch_size}", - f"j_{args.workers}", - f"p_{args.precision}", - ] - ) - - # discover initial world args early so we can log properly - args.distributed = False - args.local_rank, args.rank, args.world_size = world_info_from_env() - - if args.remotedata and is_master(args): - for dataset_name in args.datasetnames: - for split in dataset_split[dataset_name]: - if not os.path.exists(f"./json_files/{dataset_name}/{split}"): - os.makedirs(f"./json_files/{dataset_name}/{split}") - os.system( - f"aws s3 cp s3://s-laion-audio/webdataset_tar/{dataset_name}/{split}/sizes.json ./json_files/{dataset_name}/{split}/sizes.json" - ) - - args.log_path = None - if is_master(args, local=args.log_local): - log_base_path = os.path.join(args.logs, args.name) - os.makedirs(log_base_path, exist_ok=True) - log_filename = f"out-{args.rank}" if args.log_local else "out.log" - args.log_path = os.path.join(log_base_path, log_filename) - if os.path.exists(args.log_path): - print( - "Error. Experiment already exists. Use --name {} to specify a new experiment." - ) - return -1 - - # Set logger - args.log_level = logging.DEBUG if args.debug else logging.INFO - setup_logging(args.log_path, args.log_level) - - # fully initialize distributed device environment - device = init_distributed_device(args) - - args.wandb = "wandb" in args.report_to or "all" in args.report_to - args.tensorboard = "tensorboard" in args.report_to or "all" in args.report_to - if is_master(args): - args.tensorboard_path = ( - os.path.join(args.logs, args.name, "tensorboard") - if args.tensorboard - else "" - ) - args.checkpoint_path = os.path.join(args.logs, args.name, "checkpoints") - for dirname in [args.tensorboard_path, args.checkpoint_path]: - if dirname: - os.makedirs(dirname, exist_ok=True) - else: - args.tensorboard_path = "" - args.checkpoint_path = "" - - if args.copy_codebase: - copy_codebase(args) - - assert args.precision in ["amp", "fp16", "fp32"] - if args.precision == "fp16": - logging.warning( - "It is recommended to use AMP mixed-precision instead of FP16. " - "FP16 support needs further verification and tuning, especially for train." - ) - - if args.horovod: - logging.info( - f"Running in horovod mode with multiple processes / nodes. Device: {args.device}." - f"Process (global: {args.rank}, local {args.local_rank}), total {args.world_size}." - ) - elif args.distributed: - logging.info( - f"Running in distributed mode with multiple processes. Device: {args.device}." - f"Process (global: {args.rank}, local {args.local_rank}), total {args.world_size}." - ) - else: - logging.info(f"Running with a single process. 
Device {args.device}.") - - logging.info(f"openai cache dir: {os.path.expanduser(args.openai_model_cache_dir)}") - - model, model_cfg = create_model( - args.amodel, - args.tmodel, - args.pretrained, - precision=args.precision, - device=device, - jit=args.torchscript, - force_quick_gelu=args.force_quick_gelu, - openai_model_cache_dir=os.path.expanduser(args.openai_model_cache_dir), - skip_params=True, - pretrained_audio=args.pretrained_audio, - pretrained_text=args.pretrained_text, - enable_fusion=args.enable_fusion, - fusion_type=args.fusion_type, - ) - - if args.horovod: - with torch.no_grad(): - for param in model.parameters(): - param.set_(param.contiguous()) - - if args.trace: - model = trace_model(model, batch_size=args.batch_size, device=device) - - if is_master(args): - logging.info("Model:") - logging.info(f"{str(model)}") - logging.info("Params:") - params_file = os.path.join(args.logs, args.name, "params.txt") - with open(params_file, "w") as f: - for name in sorted(vars(args)): - val = getattr(args, name) - logging.info(f" {name}: {val}") - f.write(f"{name}: {val}\n") - - if args.distributed and not args.horovod: - if args.use_bn_sync: - model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model) - ddp_args = {} - if args.ddp_static_graph: - # this doesn't exist in older PyTorch, arg only added if enabled - ddp_args["static_graph"] = True - model = torch.nn.parallel.DistributedDataParallel( - model, device_ids=[device], find_unused_parameters=True, **ddp_args - ) - - data = get_data(args, model_cfg) - assert len(data), "At least one train or eval dataset must be specified." - if args.trace: - assert "train" not in data, "Cannot train with traced model" - - exclude = ( - lambda n, p: p.ndim < 2 - or "bn" in n - or "ln" in n - or "bias" in n - or "logit_scale" in n - ) - include = lambda n, p: not exclude(n, p) - - named_parameters = list(model.named_parameters()) - - # freeze text encoder - text_freeze_parameters = [p for n, p in named_parameters if "text_branch" in n] - - if args.freeze_text: - print("Freeze Text!!!!") - for k in text_freeze_parameters: - k.requires_grad = False - - gain_or_bias_params = [ - p for n, p in named_parameters if exclude(n, p) and p.requires_grad - ] - rest_params = [p for n, p in named_parameters if include(n, p) and p.requires_grad] - - # set wd-related params to 0 if use adam optimizer - if args.optimizer == "adam": - args.wd = 0 - args.wd_pretrained = 0 - args.wd_new = 0 - - if args.train_data is None: - optimizer = None - scheduler = None - else: - total_steps = data["train"].dataloader.num_batches * args.epochs - - if args.split_opt: - for x in ["lr", "beta1", "beta2", "eps", "wd"]: - for y in ["_new", "_pretrained"]: - if getattr(args, x + y) is None: - setattr(args, x + y, getattr(args, x)) - - gain_or_bias_pretrained_params = [ - p - for n, p in named_parameters - if (exclude(n, p) and p.requires_grad) and is_pretrained_params(n) - ] - rest_pretrained_params = [ - p - for n, p in named_parameters - if (include(n, p) and p.requires_grad) and is_pretrained_params(n) - ] - gain_or_bias_new_params = [ - p - for n, p in named_parameters - if (exclude(n, p) and p.requires_grad) and (not is_pretrained_params(n)) - ] - rest_new_params = [ - p - for n, p in named_parameters - if (include(n, p) and p.requires_grad) and (not is_pretrained_params(n)) - ] - pretrained_params_optimizer = get_optimizer( - [ - {"params": gain_or_bias_pretrained_params, "weight_decay": 0.0}, - { - "params": rest_pretrained_params, - "weight_decay": args.wd_pretrained, - }, 
- ], - lr=args.lr_pretrained, - betas=(args.beta1_pretrained, args.beta2_pretrained), - eps=args.eps_pretrained, - momentum=args.momentum_pretrained, - optimizer_name=args.optimizer, - ) - pretrained_params_scheduler = cosine_lr( - pretrained_params_optimizer, - args.lr_pretrained, - args.warmup, - total_steps, - ) - new_params_optimizer = get_optimizer( - [ - {"params": gain_or_bias_new_params, "weight_decay": 0.0}, - {"params": rest_new_params, "weight_decay": args.wd_new}, - ], - lr=args.lr_new, - betas=(args.beta1_new, args.beta2_new), - eps=args.eps_new, - momentum=args.momentum_new, - optimizer_name=args.optimizer, - ) - - new_params_scheduler = cosine_lr( - new_params_optimizer, args.lr_new, args.warmup, total_steps - ) - - optimizer = { - "pretrained": pretrained_params_optimizer, - "new": new_params_optimizer, - } - scheduler = { - "pretrained": pretrained_params_scheduler, - "new": new_params_scheduler, - } - - if args.horovod: - pretrained_params_optimizer = hvd.DistributedOptimizer( - pretrained_params_optimizer, - named_parameters=model.named_parameters(), - ) - new_params_optimizer = hvd.DistributedOptimizer( - new_params_optimizer, named_parameters=model.named_parameters() - ) - hvd.broadcast_parameters(model.state_dict(), root_rank=0) - hvd.broadcast_optimizer_state(pretrained_params_optimizer, root_rank=0) - hvd.broadcast_optimizer_state(new_params_optimizer, root_rank=0) - else: - optimizer = get_optimizer( - [ - {"params": gain_or_bias_params, "weight_decay": 0.0}, - {"params": rest_params, "weight_decay": args.wd}, - ], - lr=args.lr, - betas=(args.beta1, args.beta2), - eps=args.eps, - momentum=args.momentum, - optimizer_name=args.optimizer, - ) - - scheduler = cosine_lr(optimizer, args.lr, args.warmup, total_steps) - - if args.horovod: - optimizer = hvd.DistributedOptimizer( - optimizer, named_parameters=model.named_parameters() - ) - hvd.broadcast_parameters(model.state_dict(), root_rank=0) - hvd.broadcast_optimizer_state(optimizer, root_rank=0) - - scaler = GradScaler() if args.precision == "amp" else None - - # optionally resume from a checkpoint - start_epoch = 0 - if args.resume is not None: - if os.path.isfile(args.resume): - checkpoint = torch.load(args.resume, map_location=device) - if "epoch" in checkpoint: - # resuming a train checkpoint w/ epoch and optimizer state - start_epoch = checkpoint["epoch"] - sd = checkpoint["state_dict"] - if not args.distributed and next(iter(sd.items()))[0].startswith( - "module" - ): - sd = {k[len("module.") :]: v for k, v in sd.items()} - model.load_state_dict(sd) - if args.split_opt: - if optimizer is not None: - for k, o_ in optimizer.items(): - o_.load_state_dict(checkpoint[k + "_" + "optimizer"]) - if optimizer is not None: - optimizer.load_state_dict(checkpoint["optimizer"]) - if scaler is not None and "scaler" in checkpoint: - scaler.load_state_dict(checkpoint["scaler"]) - logging.info( - f"=> resuming checkpoint '{args.resume}' (epoch {start_epoch})" - ) - else: - # loading a bare (model only) checkpoint for fine-tune or evaluation - model.load_state_dict(checkpoint) - logging.info( - f"=> loaded checkpoint '{args.resume}' (epoch {start_epoch})" - ) - if args.freeze_text: - print("Freeze Text!!!!") - for k in text_freeze_parameters: - k.requires_grad = False - else: - logging.info("=> no checkpoint found at '{}'".format(args.resume)) - - cudnn.benchmark = True - cudnn.deterministic = False - - # determine if this worker should save logs and checkpoints. 
only do so if it is rank == 0 - args.save_logs = args.logs and args.logs.lower() != "none" and is_master(args) - writer = None - if args.save_logs and args.tensorboard: - assert tensorboard is not None, "Please install tensorboard." - writer = tensorboard.SummaryWriter(args.tensorboard_path) - - if args.wandb and is_master(args): - assert wandb is not None, "Please install wandb." - logging.debug("Starting wandb.") - args.train_sz = data["train"].dataloader.num_samples - if args.val_data is not None: - args.val_sz = data["val"].dataloader.num_samples - # you will have to configure this for your project! - wandb.init( - project="clap", - notes=args.wandb_notes, - name=args.wandb_notes, - tags=[], - config=vars(args), - ) - if args.debug: - wandb.watch(model, log="all") - wandb.save(params_file) - logging.debug("Finished loading wandb.") - - if "train" not in data: - evaluate(model, data, start_epoch, args, writer) - return - elif start_epoch == 0 and "val" in data and not args.no_eval: - evaluate(model, data, 0, args, writer) - # print(f'rank {args.rank}, Start First Evaluation')# (yusong): for debug - if args.save_top_performance: - current_top_k_ckpt_metrics = { - i: 0 for i in range(args.save_top_performance) - } # initialize the top-k metric for ckpts to 0 - - # print(f'rank {args.rank}, Start Training') # (yusong): for debug - for epoch in range(start_epoch, args.epochs): - # freeze the text param after (include) args.freeze_text_after, this is -1 by default - if epoch == args.freeze_text_after: - print("Text pretrained parameters are freezed since this epoch.") - for k in text_freeze_parameters: - k.requires_grad = False - if is_master(args): - logging.info(f"Start epoch {epoch}") - - train_one_epoch(model, data, epoch, optimizer, scaler, scheduler, args, writer) - completed_epoch = epoch + 1 - - if ( - any(v in data for v in ("val", "imagenet-val", "imagenet-v2")) - and not args.no_eval - ): - metrics = evaluate(model, data, completed_epoch, args, writer) - if args.save_top_performance: - top_k_dataset = args.top_k_checkpoint_select_dataset - top_k_metric = args.top_k_checkpoint_select_metric - filtered_metrics = [ - v - for k, v in metrics.items() - if top_k_metric in k and top_k_dataset in k - ] # check all R@10 metrics (all dataset) and use it to update the ckpt - # Saving checkpoints. - if args.save_logs: - if args.split_opt: - opt_dict = { - k + "_" + "optimizer": v.state_dict() for k, v in optimizer.items() - } - else: - opt_dict = {"optimizer": optimizer.state_dict()} - checkpoint_dict = { - "epoch": completed_epoch, - "name": args.name, - "state_dict": model.state_dict(), - } - checkpoint_dict.update(opt_dict) - if scaler is not None: - checkpoint_dict["scaler"] = scaler.state_dict() - - if completed_epoch == args.epochs or ( - args.save_frequency > 0 and (completed_epoch % args.save_frequency) == 0 - ): - torch.save( - checkpoint_dict, - os.path.join(args.checkpoint_path, f"epoch_{completed_epoch}.pt"), - ) - if args.save_most_recent: - torch.save( - checkpoint_dict, - os.path.join(args.checkpoint_path, f"epoch_latest.pt"), - ) - if args.save_top_performance and not args.no_eval: - update_top_k_performance( - filtered_metrics, - current_top_k_ckpt_metrics, - args, - checkpoint_dict, - bignumbetter=True, - ) - - if args.wandb and is_master(args): - wandb.finish() - - -def copy_codebase(args): - from shutil import copytree, ignore_patterns - - new_code_path = os.path.join(args.logs, args.name, "code") - if os.path.exists(new_code_path): - print( - f"Error. 
Experiment already exists at {new_code_path}. Use --name to specify a new experiment." - ) - return -1 - print(f"Copying codebase to {new_code_path}") - current_code_path = os.path.realpath(__file__) - for _ in range(3): - current_code_path = os.path.dirname(current_code_path) - copytree( - current_code_path, new_code_path, ignore=ignore_patterns("log", "logs", "wandb") - ) - print("Done copying code.") - return 1 - - -if __name__ == "__main__": - main() diff --git a/spaces/candlend/vits-hoshimi/sovits/vdecoder/hifigan/nvSTFT.py b/spaces/candlend/vits-hoshimi/sovits/vdecoder/hifigan/nvSTFT.py deleted file mode 100644 index 88597d62a505715091f9ba62d38bf0a85a31b95a..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/sovits/vdecoder/hifigan/nvSTFT.py +++ /dev/null @@ -1,111 +0,0 @@ -import math -import os -os.environ["LRU_CACHE_CAPACITY"] = "3" -import random -import torch -import torch.utils.data -import numpy as np -import librosa -from librosa.util import normalize -from librosa.filters import mel as librosa_mel_fn -from scipy.io.wavfile import read -import soundfile as sf - -def load_wav_to_torch(full_path, target_sr=None, return_empty_on_exception=False): - sampling_rate = None - try: - data, sampling_rate = sf.read(full_path, always_2d=True)# than soundfile. - except Exception as ex: - print(f"'{full_path}' failed to load.\nException:") - print(ex) - if return_empty_on_exception: - return [], sampling_rate or target_sr or 32000 - else: - raise Exception(ex) - - if len(data.shape) > 1: - data = data[:, 0] - assert len(data) > 2# check duration of audio file is > 2 samples (because otherwise the slice operation was on the wrong dimension) - - if np.issubdtype(data.dtype, np.integer): # if audio data is type int - max_mag = -np.iinfo(data.dtype).min # maximum magnitude = min possible value of intXX - else: # if audio data is type fp32 - max_mag = max(np.amax(data), -np.amin(data)) - max_mag = (2**31)+1 if max_mag > (2**15) else ((2**15)+1 if max_mag > 1.01 else 1.0) # data should be either 16-bit INT, 32-bit INT or [-1 to 1] float32 - - data = torch.FloatTensor(data.astype(np.float32))/max_mag - - if (torch.isinf(data) | torch.isnan(data)).any() and return_empty_on_exception:# resample will crash with inf/NaN inputs. 
return_empty_on_exception will return empty arr instead of except - return [], sampling_rate or target_sr or 32000 - if target_sr is not None and sampling_rate != target_sr: - data = torch.from_numpy(librosa.core.resample(data.numpy(), orig_sr=sampling_rate, target_sr=target_sr)) - sampling_rate = target_sr - - return data, sampling_rate - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - -def dynamic_range_decompression(x, C=1): - return np.exp(x) / C - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - -class STFT(): - def __init__(self, sr=22050, n_mels=80, n_fft=1024, win_size=1024, hop_length=256, fmin=20, fmax=11025, clip_val=1e-5): - self.target_sr = sr - - self.n_mels = n_mels - self.n_fft = n_fft - self.win_size = win_size - self.hop_length = hop_length - self.fmin = fmin - self.fmax = fmax - self.clip_val = clip_val - self.mel_basis = {} - self.hann_window = {} - - def get_mel(self, y, center=False): - sampling_rate = self.target_sr - n_mels = self.n_mels - n_fft = self.n_fft - win_size = self.win_size - hop_length = self.hop_length - fmin = self.fmin - fmax = self.fmax - clip_val = self.clip_val - - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - if fmax not in self.mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax) - self.mel_basis[str(fmax)+'_'+str(y.device)] = torch.from_numpy(mel).float().to(y.device) - self.hann_window[str(y.device)] = torch.hann_window(self.win_size).to(y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_length)/2), int((n_fft-hop_length)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_length, win_length=win_size, window=self.hann_window[str(y.device)], - center=center, pad_mode='reflect', normalized=False, onesided=True) - # print(111,spec) - spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9)) - # print(222,spec) - spec = torch.matmul(self.mel_basis[str(fmax)+'_'+str(y.device)], spec) - # print(333,spec) - spec = dynamic_range_compression_torch(spec, clip_val=clip_val) - # print(444,spec) - return spec - - def __call__(self, audiopath): - audio, sr = load_wav_to_torch(audiopath, target_sr=self.target_sr) - spect = self.get_mel(audio.unsqueeze(0)).squeeze(0) - return spect - -stft = STFT() diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/samplers/distributed_sampler.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/samplers/distributed_sampler.py deleted file mode 100644 index a098e6ac07c1b193fddcb69e6e54aced82e6081c..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/samplers/distributed_sampler.py +++ /dev/null @@ -1,278 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import itertools -import logging -import math -from collections import defaultdict -from typing import Optional -import torch -from torch.utils.data.sampler import Sampler - -from detectron2.utils import comm - -logger = logging.getLogger(__name__) - - -class TrainingSampler(Sampler): - """ - In training, we only care about the "infinite stream" of training data. 
- So this sampler produces an infinite stream of indices and - all workers cooperate to correctly shuffle the indices and sample different indices. - - The samplers in each worker effectively produces `indices[worker_id::num_workers]` - where `indices` is an infinite stream of indices consisting of - `shuffle(range(size)) + shuffle(range(size)) + ...` (if shuffle is True) - or `range(size) + range(size) + ...` (if shuffle is False) - - Note that this sampler does not shard based on pytorch DataLoader worker id. - A sampler passed to pytorch DataLoader is used only with map-style dataset - and will not be executed inside workers. - But if this sampler is used in a way that it gets execute inside a dataloader - worker, then extra work needs to be done to shard its outputs based on worker id. - This is required so that workers don't produce identical data. - :class:`ToIterableDataset` implements this logic. - This note is true for all samplers in detectron2. - """ - - def __init__(self, size: int, shuffle: bool = True, seed: Optional[int] = None): - """ - Args: - size (int): the total number of data of the underlying dataset to sample from - shuffle (bool): whether to shuffle the indices or not - seed (int): the initial seed of the shuffle. Must be the same - across all workers. If None, will use a random seed shared - among workers (require synchronization among all workers). - """ - if not isinstance(size, int): - raise TypeError(f"TrainingSampler(size=) expects an int. Got type {type(size)}.") - if size <= 0: - raise ValueError(f"TrainingSampler(size=) expects a positive int. Got {size}.") - self._size = size - self._shuffle = shuffle - if seed is None: - seed = comm.shared_random_seed() - self._seed = int(seed) - - self._rank = comm.get_rank() - self._world_size = comm.get_world_size() - - def __iter__(self): - start = self._rank - yield from itertools.islice(self._infinite_indices(), start, None, self._world_size) - - def _infinite_indices(self): - g = torch.Generator() - g.manual_seed(self._seed) - while True: - if self._shuffle: - yield from torch.randperm(self._size, generator=g).tolist() - else: - yield from torch.arange(self._size).tolist() - - -class RandomSubsetTrainingSampler(TrainingSampler): - """ - Similar to TrainingSampler, but only sample a random subset of indices. - This is useful when you want to estimate the accuracy vs data-number curves by - training the model with different subset_ratio. - """ - - def __init__( - self, - size: int, - subset_ratio: float, - shuffle: bool = True, - seed_shuffle: Optional[int] = None, - seed_subset: Optional[int] = None, - ): - """ - Args: - size (int): the total number of data of the underlying dataset to sample from - subset_ratio (float): the ratio of subset data to sample from the underlying dataset - shuffle (bool): whether to shuffle the indices or not - seed_shuffle (int): the initial seed of the shuffle. Must be the same - across all workers. If None, will use a random seed shared - among workers (require synchronization among all workers). - seed_subset (int): the seed to randomize the subset to be sampled. - Must be the same across all workers. If None, will use a random seed shared - among workers (require synchronization among all workers). 
- """ - super().__init__(size=size, shuffle=shuffle, seed=seed_shuffle) - - assert 0.0 < subset_ratio <= 1.0 - self._size_subset = int(size * subset_ratio) - assert self._size_subset > 0 - if seed_subset is None: - seed_subset = comm.shared_random_seed() - self._seed_subset = int(seed_subset) - - # randomly generate the subset indexes to be sampled from - g = torch.Generator() - g.manual_seed(self._seed_subset) - indexes_randperm = torch.randperm(self._size, generator=g) - self._indexes_subset = indexes_randperm[: self._size_subset] - - logger.info("Using RandomSubsetTrainingSampler......") - logger.info(f"Randomly sample {self._size_subset} data from the original {self._size} data") - - def _infinite_indices(self): - g = torch.Generator() - g.manual_seed(self._seed) # self._seed equals seed_shuffle from __init__() - while True: - if self._shuffle: - # generate a random permutation to shuffle self._indexes_subset - randperm = torch.randperm(self._size_subset, generator=g) - yield from self._indexes_subset[randperm].tolist() - else: - yield from self._indexes_subset.tolist() - - -class RepeatFactorTrainingSampler(Sampler): - """ - Similar to TrainingSampler, but a sample may appear more times than others based - on its "repeat factor". This is suitable for training on class imbalanced datasets like LVIS. - """ - - def __init__(self, repeat_factors, *, shuffle=True, seed=None): - """ - Args: - repeat_factors (Tensor): a float vector, the repeat factor for each indice. When it's - full of ones, it is equivalent to ``TrainingSampler(len(repeat_factors), ...)``. - shuffle (bool): whether to shuffle the indices or not - seed (int): the initial seed of the shuffle. Must be the same - across all workers. If None, will use a random seed shared - among workers (require synchronization among all workers). - """ - self._shuffle = shuffle - if seed is None: - seed = comm.shared_random_seed() - self._seed = int(seed) - - self._rank = comm.get_rank() - self._world_size = comm.get_world_size() - - # Split into whole number (_int_part) and fractional (_frac_part) parts. - self._int_part = torch.trunc(repeat_factors) - self._frac_part = repeat_factors - self._int_part - - @staticmethod - def repeat_factors_from_category_frequency(dataset_dicts, repeat_thresh): - """ - Compute (fractional) per-image repeat factors based on category frequency. - The repeat factor for an image is a function of the frequency of the rarest - category labeled in that image. The "frequency of category c" in [0, 1] is defined - as the fraction of images in the training set (without repeats) in which category c - appears. - See :paper:`lvis` (>= v2) Appendix B.2. - - Args: - dataset_dicts (list[dict]): annotations in Detectron2 dataset format. - repeat_thresh (float): frequency threshold below which data is repeated. - If the frequency is half of `repeat_thresh`, the image will be - repeated twice. - - Returns: - torch.Tensor: - the i-th element is the repeat factor for the dataset image at index i. - """ - # 1. For each category c, compute the fraction of images that contain it: f(c) - category_freq = defaultdict(int) - for dataset_dict in dataset_dicts: # For each image (without repeats) - cat_ids = {ann["category_id"] for ann in dataset_dict["annotations"]} - for cat_id in cat_ids: - category_freq[cat_id] += 1 - num_images = len(dataset_dicts) - for k, v in category_freq.items(): - category_freq[k] = v / num_images - - # 2. 
For each category c, compute the category-level repeat factor: - # r(c) = max(1, sqrt(t / f(c))) - category_rep = { - cat_id: max(1.0, math.sqrt(repeat_thresh / cat_freq)) - for cat_id, cat_freq in category_freq.items() - } - - # 3. For each image I, compute the image-level repeat factor: - # r(I) = max_{c in I} r(c) - rep_factors = [] - for dataset_dict in dataset_dicts: - cat_ids = {ann["category_id"] for ann in dataset_dict["annotations"]} - rep_factor = max({category_rep[cat_id] for cat_id in cat_ids}, default=1.0) - rep_factors.append(rep_factor) - - return torch.tensor(rep_factors, dtype=torch.float32) - - def _get_epoch_indices(self, generator): - """ - Create a list of dataset indices (with repeats) to use for one epoch. - - Args: - generator (torch.Generator): pseudo random number generator used for - stochastic rounding. - - Returns: - torch.Tensor: list of dataset indices to use in one epoch. Each index - is repeated based on its calculated repeat factor. - """ - # Since repeat factors are fractional, we use stochastic rounding so - # that the target repeat factor is achieved in expectation over the - # course of training - rands = torch.rand(len(self._frac_part), generator=generator) - rep_factors = self._int_part + (rands < self._frac_part).float() - # Construct a list of indices in which we repeat images as specified - indices = [] - for dataset_index, rep_factor in enumerate(rep_factors): - indices.extend([dataset_index] * int(rep_factor.item())) - return torch.tensor(indices, dtype=torch.int64) - - def __iter__(self): - start = self._rank - yield from itertools.islice(self._infinite_indices(), start, None, self._world_size) - - def _infinite_indices(self): - g = torch.Generator() - g.manual_seed(self._seed) - while True: - # Sample indices with repeats determined by stochastic rounding; each - # "epoch" may have a slightly different size due to the rounding. - indices = self._get_epoch_indices(g) - if self._shuffle: - randperm = torch.randperm(len(indices), generator=g) - yield from indices[randperm].tolist() - else: - yield from indices.tolist() - - -class InferenceSampler(Sampler): - """ - Produce indices for inference across all workers. - Inference needs to run on the __exact__ set of samples, - therefore when the total number of samples is not divisible by the number of workers, - this sampler produces different number of samples on different workers. 
- """ - - def __init__(self, size: int): - """ - Args: - size (int): the total number of data of the underlying dataset to sample from - """ - self._size = size - assert size > 0 - self._rank = comm.get_rank() - self._world_size = comm.get_world_size() - self._local_indices = self._get_local_indices(size, self._world_size, self._rank) - - @staticmethod - def _get_local_indices(total_size, world_size, rank): - shard_size = total_size // world_size - left = total_size % world_size - shard_sizes = [shard_size + int(r < left) for r in range(world_size)] - - begin = sum(shard_sizes[:rank]) - end = min(sum(shard_sizes[: rank + 1]), total_size) - return range(begin, end) - - def __iter__(self): - yield from self._local_indices - - def __len__(self): - return len(self._local_indices) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/cse/vertex_feature_embedder.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/cse/vertex_feature_embedder.py deleted file mode 100644 index dcb2f2039cf40b834235dc81143d0c94a7c33936..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/cse/vertex_feature_embedder.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -import pickle -import torch -from torch import nn - -from detectron2.utils.file_io import PathManager - -from .utils import normalize_embeddings - - -class VertexFeatureEmbedder(nn.Module): - """ - Class responsible for embedding vertex features. Mapping from - feature space to the embedding space is a tensor of size [K, D], where - K = number of dimensions in the feature space - D = number of dimensions in the embedding space - Vertex features is a tensor of size [N, K], where - N = number of vertices - K = number of dimensions in the feature space - Vertex embeddings are computed as F * E = tensor of size [N, D] - """ - - def __init__( - self, num_vertices: int, feature_dim: int, embed_dim: int, train_features: bool = False - ): - """ - Initialize embedder, set random embeddings - - Args: - num_vertices (int): number of vertices to embed - feature_dim (int): number of dimensions in the feature space - embed_dim (int): number of dimensions in the embedding space - train_features (bool): determines whether vertex features should - be trained (default: False) - """ - super(VertexFeatureEmbedder, self).__init__() - if train_features: - self.features = nn.Parameter(torch.Tensor(num_vertices, feature_dim)) - else: - self.register_buffer("features", torch.Tensor(num_vertices, feature_dim)) - self.embeddings = nn.Parameter(torch.Tensor(feature_dim, embed_dim)) - self.reset_parameters() - - @torch.no_grad() - def reset_parameters(self): - self.features.zero_() - self.embeddings.zero_() - - def forward(self) -> torch.Tensor: - """ - Produce vertex embeddings, a tensor of shape [N, D] where: - N = number of vertices - D = number of dimensions in the embedding space - - Return: - Full vertex embeddings, a tensor of shape [N, D] - """ - return normalize_embeddings(torch.mm(self.features, self.embeddings)) - - @torch.no_grad() - def load(self, fpath: str): - """ - Load data from a file - - Args: - fpath (str): file path to load data from - """ - with PathManager.open(fpath, "rb") as hFile: - data = pickle.load(hFile) # pyre-ignore[6] - for name in ["features", "embeddings"]: - if name in data: - getattr(self, name).copy_( - 
torch.tensor(data[name]).float().to(device=getattr(self, name).device) - ) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/predictors/cse_confidence.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/predictors/cse_confidence.py deleted file mode 100644 index 8220337cea8eb87bbdf74378079551259dcc37e2..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/predictors/cse_confidence.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from typing import Any -import torch -from torch.nn import functional as F - -from detectron2.config import CfgNode -from detectron2.layers import ConvTranspose2d - -from densepose.modeling.confidence import DensePoseConfidenceModelConfig -from densepose.modeling.utils import initialize_module_params -from densepose.structures import decorate_cse_predictor_output_class_with_confidences - - -class DensePoseEmbeddingConfidencePredictorMixin: - """ - Predictor contains the last layers of a DensePose model that take DensePose head - outputs as an input and produce model outputs. Confidence predictor mixin is used - to generate confidences for coarse segmentation estimated by some - base predictor. Several assumptions need to hold for the base predictor: - 1) the `forward` method must return CSE DensePose head outputs, - tensor of shape [N, D, H, W] - 2) `interp2d` method must be defined to perform bilinear interpolation; - the same method is typically used for masks and confidences - Confidence predictor mixin provides confidence estimates, as described in: - N. Neverova et al., Correlated Uncertainty for Learning Dense Correspondences - from Noisy Labels, NeurIPS 2019 - A. Sanakoyeu et al., Transferring Dense Pose to Proximal Animal Classes, CVPR 2020 - """ - - def __init__(self, cfg: CfgNode, input_channels: int): - """ - Initialize confidence predictor using configuration options. - - Args: - cfg (CfgNode): configuration options - input_channels (int): number of input channels - """ - # we rely on base predictor to call nn.Module.__init__ - super().__init__(cfg, input_channels) # pyre-ignore[19] - self.confidence_model_cfg = DensePoseConfidenceModelConfig.from_cfg(cfg) - self._initialize_confidence_estimation_layers(cfg, input_channels) - self._registry = {} - initialize_module_params(self) # pyre-ignore[6] - - def _initialize_confidence_estimation_layers(self, cfg: CfgNode, dim_in: int): - """ - Initialize confidence estimation layers based on configuration options - - Args: - cfg (CfgNode): configuration options - dim_in (int): number of input channels - """ - kernel_size = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECONV_KERNEL - if self.confidence_model_cfg.segm_confidence.enabled: - self.coarse_segm_confidence_lowres = ConvTranspose2d( # pyre-ignore[16] - dim_in, 1, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - - def forward(self, head_outputs: torch.Tensor): - """ - Perform forward operation on head outputs used as inputs for the predictor. - Calls forward method from the base predictor and uses its outputs to compute - confidences. 
- - Args: - head_outputs (Tensor): head outputs used as predictor inputs - Return: - An instance of outputs with confidences, - see `decorate_cse_predictor_output_class_with_confidences` - """ - # assuming base class returns SIUV estimates in its first result - base_predictor_outputs = super().forward(head_outputs) # pyre-ignore[16] - - # create output instance by extending base predictor outputs: - output = self._create_output_instance(base_predictor_outputs) - - if self.confidence_model_cfg.segm_confidence.enabled: - # base predictor outputs are assumed to have `coarse_segm` attribute - # base predictor is assumed to define `interp2d` method for bilinear interpolation - output.coarse_segm_confidence = ( - F.softplus( - self.interp2d( # pyre-ignore[16] - self.coarse_segm_confidence_lowres(head_outputs) # pyre-ignore[16] - ) - ) - + self.confidence_model_cfg.segm_confidence.epsilon - ) - output.coarse_segm = base_predictor_outputs.coarse_segm * torch.repeat_interleave( - output.coarse_segm_confidence, base_predictor_outputs.coarse_segm.shape[1], dim=1 - ) - - return output - - def _create_output_instance(self, base_predictor_outputs: Any): - """ - Create an instance of predictor outputs by copying the outputs from the - base predictor and initializing confidence - - Args: - base_predictor_outputs: an instance of base predictor outputs - (the outputs type is assumed to be a dataclass) - Return: - An instance of outputs with confidences - """ - PredictorOutput = decorate_cse_predictor_output_class_with_confidences( - type(base_predictor_outputs) # pyre-ignore[6] - ) - # base_predictor_outputs is assumed to be a dataclass - # reassign all the fields from base_predictor_outputs (no deep copy!), add new fields - output = PredictorOutput( - **base_predictor_outputs.__dict__, - coarse_segm_confidence=None, - ) - return output diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/common/coco_loader_lsj.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/common/coco_loader_lsj.py deleted file mode 100644 index e6c2f1e913a9f629290ce345fc4ffd4db4037e14..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/common/coco_loader_lsj.py +++ /dev/null @@ -1,22 +0,0 @@ -import detectron2.data.transforms as T -from detectron2 import model_zoo -from detectron2.config import LazyCall as L - -# Data using LSJ -image_size = 1024 -dataloader = model_zoo.get_config("common/data/coco.py").dataloader -dataloader.train.mapper.augmentations = [ - L(T.RandomFlip)(horizontal=True), # flip first - L(T.ResizeScale)( - min_scale=0.1, max_scale=2.0, target_height=image_size, target_width=image_size - ), - L(T.FixedSizeCrop)(crop_size=(image_size, image_size), pad=False), -] -dataloader.train.mapper.image_format = "RGB" -dataloader.train.total_batch_size = 64 -# recompute boxes due to cropping -dataloader.train.mapper.recompute_boxes = True - -dataloader.test.mapper.augmentations = [ - L(T.ResizeShortestEdge)(short_edge_length=image_size, max_size=image_size), -] diff --git a/spaces/chanhi0603/Create_subtitles_for_videos_ChatGPT/gpt_translate.py b/spaces/chanhi0603/Create_subtitles_for_videos_ChatGPT/gpt_translate.py deleted file mode 100644 index 01cdc56d3f1cce9b3e7a1372c6af3cfca1b31bb4..0000000000000000000000000000000000000000 --- a/spaces/chanhi0603/Create_subtitles_for_videos_ChatGPT/gpt_translate.py +++ /dev/null @@ -1,57 +0,0 @@ -from GptSrtTranslator import GptSrtTranslator -import 
time - -def Gpt_translate(chatgpt_api_key, input_file, input_language, output_language, os_path): - GptSrtTranslator.API_KEY = chatgpt_api_key - GptSrtTranslator.MODEL_ENGINE = "gpt-3.5-turbo-0301" - - input_file = input_file[:input_file.rfind('.')]+'.srt' - input_language, output_language = Short2long_lang(input_language, output_language) - subtitle = GptSrtTranslator(input_file=f'{os_path}{input_file}', - output_file=f'{os_path}{input_file.split(".")[0]}_gpt.srt', - input_language=input_language, - output_language=output_language, - # break after 40 characters - subtitle_line_max_length=40) - - subtitle.translate() - time.sleep(0.5) - with open(f'{os_path}{input_file.split(".")[0]}_gpt.srt', "r", encoding='utf-8') as f: - gpt_content = f.read() - f.close() - return gpt_content - -# 언어 치환 -def Short2long_lang(input_language, output_language): - if input_language == 'en': input_language = 'english' - elif input_language == 'ja': input_language = 'japanese' - elif input_language == 'ko': input_language = 'korean' - elif input_language == 'zh': input_language = 'chinese' - elif input_language == 'es': input_language = 'spanish' - elif input_language == 'fr': input_language = 'french' - - if output_language == 'en': output_language = 'english' - elif output_language == 'ja': output_language = 'japanese' - elif output_language == 'ko': output_language = 'korean' - elif output_language == 'zh': output_language = 'chinese' - elif output_language == 'es': output_language = 'spanish' - elif output_language == 'fr': output_language = 'french' - - return input_language, output_language - -def Long2short_lang(input_language, output_language): - if input_language == 'english': input_language = 'en' - elif input_language == 'japanese': input_language = 'ja' - elif input_language == 'korean': input_language = 'ko' - elif input_language == 'chinese': input_language = 'zh' - elif input_language == 'spanish': input_language = 'es' - elif input_language == 'french': input_language = 'fr' - - if output_language == 'english': output_language = 'en' - elif output_language == 'japanese': output_language = 'ja' - elif output_language == 'korean': output_language = 'ko' - elif output_language == 'chinese': output_language = 'zh' - elif output_language == 'spanish': output_language = 'es' - elif output_language == 'french': output_language = 'fr' - - return input_language, output_language \ No newline at end of file diff --git a/spaces/chansung/llm-discord-bot/README.md b/spaces/chansung/llm-discord-bot/README.md deleted file mode 100644 index f06d79188856c748f9ffc048505226bfc24c78d4..0000000000000000000000000000000000000000 --- a/spaces/chansung/llm-discord-bot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Llm Discord Bot -emoji: 📈 -colorFrom: pink -colorTo: purple -sdk: docker -pinned: false -license: apache-2.0 -app_port: 7860 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chompionsawelo/whisper_transcribe/main/set_up.py b/spaces/chompionsawelo/whisper_transcribe/main/set_up.py deleted file mode 100644 index 255846ad86012d8035888eb3d2cb4b3b7b7ec756..0000000000000000000000000000000000000000 --- a/spaces/chompionsawelo/whisper_transcribe/main/set_up.py +++ /dev/null @@ -1,88 +0,0 @@ -from ui.ui_component import * -from tool.file_name import * -from main.diarization import start_diarization -from main.transcribe import start_transcribe -from tool.ffmpeg_tool import * -import gradio as gr -import re -import os -import 
tool.text_file_tool as text_file_tool - - -def prepare_input(input_file, start_time, end_time, lang, model_size, progress=gr.Progress()): - gr.Info(current_ui_lang["progress_starting_process"]) - - check_input_video_settings(input_file, start_time, end_time) - if lang is None: - raise gr.Error(current_ui_lang["lang_radio_warning"]) - if model_size is None: - raise gr.Error(current_ui_lang["model_dropdown_warning"]) - - print(f"SOURCE: {input_file}") - - # Convert video to audio - progress(0.2, desc=current_ui_lang["progress_preparing_video"]) - convert_video_to_audio(input_file, start_time, end_time) - - # Start diarization - progress(0.4, desc=current_ui_lang["progress_acquiring_diarization"]) - start_diarization(dir_cut_audio_file) - - # Start transcribing - progress(0.6, desc=current_ui_lang["progress_transcribing_audio"]) - start_transcribe(lang, model_size, progress) - - # Cutting video - progress(0.8, desc=current_ui_lang["progress_cutting_video"]) - cut_video(input_file, start_time, end_time) - - # Get complete transcribe into string - transcribe_txt_list, _ = text_file_tool.read_transcribe_subtitle_file( - False) - transcribe_txt = "\n".join(transcribe_txt_list) - - # Return to output textbox, output files, and output video - return [ - transcribe_txt, - [dir_adjusted_transcribe_file, dir_adjusted_subtitle_file], - [dir_cut_video_file, dir_adjusted_subtitle_file] - ] - - -def prepare_video_subtitle(input_file, start_time, end_time): - check_input_video_settings(input_file, start_time, end_time) - gr.Info(current_ui_lang["progress_add_subtitle"]) - - # Add subtitle to video - add_subtitle_to_video() - - # Return to output files - return [dir_base_transcribe_file, dir_base_subtitle_file, dir_video_subtitle_file] - - -def check_input_video_settings(input_file, start_time, end_time): - if input_file is None or not os.path.exists(input_file): - raise gr.Error(current_ui_lang["input_video_warning"]) - if validate_time_format(start_time) is False: - raise gr.Error(current_ui_lang["start_time_warning"]) - if validate_time_format(end_time) is False: - raise gr.Error(current_ui_lang["end_time_warning"]) - if (check_if_time_invalid(start_time, end_time)): - raise gr.Error(current_ui_lang["time_invalid"]) - - -def validate_time_format(input_string): - pattern = re.compile(r'^\d{2}:\d{2}:\d{2}$') - return pattern.match(input_string) is not None - - -def check_if_time_invalid(start_time, end_time): - start = get_total_seconds(start_time) - end = get_total_seconds(end_time) - return start >= end - - -def get_total_seconds(time_string): - hours, minutes, seconds = map(int, time_string.split(":")) - total_seconds = hours * 3600 + minutes * 60 + seconds - return total_seconds diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/feaLib/variableScalar.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/feaLib/variableScalar.py deleted file mode 100644 index c97b4354298d7c933fa812084a71a4b6c1ac32b8..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/feaLib/variableScalar.py +++ /dev/null @@ -1,112 +0,0 @@ -from fontTools.varLib.models import VariationModel, normalizeValue, piecewiseLinearMap - - -def Location(loc): - return tuple(sorted(loc.items())) - - -class VariableScalar: - """A scalar with different values at different points in the designspace.""" - - def __init__(self, location_value={}): - self.values = {} - self.axes = {} - for location, 
value in location_value.items(): - self.add_value(location, value) - - def __repr__(self): - items = [] - for location, value in self.values.items(): - loc = ",".join(["%s=%i" % (ax, loc) for ax, loc in location]) - items.append("%s:%i" % (loc, value)) - return "(" + (" ".join(items)) + ")" - - @property - def does_vary(self): - values = list(self.values.values()) - return any(v != values[0] for v in values[1:]) - - @property - def axes_dict(self): - if not self.axes: - raise ValueError( - ".axes must be defined on variable scalar before interpolating" - ) - return {ax.axisTag: ax for ax in self.axes} - - def _normalized_location(self, location): - location = self.fix_location(location) - normalized_location = {} - for axtag in location.keys(): - if axtag not in self.axes_dict: - raise ValueError("Unknown axis %s in %s" % (axtag, location)) - axis = self.axes_dict[axtag] - normalized_location[axtag] = normalizeValue( - location[axtag], (axis.minValue, axis.defaultValue, axis.maxValue) - ) - - return Location(normalized_location) - - def fix_location(self, location): - location = dict(location) - for tag, axis in self.axes_dict.items(): - if tag not in location: - location[tag] = axis.defaultValue - return location - - def add_value(self, location, value): - if self.axes: - location = self.fix_location(location) - - self.values[Location(location)] = value - - def fix_all_locations(self): - self.values = { - Location(self.fix_location(l)): v for l, v in self.values.items() - } - - @property - def default(self): - self.fix_all_locations() - key = Location({ax.axisTag: ax.defaultValue for ax in self.axes}) - if key not in self.values: - raise ValueError("Default value could not be found") - # I *guess* we could interpolate one, but I don't know how. - return self.values[key] - - def value_at_location(self, location, model_cache=None, avar=None): - loc = location - if loc in self.values.keys(): - return self.values[loc] - values = list(self.values.values()) - return self.model(model_cache, avar).interpolateFromMasters(loc, values) - - def model(self, model_cache=None, avar=None): - if model_cache is not None: - key = tuple(self.values.keys()) - if key in model_cache: - return model_cache[key] - locations = [dict(self._normalized_location(k)) for k in self.values.keys()] - if avar is not None: - mapping = avar.segments - locations = [ - { - k: piecewiseLinearMap(v, mapping[k]) if k in mapping else v - for k, v in location.items() - } - for location in locations - ] - m = VariationModel(locations) - if model_cache is not None: - model_cache[key] = m - return m - - def get_deltas_and_supports(self, model_cache=None, avar=None): - values = list(self.values.values()) - return self.model(model_cache, avar).getDeltasAndSupports(values) - - def add_to_variation_store(self, store_builder, model_cache=None, avar=None): - deltas, supports = self.get_deltas_and_supports(model_cache, avar) - store_builder.setSupports(supports) - index = store_builder.storeDeltas(deltas) - return int(self.default), index diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/merge/__main__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/merge/__main__.py deleted file mode 100644 index ff632d49c54e678623a27998a9d51b7cf84df81f..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/merge/__main__.py +++ /dev/null @@ -1,6 +0,0 @@ -import sys -from fontTools.merge import main 
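# --- Illustrative sketch (standalone, not part of the deleted files above): the
# VariableScalar class in feaLib/variableScalar.py earlier in this diff maps user-space
# axis positions into the [-1, 1] design space with fontTools.varLib.models.normalizeValue
# before handing them to VariationModel. With a hypothetical weight axis
# (minValue=100, defaultValue=400, maxValue=900):

from fontTools.varLib.models import normalizeValue

for user_value in (100, 400, 650, 900):
    # values below the default scale against (min, default), values above against (default, max)
    print(user_value, normalizeValue(user_value, (100, 400, 900)))
# expected mapping: 100 -> -1.0, 400 -> 0.0, 650 -> 0.5, 900 -> 1.0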
- - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ufoLib/kerning.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ufoLib/kerning.py deleted file mode 100644 index 8a1dca5b680fdd02d1e6ef5797e33e617005c254..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ufoLib/kerning.py +++ /dev/null @@ -1,91 +0,0 @@ -def lookupKerningValue( - pair, kerning, groups, fallback=0, glyphToFirstGroup=None, glyphToSecondGroup=None -): - """ - Note: This expects kerning to be a flat dictionary - of kerning pairs, not the nested structure used - in kerning.plist. - - >>> groups = { - ... "public.kern1.O" : ["O", "D", "Q"], - ... "public.kern2.E" : ["E", "F"] - ... } - >>> kerning = { - ... ("public.kern1.O", "public.kern2.E") : -100, - ... ("public.kern1.O", "F") : -200, - ... ("D", "F") : -300 - ... } - >>> lookupKerningValue(("D", "F"), kerning, groups) - -300 - >>> lookupKerningValue(("O", "F"), kerning, groups) - -200 - >>> lookupKerningValue(("O", "E"), kerning, groups) - -100 - >>> lookupKerningValue(("O", "O"), kerning, groups) - 0 - >>> lookupKerningValue(("E", "E"), kerning, groups) - 0 - >>> lookupKerningValue(("E", "O"), kerning, groups) - 0 - >>> lookupKerningValue(("X", "X"), kerning, groups) - 0 - >>> lookupKerningValue(("public.kern1.O", "public.kern2.E"), - ... kerning, groups) - -100 - >>> lookupKerningValue(("public.kern1.O", "F"), kerning, groups) - -200 - >>> lookupKerningValue(("O", "public.kern2.E"), kerning, groups) - -100 - >>> lookupKerningValue(("public.kern1.X", "public.kern2.X"), kerning, groups) - 0 - """ - # quickly check to see if the pair is in the kerning dictionary - if pair in kerning: - return kerning[pair] - # create glyph to group mapping - if glyphToFirstGroup is not None: - assert glyphToSecondGroup is not None - if glyphToSecondGroup is not None: - assert glyphToFirstGroup is not None - if glyphToFirstGroup is None: - glyphToFirstGroup = {} - glyphToSecondGroup = {} - for group, groupMembers in groups.items(): - if group.startswith("public.kern1."): - for glyph in groupMembers: - glyphToFirstGroup[glyph] = group - elif group.startswith("public.kern2."): - for glyph in groupMembers: - glyphToSecondGroup[glyph] = group - # get group names and make sure first and second are glyph names - first, second = pair - firstGroup = secondGroup = None - if first.startswith("public.kern1."): - firstGroup = first - first = None - else: - firstGroup = glyphToFirstGroup.get(first) - if second.startswith("public.kern2."): - secondGroup = second - second = None - else: - secondGroup = glyphToSecondGroup.get(second) - # make an ordered list of pairs to look up - pairs = [ - (first, second), - (first, secondGroup), - (firstGroup, second), - (firstGroup, secondGroup), - ] - # look up the pairs and return any matches - for pair in pairs: - if pair in kerning: - return kerning[pair] - # use the fallback value - return fallback - - -if __name__ == "__main__": - import doctest - - doctest.testmod() diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/generic.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/generic.py deleted file mode 100644 index 18e27405a31f78bceda9aec5b78aeb8f68f33036..0000000000000000000000000000000000000000 --- 
a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fsspec/generic.py +++ /dev/null @@ -1,302 +0,0 @@ -import inspect -import logging - -from .asyn import AsyncFileSystem -from .callbacks import _DEFAULT_CALLBACK -from .core import filesystem, get_filesystem_class, split_protocol - -_generic_fs = {} -logger = logging.getLogger("fsspec.generic") - - -def set_generic_fs(protocol, **storage_options): - _generic_fs[protocol] = filesystem(protocol, **storage_options) - - -default_method = "default" - - -def _resolve_fs(url, method=None, protocol=None, storage_options=None): - """Pick instance of backend FS""" - method = method or default_method - protocol = protocol or split_protocol(url)[0] - storage_options = storage_options or {} - if method == "default": - return filesystem(protocol) - if method == "generic": - return _generic_fs[protocol] - if method == "current": - cls = get_filesystem_class(protocol) - return cls.current() - if method == "options": - return filesystem(protocol, **storage_options.get(protocol, {})) - raise ValueError(f"Unknown FS resolution method: {method}") - - -def rsync( - source, - destination, - delete_missing=False, - source_field="size", - dest_field="size", - update_cond="different", - inst_kwargs=None, - fs=None, - **kwargs, -): - """Sync files between two directory trees - - (experimental) - - Parameters - ---------- - source: str - Root of the directory tree to take files from. - destination: str - Root path to copy into. The contents of this location should be - identical to the contents of ``source`` when done. - delete_missing: bool - If there are paths in the destination that don't exist in the - source and this is True, delete them. Otherwise, leave them alone. - source_field: str - If ``update_field`` is "different", this is the key in the info - of source files to consider for difference. - dest_field: str - If ``update_field`` is "different", this is the key in the info - of destination files to consider for difference. - update_cond: "different"|"always"|"never" - If "always", every file is copied, regardless of whether it exists in - the destination. If "never", files that exist in the destination are - not copied again. If "different" (default), only copy if the info - fields given by ``source_field`` and ``dest_field`` (usually "size") - are different. Other comparisons may be added in the future. - inst_kwargs: dict|None - If ``fs`` is None, use this set of keyword arguments to make a - GenericFileSystem instance - fs: GenericFileSystem|None - Instance to use if explicitly given. The instance defines how to - to make downstream file system instances from paths. 
- """ - fs = fs or GenericFileSystem(**(inst_kwargs or {})) - source = fs._strip_protocol(source) - destination = fs._strip_protocol(destination) - allfiles = fs.find(source, withdirs=True, detail=True) - if not fs.isdir(source): - raise ValueError("Can only rsync on a directory") - otherfiles = fs.find(destination, withdirs=True, detail=True) - dirs = [ - a - for a, v in allfiles.items() - if v["type"] == "directory" and a.replace(source, destination) not in otherfiles - ] - logger.debug(f"{len(dirs)} directories to create") - for dirn in dirs: - # no async - fs.mkdirs(dirn.replace(source, destination), exist_ok=True) - allfiles = {a: v for a, v in allfiles.items() if v["type"] == "file"} - logger.debug(f"{len(allfiles)} files to consider for copy") - to_delete = [ - o - for o, v in otherfiles.items() - if o.replace(destination, source) not in allfiles and v["type"] == "file" - ] - for k, v in allfiles.copy().items(): - otherfile = k.replace(source, destination) - if otherfile in otherfiles: - if update_cond == "always": - allfiles[k] = otherfile - elif update_cond == "different": - if v[source_field] != otherfiles[otherfile][dest_field]: - # details mismatch, make copy - allfiles[k] = otherfile - else: - # details match, don't copy - allfiles.pop(k) - else: - # file not in target yet - allfiles[k] = otherfile - if allfiles: - source_files, target_files = zip(*allfiles.items()) - logger.debug(f"{len(source_files)} files to copy") - fs.cp(source_files, target_files, **kwargs) - if delete_missing: - logger.debug(f"{len(to_delete)} files to delete") - fs.rm(to_delete) - - -class GenericFileSystem(AsyncFileSystem): - """Wrapper over all other FS types - - - - This implementation is a single unified interface to be able to run FS operations - over generic URLs, and dispatch to the specific implementations using the URL - protocol prefix. - - Note: instances of this FS are always async, even if you never use it with any async - backend. - """ - - protocol = "generic" # there is no real reason to ever use a protocol with this FS - - def __init__(self, default_method="default", **kwargs): - """ - - Parameters - ---------- - default_method: str (optional) - Defines how to configure backend FS instances. Options are: - - "default": instantiate like FSClass(), with no - extra arguments; this is the default instance of that FS, and can be - configured via the config system - - "generic": takes instances from the `_generic_fs` dict in this module, - which you must populate before use. 
Keys are by protocol - - "current": takes the most recently instantiated version of each FS - """ - self.method = default_method - super(GenericFileSystem, self).__init__(**kwargs) - - def _strip_protocol(self, path): - # normalization only - fs = _resolve_fs(path, self.method) - return fs.unstrip_protocol(fs._strip_protocol(path)) - - async def _find(self, path, maxdepth=None, withdirs=False, detail=False, **kwargs): - fs = _resolve_fs(path, self.method) - if fs.async_impl: - out = await fs._find( - path, maxdepth=maxdepth, withdirs=withdirs, detail=detail, **kwargs - ) - else: - out = fs.find( - path, maxdepth=maxdepth, withdirs=withdirs, detail=detail, **kwargs - ) - result = {} - for k, v in out.items(): - name = fs.unstrip_protocol(k) - v["name"] = name - result[name] = v - if detail: - return result - return list(result) - - async def _info(self, url, **kwargs): - fs = _resolve_fs(url, self.method) - if fs.async_impl: - out = await fs._info(url, **kwargs) - else: - out = fs.info(url, **kwargs) - out["name"] = fs.unstrip_protocol(out["name"]) - return out - - async def _ls( - self, - url, - detail=True, - **kwargs, - ): - fs = _resolve_fs(url, self.method) - if fs.async_impl: - out = await fs._ls(url, detail=True, **kwargs) - else: - out = fs.ls(url, detail=True, **kwargs) - for o in out: - o["name"] = fs.unstrip_protocol(o["name"]) - if detail: - return out - else: - return [o["name"] for o in out] - - async def _cat_file( - self, - url, - **kwargs, - ): - fs = _resolve_fs(url, self.method) - if fs.async_impl: - return await fs._cat_file(url, **kwargs) - else: - return fs.cat_file(url, **kwargs) - - async def _pipe_file( - self, - path, - value, - **kwargs, - ): - fs = _resolve_fs(path, self.method) - if fs.async_impl: - return await fs._pipe_file(path, value, **kwargs) - else: - return fs.pipe_file(path, value, **kwargs) - - async def _rm(self, url, **kwargs): - fs = _resolve_fs(url, self.method) - if fs.async_impl: - await fs._rm(url, **kwargs) - else: - fs.rm(url, **kwargs) - - async def _makedirs(self, path, exist_ok=False): - fs = _resolve_fs(path, self.method) - if fs.async_impl: - await fs._makedirs(path, exist_ok=exist_ok) - else: - fs.makedirs(path, exist_ok=exist_ok) - - def rsync(self, source, destination, **kwargs): - """Sync files between two directory trees - - See `func:rsync` for more details. 
- """ - rsync(source, destination, fs=self, **kwargs) - - async def _cp_file( - self, - url, - url2, - blocksize=2**20, - callback=_DEFAULT_CALLBACK, - **kwargs, - ): - fs = _resolve_fs(url, self.method) - fs2 = _resolve_fs(url2, self.method) - if fs is fs2: - # pure remote - if fs.async_impl: - return await fs._cp_file(url, url2, **kwargs) - else: - return fs.cp_file(url, url2, **kwargs) - kw = {"blocksize": 0, "cache_type": "none"} - try: - f1 = ( - await fs.open_async(url, "rb") - if hasattr(fs, "open_async") - else fs.open(url, "rb", **kw) - ) - callback.set_size(await maybe_await(f1.size)) - f2 = ( - await fs2.open_async(url2, "wb") - if hasattr(fs2, "open_async") - else fs2.open(url2, "wb", **kw) - ) - while f1.size is None or f2.tell() < f1.size: - data = await maybe_await(f1.read(blocksize)) - if f1.size is None and not data: - break - await maybe_await(f2.write(data)) - callback.absolute_update(f2.tell()) - finally: - try: - await maybe_await(f2.close()) - await maybe_await(f1.close()) - except NameError: - # fail while opening f1 or f2 - pass - - -async def maybe_await(cor): - if inspect.iscoroutine(cor): - return await cor - else: - return cor diff --git a/spaces/cihyFjudo/fairness-paper-search/Despedida Maria Grever Partitura PDF Download Learn the Beautiful Song by the Mexican Composer.md b/spaces/cihyFjudo/fairness-paper-search/Despedida Maria Grever Partitura PDF Download Learn the Beautiful Song by the Mexican Composer.md deleted file mode 100644 index 8549657fb5d40667913e8b1b33b61df30a404dc5..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Despedida Maria Grever Partitura PDF Download Learn the Beautiful Song by the Mexican Composer.md +++ /dev/null @@ -1,6 +0,0 @@ -

-despedida maria grever partitura pdf download
-DOWNLOAD https://tinurli.com/2uwk4B
- - aaccfb2cb3
-
-
-

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Roblox Walk Through Walls Hack Download Mac A Simple and Easy Tutorial.md b/spaces/cihyFjudo/fairness-paper-search/Roblox Walk Through Walls Hack Download Mac A Simple and Easy Tutorial.md deleted file mode 100644 index 512122c0aa71a14b9596a09f36c24712b6870152..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Roblox Walk Through Walls Hack Download Mac A Simple and Easy Tutorial.md +++ /dev/null @@ -1,6 +0,0 @@ -

-Roblox Walk Through Walls Hack Download Mac
-Download File https://tinurli.com/2uwhRM
- - aaccfb2cb3
-
-
-

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Roxas Boulevard Pasay City Zip Code The Benefits and Challenges of Living in this Area.md b/spaces/cihyFjudo/fairness-paper-search/Roxas Boulevard Pasay City Zip Code The Benefits and Challenges of Living in this Area.md deleted file mode 100644 index 1450c0cdf2f4b2886ed31d0a2faa16af99f78ceb..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Roxas Boulevard Pasay City Zip Code The Benefits and Challenges of Living in this Area.md +++ /dev/null @@ -1,6 +0,0 @@ -

-Roxas Boulevard Pasay City Zip Code
-Download Zip ->>> https://tinurli.com/2uwjPA
- - aaccfb2cb3
-
-
-

    diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/GifImagePlugin.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/GifImagePlugin.py deleted file mode 100644 index cf2993e38920bdebf79c6342875c2898e174ef6b..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/GifImagePlugin.py +++ /dev/null @@ -1,1064 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# GIF file handling -# -# History: -# 1995-09-01 fl Created -# 1996-12-14 fl Added interlace support -# 1996-12-30 fl Added animation support -# 1997-01-05 fl Added write support, fixed local colour map bug -# 1997-02-23 fl Make sure to load raster data in getdata() -# 1997-07-05 fl Support external decoder (0.4) -# 1998-07-09 fl Handle all modes when saving (0.5) -# 1998-07-15 fl Renamed offset attribute to avoid name clash -# 2001-04-16 fl Added rewind support (seek to frame 0) (0.6) -# 2001-04-17 fl Added palette optimization (0.7) -# 2002-06-06 fl Added transparency support for save (0.8) -# 2004-02-24 fl Disable interlacing for small images -# -# Copyright (c) 1997-2004 by Secret Labs AB -# Copyright (c) 1995-2004 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import itertools -import math -import os -import subprocess -from enum import IntEnum - -from . import Image, ImageChops, ImageFile, ImagePalette, ImageSequence -from ._binary import i16le as i16 -from ._binary import o8 -from ._binary import o16le as o16 - - -class LoadingStrategy(IntEnum): - """.. versionadded:: 9.1.0""" - - RGB_AFTER_FIRST = 0 - RGB_AFTER_DIFFERENT_PALETTE_ONLY = 1 - RGB_ALWAYS = 2 - - -#: .. versionadded:: 9.1.0 -LOADING_STRATEGY = LoadingStrategy.RGB_AFTER_FIRST - -# -------------------------------------------------------------------- -# Identify/read GIF files - - -def _accept(prefix): - return prefix[:6] in [b"GIF87a", b"GIF89a"] - - -## -# Image plugin for GIF images. This plugin supports both GIF87 and -# GIF89 images. 
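# ---------------------------------------------------------------------------
# Editor's aside (illustrative sketch only; not part of the deleted
# GifImagePlugin.py shown in this diff): how the LoadingStrategy /
# LOADING_STRATEGY knobs defined in this module are typically used from
# application code. "animation.gif" is a placeholder path and Pillow >= 9.1.0
# is assumed.
from PIL import Image, GifImagePlugin

# Decode every frame straight to RGB/RGBA instead of palette ("P") mode.
GifImagePlugin.LOADING_STRATEGY = GifImagePlugin.LoadingStrategy.RGB_ALWAYS

with Image.open("animation.gif") as im:   # assumed to be a multi-frame GIF
    im.seek(1)                            # advance to the second frame
    print(im.mode)                        # "RGB" or "RGBA" under RGB_ALWAYS
# --------------------------------------------------------------- end of aside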
- - -class GifImageFile(ImageFile.ImageFile): - format = "GIF" - format_description = "Compuserve GIF" - _close_exclusive_fp_after_loading = False - - global_palette = None - - def data(self): - s = self.fp.read(1) - if s and s[0]: - return self.fp.read(s[0]) - return None - - def _is_palette_needed(self, p): - for i in range(0, len(p), 3): - if not (i // 3 == p[i] == p[i + 1] == p[i + 2]): - return True - return False - - def _open(self): - # Screen - s = self.fp.read(13) - if not _accept(s): - msg = "not a GIF file" - raise SyntaxError(msg) - - self.info["version"] = s[:6] - self._size = i16(s, 6), i16(s, 8) - self.tile = [] - flags = s[10] - bits = (flags & 7) + 1 - - if flags & 128: - # get global palette - self.info["background"] = s[11] - # check if palette contains colour indices - p = self.fp.read(3 << bits) - if self._is_palette_needed(p): - p = ImagePalette.raw("RGB", p) - self.global_palette = self.palette = p - - self._fp = self.fp # FIXME: hack - self.__rewind = self.fp.tell() - self._n_frames = None - self._is_animated = None - self._seek(0) # get ready to read first frame - - @property - def n_frames(self): - if self._n_frames is None: - current = self.tell() - try: - while True: - self._seek(self.tell() + 1, False) - except EOFError: - self._n_frames = self.tell() + 1 - self.seek(current) - return self._n_frames - - @property - def is_animated(self): - if self._is_animated is None: - if self._n_frames is not None: - self._is_animated = self._n_frames != 1 - else: - current = self.tell() - if current: - self._is_animated = True - else: - try: - self._seek(1, False) - self._is_animated = True - except EOFError: - self._is_animated = False - - self.seek(current) - return self._is_animated - - def seek(self, frame): - if not self._seek_check(frame): - return - if frame < self.__frame: - self.im = None - self._seek(0) - - last_frame = self.__frame - for f in range(self.__frame + 1, frame + 1): - try: - self._seek(f) - except EOFError as e: - self.seek(last_frame) - msg = "no more images in GIF file" - raise EOFError(msg) from e - - def _seek(self, frame, update_image=True): - if frame == 0: - # rewind - self.__offset = 0 - self.dispose = None - self.__frame = -1 - self._fp.seek(self.__rewind) - self.disposal_method = 0 - if "comment" in self.info: - del self.info["comment"] - else: - # ensure that the previous frame was loaded - if self.tile and update_image: - self.load() - - if frame != self.__frame + 1: - msg = f"cannot seek to frame {frame}" - raise ValueError(msg) - - self.fp = self._fp - if self.__offset: - # backup to last frame - self.fp.seek(self.__offset) - while self.data(): - pass - self.__offset = 0 - - s = self.fp.read(1) - if not s or s == b";": - raise EOFError - - palette = None - - info = {} - frame_transparency = None - interlace = None - frame_dispose_extent = None - while True: - if not s: - s = self.fp.read(1) - if not s or s == b";": - break - - elif s == b"!": - # - # extensions - # - s = self.fp.read(1) - block = self.data() - if s[0] == 249: - # - # graphic control extension - # - flags = block[0] - if flags & 1: - frame_transparency = block[3] - info["duration"] = i16(block, 1) * 10 - - # disposal method - find the value of bits 4 - 6 - dispose_bits = 0b00011100 & flags - dispose_bits = dispose_bits >> 2 - if dispose_bits: - # only set the dispose if it is not - # unspecified. 
I'm not sure if this is - # correct, but it seems to prevent the last - # frame from looking odd for some animations - self.disposal_method = dispose_bits - elif s[0] == 254: - # - # comment extension - # - comment = b"" - - # Read this comment block - while block: - comment += block - block = self.data() - - if "comment" in info: - # If multiple comment blocks in frame, separate with \n - info["comment"] += b"\n" + comment - else: - info["comment"] = comment - s = None - continue - elif s[0] == 255 and frame == 0: - # - # application extension - # - info["extension"] = block, self.fp.tell() - if block[:11] == b"NETSCAPE2.0": - block = self.data() - if len(block) >= 3 and block[0] == 1: - self.info["loop"] = i16(block, 1) - while self.data(): - pass - - elif s == b",": - # - # local image - # - s = self.fp.read(9) - - # extent - x0, y0 = i16(s, 0), i16(s, 2) - x1, y1 = x0 + i16(s, 4), y0 + i16(s, 6) - if (x1 > self.size[0] or y1 > self.size[1]) and update_image: - self._size = max(x1, self.size[0]), max(y1, self.size[1]) - Image._decompression_bomb_check(self._size) - frame_dispose_extent = x0, y0, x1, y1 - flags = s[8] - - interlace = (flags & 64) != 0 - - if flags & 128: - bits = (flags & 7) + 1 - p = self.fp.read(3 << bits) - if self._is_palette_needed(p): - palette = ImagePalette.raw("RGB", p) - else: - palette = False - - # image data - bits = self.fp.read(1)[0] - self.__offset = self.fp.tell() - break - - else: - pass - # raise OSError, "illegal GIF tag `%x`" % s[0] - s = None - - if interlace is None: - # self._fp = None - raise EOFError - - self.__frame = frame - if not update_image: - return - - self.tile = [] - - if self.dispose: - self.im.paste(self.dispose, self.dispose_extent) - - self._frame_palette = palette if palette is not None else self.global_palette - self._frame_transparency = frame_transparency - if frame == 0: - if self._frame_palette: - if LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS: - self.mode = "RGBA" if frame_transparency is not None else "RGB" - else: - self.mode = "P" - else: - self.mode = "L" - - if not palette and self.global_palette: - from copy import copy - - palette = copy(self.global_palette) - self.palette = palette - else: - if self.mode == "P": - if ( - LOADING_STRATEGY != LoadingStrategy.RGB_AFTER_DIFFERENT_PALETTE_ONLY - or palette - ): - self.pyaccess = None - if "transparency" in self.info: - self.im.putpalettealpha(self.info["transparency"], 0) - self.im = self.im.convert("RGBA", Image.Dither.FLOYDSTEINBERG) - self.mode = "RGBA" - del self.info["transparency"] - else: - self.mode = "RGB" - self.im = self.im.convert("RGB", Image.Dither.FLOYDSTEINBERG) - - def _rgb(color): - if self._frame_palette: - color = tuple(self._frame_palette.palette[color * 3 : color * 3 + 3]) - else: - color = (color, color, color) - return color - - self.dispose_extent = frame_dispose_extent - try: - if self.disposal_method < 2: - # do not dispose or none specified - self.dispose = None - elif self.disposal_method == 2: - # replace with background colour - - # only dispose the extent in this frame - x0, y0, x1, y1 = self.dispose_extent - dispose_size = (x1 - x0, y1 - y0) - - Image._decompression_bomb_check(dispose_size) - - # by convention, attempt to use transparency first - dispose_mode = "P" - color = self.info.get("transparency", frame_transparency) - if color is not None: - if self.mode in ("RGB", "RGBA"): - dispose_mode = "RGBA" - color = _rgb(color) + (0,) - else: - color = self.info.get("background", 0) - if self.mode in ("RGB", "RGBA"): - dispose_mode = 
"RGB" - color = _rgb(color) - self.dispose = Image.core.fill(dispose_mode, dispose_size, color) - else: - # replace with previous contents - if self.im is not None: - # only dispose the extent in this frame - self.dispose = self._crop(self.im, self.dispose_extent) - elif frame_transparency is not None: - x0, y0, x1, y1 = self.dispose_extent - dispose_size = (x1 - x0, y1 - y0) - - Image._decompression_bomb_check(dispose_size) - dispose_mode = "P" - color = frame_transparency - if self.mode in ("RGB", "RGBA"): - dispose_mode = "RGBA" - color = _rgb(frame_transparency) + (0,) - self.dispose = Image.core.fill(dispose_mode, dispose_size, color) - except AttributeError: - pass - - if interlace is not None: - transparency = -1 - if frame_transparency is not None: - if frame == 0: - if LOADING_STRATEGY != LoadingStrategy.RGB_ALWAYS: - self.info["transparency"] = frame_transparency - elif self.mode not in ("RGB", "RGBA"): - transparency = frame_transparency - self.tile = [ - ( - "gif", - (x0, y0, x1, y1), - self.__offset, - (bits, interlace, transparency), - ) - ] - - if info.get("comment"): - self.info["comment"] = info["comment"] - for k in ["duration", "extension"]: - if k in info: - self.info[k] = info[k] - elif k in self.info: - del self.info[k] - - def load_prepare(self): - temp_mode = "P" if self._frame_palette else "L" - self._prev_im = None - if self.__frame == 0: - if self._frame_transparency is not None: - self.im = Image.core.fill( - temp_mode, self.size, self._frame_transparency - ) - elif self.mode in ("RGB", "RGBA"): - self._prev_im = self.im - if self._frame_palette: - self.im = Image.core.fill("P", self.size, self._frame_transparency or 0) - self.im.putpalette(*self._frame_palette.getdata()) - else: - self.im = None - self.mode = temp_mode - self._frame_palette = None - - super().load_prepare() - - def load_end(self): - if self.__frame == 0: - if self.mode == "P" and LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS: - if self._frame_transparency is not None: - self.im.putpalettealpha(self._frame_transparency, 0) - self.mode = "RGBA" - else: - self.mode = "RGB" - self.im = self.im.convert(self.mode, Image.Dither.FLOYDSTEINBERG) - return - if not self._prev_im: - return - if self._frame_transparency is not None: - self.im.putpalettealpha(self._frame_transparency, 0) - frame_im = self.im.convert("RGBA") - else: - frame_im = self.im.convert("RGB") - frame_im = self._crop(frame_im, self.dispose_extent) - - self.im = self._prev_im - self.mode = self.im.mode - if frame_im.mode == "RGBA": - self.im.paste(frame_im, self.dispose_extent, frame_im) - else: - self.im.paste(frame_im, self.dispose_extent) - - def tell(self): - return self.__frame - - -# -------------------------------------------------------------------- -# Write GIF files - - -RAWMODE = {"1": "L", "L": "L", "P": "P"} - - -def _normalize_mode(im): - """ - Takes an image (or frame), returns an image in a mode that is appropriate - for saving in a Gif. - - It may return the original image, or it may return an image converted to - palette or 'L' mode. - - :param im: Image object - :returns: Image object - """ - if im.mode in RAWMODE: - im.load() - return im - if Image.getmodebase(im.mode) == "RGB": - im = im.convert("P", palette=Image.Palette.ADAPTIVE) - if im.palette.mode == "RGBA": - for rgba in im.palette.colors: - if rgba[3] == 0: - im.info["transparency"] = im.palette.colors[rgba] - break - return im - return im.convert("L") - - -def _normalize_palette(im, palette, info): - """ - Normalizes the palette for image. 
- - Sets the palette to the incoming palette, if provided. - - Ensures that there's a palette for L mode images - - Optimizes the palette if necessary/desired. - - :param im: Image object - :param palette: bytes object containing the source palette, or .... - :param info: encoderinfo - :returns: Image object - """ - source_palette = None - if palette: - # a bytes palette - if isinstance(palette, (bytes, bytearray, list)): - source_palette = bytearray(palette[:768]) - if isinstance(palette, ImagePalette.ImagePalette): - source_palette = bytearray(palette.palette) - - if im.mode == "P": - if not source_palette: - source_palette = im.im.getpalette("RGB")[:768] - else: # L-mode - if not source_palette: - source_palette = bytearray(i // 3 for i in range(768)) - im.palette = ImagePalette.ImagePalette("RGB", palette=source_palette) - - if palette: - used_palette_colors = [] - for i in range(0, len(source_palette), 3): - source_color = tuple(source_palette[i : i + 3]) - index = im.palette.colors.get(source_color) - if index in used_palette_colors: - index = None - used_palette_colors.append(index) - for i, index in enumerate(used_palette_colors): - if index is None: - for j in range(len(used_palette_colors)): - if j not in used_palette_colors: - used_palette_colors[i] = j - break - im = im.remap_palette(used_palette_colors) - else: - used_palette_colors = _get_optimize(im, info) - if used_palette_colors is not None: - return im.remap_palette(used_palette_colors, source_palette) - - im.palette.palette = source_palette - return im - - -def _write_single_frame(im, fp, palette): - im_out = _normalize_mode(im) - for k, v in im_out.info.items(): - im.encoderinfo.setdefault(k, v) - im_out = _normalize_palette(im_out, palette, im.encoderinfo) - - for s in _get_global_header(im_out, im.encoderinfo): - fp.write(s) - - # local image header - flags = 0 - if get_interlace(im): - flags = flags | 64 - _write_local_header(fp, im, (0, 0), flags) - - im_out.encoderconfig = (8, get_interlace(im)) - ImageFile._save(im_out, fp, [("gif", (0, 0) + im.size, 0, RAWMODE[im_out.mode])]) - - fp.write(b"\0") # end of image data - - -def _getbbox(base_im, im_frame): - if _get_palette_bytes(im_frame) == _get_palette_bytes(base_im): - delta = ImageChops.subtract_modulo(im_frame, base_im) - else: - delta = ImageChops.subtract_modulo( - im_frame.convert("RGBA"), base_im.convert("RGBA") - ) - return delta.getbbox(alpha_only=False) - - -def _write_multiple_frames(im, fp, palette): - duration = im.encoderinfo.get("duration") - disposal = im.encoderinfo.get("disposal", im.info.get("disposal")) - - im_frames = [] - frame_count = 0 - background_im = None - for imSequence in itertools.chain([im], im.encoderinfo.get("append_images", [])): - for im_frame in ImageSequence.Iterator(imSequence): - # a copy is required here since seek can still mutate the image - im_frame = _normalize_mode(im_frame.copy()) - if frame_count == 0: - for k, v in im_frame.info.items(): - if k == "transparency": - continue - im.encoderinfo.setdefault(k, v) - - encoderinfo = im.encoderinfo.copy() - im_frame = _normalize_palette(im_frame, palette, encoderinfo) - if "transparency" in im_frame.info: - encoderinfo.setdefault("transparency", im_frame.info["transparency"]) - if isinstance(duration, (list, tuple)): - encoderinfo["duration"] = duration[frame_count] - elif duration is None and "duration" in im_frame.info: - encoderinfo["duration"] = im_frame.info["duration"] - if isinstance(disposal, (list, tuple)): - encoderinfo["disposal"] = disposal[frame_count] - 
frame_count += 1 - - if im_frames: - # delta frame - previous = im_frames[-1] - bbox = _getbbox(previous["im"], im_frame) - if not bbox: - # This frame is identical to the previous frame - if encoderinfo.get("duration"): - previous["encoderinfo"]["duration"] += encoderinfo["duration"] - continue - if encoderinfo.get("disposal") == 2: - if background_im is None: - color = im.encoderinfo.get( - "transparency", im.info.get("transparency", (0, 0, 0)) - ) - background = _get_background(im_frame, color) - background_im = Image.new("P", im_frame.size, background) - background_im.putpalette(im_frames[0]["im"].palette) - bbox = _getbbox(background_im, im_frame) - else: - bbox = None - im_frames.append({"im": im_frame, "bbox": bbox, "encoderinfo": encoderinfo}) - - if len(im_frames) > 1: - for frame_data in im_frames: - im_frame = frame_data["im"] - if not frame_data["bbox"]: - # global header - for s in _get_global_header(im_frame, frame_data["encoderinfo"]): - fp.write(s) - offset = (0, 0) - else: - # compress difference - if not palette: - frame_data["encoderinfo"]["include_color_table"] = True - - im_frame = im_frame.crop(frame_data["bbox"]) - offset = frame_data["bbox"][:2] - _write_frame_data(fp, im_frame, offset, frame_data["encoderinfo"]) - return True - elif "duration" in im.encoderinfo and isinstance( - im.encoderinfo["duration"], (list, tuple) - ): - # Since multiple frames will not be written, add together the frame durations - im.encoderinfo["duration"] = sum(im.encoderinfo["duration"]) - - -def _save_all(im, fp, filename): - _save(im, fp, filename, save_all=True) - - -def _save(im, fp, filename, save_all=False): - # header - if "palette" in im.encoderinfo or "palette" in im.info: - palette = im.encoderinfo.get("palette", im.info.get("palette")) - else: - palette = None - im.encoderinfo["optimize"] = im.encoderinfo.get("optimize", True) - - if not save_all or not _write_multiple_frames(im, fp, palette): - _write_single_frame(im, fp, palette) - - fp.write(b";") # end of file - - if hasattr(fp, "flush"): - fp.flush() - - -def get_interlace(im): - interlace = im.encoderinfo.get("interlace", 1) - - # workaround for @PIL153 - if min(im.size) < 16: - interlace = 0 - - return interlace - - -def _write_local_header(fp, im, offset, flags): - transparent_color_exists = False - try: - if "transparency" in im.encoderinfo: - transparency = im.encoderinfo["transparency"] - else: - transparency = im.info["transparency"] - transparency = int(transparency) - except (KeyError, ValueError): - pass - else: - # optimize the block away if transparent color is not used - transparent_color_exists = True - - used_palette_colors = _get_optimize(im, im.encoderinfo) - if used_palette_colors is not None: - # adjust the transparency index after optimize - try: - transparency = used_palette_colors.index(transparency) - except ValueError: - transparent_color_exists = False - - if "duration" in im.encoderinfo: - duration = int(im.encoderinfo["duration"] / 10) - else: - duration = 0 - - disposal = int(im.encoderinfo.get("disposal", 0)) - - if transparent_color_exists or duration != 0 or disposal: - packed_flag = 1 if transparent_color_exists else 0 - packed_flag |= disposal << 2 - if not transparent_color_exists: - transparency = 0 - - fp.write( - b"!" 
- + o8(249) # extension intro - + o8(4) # length - + o8(packed_flag) # packed fields - + o16(duration) # duration - + o8(transparency) # transparency index - + o8(0) - ) - - include_color_table = im.encoderinfo.get("include_color_table") - if include_color_table: - palette_bytes = _get_palette_bytes(im) - color_table_size = _get_color_table_size(palette_bytes) - if color_table_size: - flags = flags | 128 # local color table flag - flags = flags | color_table_size - - fp.write( - b"," - + o16(offset[0]) # offset - + o16(offset[1]) - + o16(im.size[0]) # size - + o16(im.size[1]) - + o8(flags) # flags - ) - if include_color_table and color_table_size: - fp.write(_get_header_palette(palette_bytes)) - fp.write(o8(8)) # bits - - -def _save_netpbm(im, fp, filename): - # Unused by default. - # To use, uncomment the register_save call at the end of the file. - # - # If you need real GIF compression and/or RGB quantization, you - # can use the external NETPBM/PBMPLUS utilities. See comments - # below for information on how to enable this. - tempfile = im._dump() - - try: - with open(filename, "wb") as f: - if im.mode != "RGB": - subprocess.check_call( - ["ppmtogif", tempfile], stdout=f, stderr=subprocess.DEVNULL - ) - else: - # Pipe ppmquant output into ppmtogif - # "ppmquant 256 %s | ppmtogif > %s" % (tempfile, filename) - quant_cmd = ["ppmquant", "256", tempfile] - togif_cmd = ["ppmtogif"] - quant_proc = subprocess.Popen( - quant_cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL - ) - togif_proc = subprocess.Popen( - togif_cmd, - stdin=quant_proc.stdout, - stdout=f, - stderr=subprocess.DEVNULL, - ) - - # Allow ppmquant to receive SIGPIPE if ppmtogif exits - quant_proc.stdout.close() - - retcode = quant_proc.wait() - if retcode: - raise subprocess.CalledProcessError(retcode, quant_cmd) - - retcode = togif_proc.wait() - if retcode: - raise subprocess.CalledProcessError(retcode, togif_cmd) - finally: - try: - os.unlink(tempfile) - except OSError: - pass - - -# Force optimization so that we can test performance against -# cases where it took lots of memory and time previously. -_FORCE_OPTIMIZE = False - - -def _get_optimize(im, info): - """ - Palette optimization is a potentially expensive operation. - - This function determines if the palette should be optimized using - some heuristics, then returns the list of palette entries in use. - - :param im: Image object - :param info: encoderinfo - :returns: list of indexes of palette entries in use, or None - """ - if im.mode in ("P", "L") and info and info.get("optimize", 0): - # Potentially expensive operation. - - # The palette saves 3 bytes per color not used, but palette - # lengths are restricted to 3*(2**N) bytes. Max saving would - # be 768 -> 6 bytes if we went all the way down to 2 colors. - # * If we're over 128 colors, we can't save any space. - # * If there aren't any holes, it's not worth collapsing. - # * If we have a 'large' image, the palette is in the noise. 
- - # create the new palette if not every color is used - optimise = _FORCE_OPTIMIZE or im.mode == "L" - if optimise or im.width * im.height < 512 * 512: - # check which colors are used - used_palette_colors = [] - for i, count in enumerate(im.histogram()): - if count: - used_palette_colors.append(i) - - if optimise or max(used_palette_colors) >= len(used_palette_colors): - return used_palette_colors - - num_palette_colors = len(im.palette.palette) // Image.getmodebands( - im.palette.mode - ) - current_palette_size = 1 << (num_palette_colors - 1).bit_length() - if ( - # check that the palette would become smaller when saved - len(used_palette_colors) <= current_palette_size // 2 - # check that the palette is not already the smallest possible size - and current_palette_size > 2 - ): - return used_palette_colors - - -def _get_color_table_size(palette_bytes): - # calculate the palette size for the header - if not palette_bytes: - return 0 - elif len(palette_bytes) < 9: - return 1 - else: - return math.ceil(math.log(len(palette_bytes) // 3, 2)) - 1 - - -def _get_header_palette(palette_bytes): - """ - Returns the palette, null padded to the next power of 2 (*3) bytes - suitable for direct inclusion in the GIF header - - :param palette_bytes: Unpadded palette bytes, in RGBRGB form - :returns: Null padded palette - """ - color_table_size = _get_color_table_size(palette_bytes) - - # add the missing amount of bytes - # the palette has to be 2< 0: - palette_bytes += o8(0) * 3 * actual_target_size_diff - return palette_bytes - - -def _get_palette_bytes(im): - """ - Gets the palette for inclusion in the gif header - - :param im: Image object - :returns: Bytes, len<=768 suitable for inclusion in gif header - """ - return im.palette.palette if im.palette else b"" - - -def _get_background(im, info_background): - background = 0 - if info_background: - if isinstance(info_background, tuple): - # WebPImagePlugin stores an RGBA value in info["background"] - # So it must be converted to the same format as GifImagePlugin's - # info["background"] - a global color table index - try: - background = im.palette.getcolor(info_background, im) - except ValueError as e: - if str(e) not in ( - # If all 256 colors are in use, - # then there is no need for the background color - "cannot allocate more than 256 colors", - # Ignore non-opaque WebP background - "cannot add non-opaque RGBA color to RGB palette", - ): - raise - else: - background = info_background - return background - - -def _get_global_header(im, info): - """Return a list of strings representing a GIF header""" - - # Header Block - # https://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp - - version = b"87a" - if im.info.get("version") == b"89a" or ( - info - and ( - "transparency" in info - or "loop" in info - or info.get("duration") - or info.get("comment") - ) - ): - version = b"89a" - - background = _get_background(im, info.get("background")) - - palette_bytes = _get_palette_bytes(im) - color_table_size = _get_color_table_size(palette_bytes) - - header = [ - b"GIF" # signature - + version # version - + o16(im.size[0]) # canvas width - + o16(im.size[1]), # canvas height - # Logical Screen Descriptor - # size of global color table + global color table flag - o8(color_table_size + 128), # packed fields - # background + reserved/aspect - o8(background) + o8(0), - # Global Color Table - _get_header_palette(palette_bytes), - ] - if "loop" in info: - header.append( - b"!" 
- + o8(255) # extension intro - + o8(11) - + b"NETSCAPE2.0" - + o8(3) - + o8(1) - + o16(info["loop"]) # number of loops - + o8(0) - ) - if info.get("comment"): - comment_block = b"!" + o8(254) # extension intro - - comment = info["comment"] - if isinstance(comment, str): - comment = comment.encode() - for i in range(0, len(comment), 255): - subblock = comment[i : i + 255] - comment_block += o8(len(subblock)) + subblock - - comment_block += o8(0) - header.append(comment_block) - return header - - -def _write_frame_data(fp, im_frame, offset, params): - try: - im_frame.encoderinfo = params - - # local image header - _write_local_header(fp, im_frame, offset, 0) - - ImageFile._save( - im_frame, fp, [("gif", (0, 0) + im_frame.size, 0, RAWMODE[im_frame.mode])] - ) - - fp.write(b"\0") # end of image data - finally: - del im_frame.encoderinfo - - -# -------------------------------------------------------------------- -# Legacy GIF utilities - - -def getheader(im, palette=None, info=None): - """ - Legacy Method to get Gif data from image. - - Warning:: May modify image data. - - :param im: Image object - :param palette: bytes object containing the source palette, or .... - :param info: encoderinfo - :returns: tuple of(list of header items, optimized palette) - - """ - used_palette_colors = _get_optimize(im, info) - - if info is None: - info = {} - - if "background" not in info and "background" in im.info: - info["background"] = im.info["background"] - - im_mod = _normalize_palette(im, palette, info) - im.palette = im_mod.palette - im.im = im_mod.im - header = _get_global_header(im, info) - - return header, used_palette_colors - - -def getdata(im, offset=(0, 0), **params): - """ - Legacy Method - - Return a list of strings representing this image. - The first string is a local image header, the rest contains - encoded image data. - - To specify duration, add the time in milliseconds, - e.g. ``getdata(im_frame, duration=1000)`` - - :param im: Image object - :param offset: Tuple of (x, y) pixels. Defaults to (0, 0) - :param \\**params: e.g. 
duration or other encoder info parameters - :returns: List of bytes containing GIF encoded frame data - - """ - - class Collector: - data = [] - - def write(self, data): - self.data.append(data) - - im.load() # make sure raster data is available - - fp = Collector() - - _write_frame_data(fp, im, offset, params) - - return fp.data - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(GifImageFile.format, GifImageFile, _accept) -Image.register_save(GifImageFile.format, _save) -Image.register_save_all(GifImageFile.format, _save_all) -Image.register_extension(GifImageFile.format, ".gif") -Image.register_mime(GifImageFile.format, "image/gif") - -# -# Uncomment the following line if you wish to use NETPBM/PBMPLUS -# instead of the built-in "uncompressed" GIF encoder - -# Image.register_save(GifImageFile.format, _save_netpbm) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/background.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/background.py deleted file mode 100644 index dd3bbe249130348881331aea569ce3ec3f295128..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/background.py +++ /dev/null @@ -1 +0,0 @@ -from starlette.background import BackgroundTasks as BackgroundTasks # noqa diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/merge/layout.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/merge/layout.py deleted file mode 100644 index 6b85cd503387291f326e937b36a5739b1de23ef1..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/merge/layout.py +++ /dev/null @@ -1,530 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. -# -# Google Author(s): Behdad Esfahbod, Roozbeh Pournader - -from fontTools import ttLib -from fontTools.ttLib.tables.DefaultTable import DefaultTable -from fontTools.ttLib.tables import otTables -from fontTools.merge.base import add_method, mergeObjects -from fontTools.merge.util import * -import logging - - -log = logging.getLogger("fontTools.merge") - - -def mergeLookupLists(lst): - # TODO Do smarter merge. 
- return sumLists(lst) - - -def mergeFeatures(lst): - assert lst - self = otTables.Feature() - self.FeatureParams = None - self.LookupListIndex = mergeLookupLists( - [l.LookupListIndex for l in lst if l.LookupListIndex] - ) - self.LookupCount = len(self.LookupListIndex) - return self - - -def mergeFeatureLists(lst): - d = {} - for l in lst: - for f in l: - tag = f.FeatureTag - if tag not in d: - d[tag] = [] - d[tag].append(f.Feature) - ret = [] - for tag in sorted(d.keys()): - rec = otTables.FeatureRecord() - rec.FeatureTag = tag - rec.Feature = mergeFeatures(d[tag]) - ret.append(rec) - return ret - - -def mergeLangSyses(lst): - assert lst - - # TODO Support merging ReqFeatureIndex - assert all(l.ReqFeatureIndex == 0xFFFF for l in lst) - - self = otTables.LangSys() - self.LookupOrder = None - self.ReqFeatureIndex = 0xFFFF - self.FeatureIndex = mergeFeatureLists( - [l.FeatureIndex for l in lst if l.FeatureIndex] - ) - self.FeatureCount = len(self.FeatureIndex) - return self - - -def mergeScripts(lst): - assert lst - - if len(lst) == 1: - return lst[0] - langSyses = {} - for sr in lst: - for lsr in sr.LangSysRecord: - if lsr.LangSysTag not in langSyses: - langSyses[lsr.LangSysTag] = [] - langSyses[lsr.LangSysTag].append(lsr.LangSys) - lsrecords = [] - for tag, langSys_list in sorted(langSyses.items()): - lsr = otTables.LangSysRecord() - lsr.LangSys = mergeLangSyses(langSys_list) - lsr.LangSysTag = tag - lsrecords.append(lsr) - - self = otTables.Script() - self.LangSysRecord = lsrecords - self.LangSysCount = len(lsrecords) - dfltLangSyses = [s.DefaultLangSys for s in lst if s.DefaultLangSys] - if dfltLangSyses: - self.DefaultLangSys = mergeLangSyses(dfltLangSyses) - else: - self.DefaultLangSys = None - return self - - -def mergeScriptRecords(lst): - d = {} - for l in lst: - for s in l: - tag = s.ScriptTag - if tag not in d: - d[tag] = [] - d[tag].append(s.Script) - ret = [] - for tag in sorted(d.keys()): - rec = otTables.ScriptRecord() - rec.ScriptTag = tag - rec.Script = mergeScripts(d[tag]) - ret.append(rec) - return ret - - -otTables.ScriptList.mergeMap = { - "ScriptCount": lambda lst: None, # TODO - "ScriptRecord": mergeScriptRecords, -} -otTables.BaseScriptList.mergeMap = { - "BaseScriptCount": lambda lst: None, # TODO - # TODO: Merge duplicate entries - "BaseScriptRecord": lambda lst: sorted( - sumLists(lst), key=lambda s: s.BaseScriptTag - ), -} - -otTables.FeatureList.mergeMap = { - "FeatureCount": sum, - "FeatureRecord": lambda lst: sorted(sumLists(lst), key=lambda s: s.FeatureTag), -} - -otTables.LookupList.mergeMap = { - "LookupCount": sum, - "Lookup": sumLists, -} - -otTables.Coverage.mergeMap = { - "Format": min, - "glyphs": sumLists, -} - -otTables.ClassDef.mergeMap = { - "Format": min, - "classDefs": sumDicts, -} - -otTables.LigCaretList.mergeMap = { - "Coverage": mergeObjects, - "LigGlyphCount": sum, - "LigGlyph": sumLists, -} - -otTables.AttachList.mergeMap = { - "Coverage": mergeObjects, - "GlyphCount": sum, - "AttachPoint": sumLists, -} - -# XXX Renumber MarkFilterSets of lookups -otTables.MarkGlyphSetsDef.mergeMap = { - "MarkSetTableFormat": equal, - "MarkSetCount": sum, - "Coverage": sumLists, -} - -otTables.Axis.mergeMap = { - "*": mergeObjects, -} - -# XXX Fix BASE table merging -otTables.BaseTagList.mergeMap = { - "BaseTagCount": sum, - "BaselineTag": sumLists, -} - -otTables.GDEF.mergeMap = ( - otTables.GSUB.mergeMap -) = ( - otTables.GPOS.mergeMap -) = otTables.BASE.mergeMap = otTables.JSTF.mergeMap = otTables.MATH.mergeMap = { - "*": mergeObjects, - "Version": max, 
-} - -ttLib.getTableClass("GDEF").mergeMap = ttLib.getTableClass( - "GSUB" -).mergeMap = ttLib.getTableClass("GPOS").mergeMap = ttLib.getTableClass( - "BASE" -).mergeMap = ttLib.getTableClass( - "JSTF" -).mergeMap = ttLib.getTableClass( - "MATH" -).mergeMap = { - "tableTag": onlyExisting(equal), # XXX clean me up - "table": mergeObjects, -} - - -@add_method(ttLib.getTableClass("GSUB")) -def merge(self, m, tables): - assert len(tables) == len(m.duplicateGlyphsPerFont) - for i, (table, dups) in enumerate(zip(tables, m.duplicateGlyphsPerFont)): - if not dups: - continue - if table is None or table is NotImplemented: - log.warning( - "Have non-identical duplicates to resolve for '%s' but no GSUB. Are duplicates intended?: %s", - m.fonts[i]._merger__name, - dups, - ) - continue - - synthFeature = None - synthLookup = None - for script in table.table.ScriptList.ScriptRecord: - if script.ScriptTag == "DFLT": - continue # XXX - for langsys in [script.Script.DefaultLangSys] + [ - l.LangSys for l in script.Script.LangSysRecord - ]: - if langsys is None: - continue # XXX Create! - feature = [v for v in langsys.FeatureIndex if v.FeatureTag == "locl"] - assert len(feature) <= 1 - if feature: - feature = feature[0] - else: - if not synthFeature: - synthFeature = otTables.FeatureRecord() - synthFeature.FeatureTag = "locl" - f = synthFeature.Feature = otTables.Feature() - f.FeatureParams = None - f.LookupCount = 0 - f.LookupListIndex = [] - table.table.FeatureList.FeatureRecord.append(synthFeature) - table.table.FeatureList.FeatureCount += 1 - feature = synthFeature - langsys.FeatureIndex.append(feature) - langsys.FeatureIndex.sort(key=lambda v: v.FeatureTag) - - if not synthLookup: - subtable = otTables.SingleSubst() - subtable.mapping = dups - synthLookup = otTables.Lookup() - synthLookup.LookupFlag = 0 - synthLookup.LookupType = 1 - synthLookup.SubTableCount = 1 - synthLookup.SubTable = [subtable] - if table.table.LookupList is None: - # mtiLib uses None as default value for LookupList, - # while feaLib points to an empty array with count 0 - # TODO: make them do the same - table.table.LookupList = otTables.LookupList() - table.table.LookupList.Lookup = [] - table.table.LookupList.LookupCount = 0 - table.table.LookupList.Lookup.append(synthLookup) - table.table.LookupList.LookupCount += 1 - - if feature.Feature.LookupListIndex[:1] != [synthLookup]: - feature.Feature.LookupListIndex[:0] = [synthLookup] - feature.Feature.LookupCount += 1 - - DefaultTable.merge(self, m, tables) - return self - - -@add_method( - otTables.SingleSubst, - otTables.MultipleSubst, - otTables.AlternateSubst, - otTables.LigatureSubst, - otTables.ReverseChainSingleSubst, - otTables.SinglePos, - otTables.PairPos, - otTables.CursivePos, - otTables.MarkBasePos, - otTables.MarkLigPos, - otTables.MarkMarkPos, -) -def mapLookups(self, lookupMap): - pass - - -# Copied and trimmed down from subset.py -@add_method( - otTables.ContextSubst, - otTables.ChainContextSubst, - otTables.ContextPos, - otTables.ChainContextPos, -) -def __merge_classify_context(self): - class ContextHelper(object): - def __init__(self, klass, Format): - if klass.__name__.endswith("Subst"): - Typ = "Sub" - Type = "Subst" - else: - Typ = "Pos" - Type = "Pos" - if klass.__name__.startswith("Chain"): - Chain = "Chain" - else: - Chain = "" - ChainTyp = Chain + Typ - - self.Typ = Typ - self.Type = Type - self.Chain = Chain - self.ChainTyp = ChainTyp - - self.LookupRecord = Type + "LookupRecord" - - if Format == 1: - self.Rule = ChainTyp + "Rule" - self.RuleSet = ChainTyp 
+ "RuleSet" - elif Format == 2: - self.Rule = ChainTyp + "ClassRule" - self.RuleSet = ChainTyp + "ClassSet" - - if self.Format not in [1, 2, 3]: - return None # Don't shoot the messenger; let it go - if not hasattr(self.__class__, "_merge__ContextHelpers"): - self.__class__._merge__ContextHelpers = {} - if self.Format not in self.__class__._merge__ContextHelpers: - helper = ContextHelper(self.__class__, self.Format) - self.__class__._merge__ContextHelpers[self.Format] = helper - return self.__class__._merge__ContextHelpers[self.Format] - - -@add_method( - otTables.ContextSubst, - otTables.ChainContextSubst, - otTables.ContextPos, - otTables.ChainContextPos, -) -def mapLookups(self, lookupMap): - c = self.__merge_classify_context() - - if self.Format in [1, 2]: - for rs in getattr(self, c.RuleSet): - if not rs: - continue - for r in getattr(rs, c.Rule): - if not r: - continue - for ll in getattr(r, c.LookupRecord): - if not ll: - continue - ll.LookupListIndex = lookupMap[ll.LookupListIndex] - elif self.Format == 3: - for ll in getattr(self, c.LookupRecord): - if not ll: - continue - ll.LookupListIndex = lookupMap[ll.LookupListIndex] - else: - assert 0, "unknown format: %s" % self.Format - - -@add_method(otTables.ExtensionSubst, otTables.ExtensionPos) -def mapLookups(self, lookupMap): - if self.Format == 1: - self.ExtSubTable.mapLookups(lookupMap) - else: - assert 0, "unknown format: %s" % self.Format - - -@add_method(otTables.Lookup) -def mapLookups(self, lookupMap): - for st in self.SubTable: - if not st: - continue - st.mapLookups(lookupMap) - - -@add_method(otTables.LookupList) -def mapLookups(self, lookupMap): - for l in self.Lookup: - if not l: - continue - l.mapLookups(lookupMap) - - -@add_method(otTables.Lookup) -def mapMarkFilteringSets(self, markFilteringSetMap): - if self.LookupFlag & 0x0010: - self.MarkFilteringSet = markFilteringSetMap[self.MarkFilteringSet] - - -@add_method(otTables.LookupList) -def mapMarkFilteringSets(self, markFilteringSetMap): - for l in self.Lookup: - if not l: - continue - l.mapMarkFilteringSets(markFilteringSetMap) - - -@add_method(otTables.Feature) -def mapLookups(self, lookupMap): - self.LookupListIndex = [lookupMap[i] for i in self.LookupListIndex] - - -@add_method(otTables.FeatureList) -def mapLookups(self, lookupMap): - for f in self.FeatureRecord: - if not f or not f.Feature: - continue - f.Feature.mapLookups(lookupMap) - - -@add_method(otTables.DefaultLangSys, otTables.LangSys) -def mapFeatures(self, featureMap): - self.FeatureIndex = [featureMap[i] for i in self.FeatureIndex] - if self.ReqFeatureIndex != 65535: - self.ReqFeatureIndex = featureMap[self.ReqFeatureIndex] - - -@add_method(otTables.Script) -def mapFeatures(self, featureMap): - if self.DefaultLangSys: - self.DefaultLangSys.mapFeatures(featureMap) - for l in self.LangSysRecord: - if not l or not l.LangSys: - continue - l.LangSys.mapFeatures(featureMap) - - -@add_method(otTables.ScriptList) -def mapFeatures(self, featureMap): - for s in self.ScriptRecord: - if not s or not s.Script: - continue - s.Script.mapFeatures(featureMap) - - -def layoutPreMerge(font): - # Map indices to references - - GDEF = font.get("GDEF") - GSUB = font.get("GSUB") - GPOS = font.get("GPOS") - - for t in [GSUB, GPOS]: - if not t: - continue - - if t.table.LookupList: - lookupMap = {i: v for i, v in enumerate(t.table.LookupList.Lookup)} - t.table.LookupList.mapLookups(lookupMap) - t.table.FeatureList.mapLookups(lookupMap) - - if ( - GDEF - and GDEF.table.Version >= 0x00010002 - and GDEF.table.MarkGlyphSetsDef - ): 
- markFilteringSetMap = { - i: v for i, v in enumerate(GDEF.table.MarkGlyphSetsDef.Coverage) - } - t.table.LookupList.mapMarkFilteringSets(markFilteringSetMap) - - if t.table.FeatureList and t.table.ScriptList: - featureMap = {i: v for i, v in enumerate(t.table.FeatureList.FeatureRecord)} - t.table.ScriptList.mapFeatures(featureMap) - - # TODO FeatureParams nameIDs - - -def layoutPostMerge(font): - # Map references back to indices - - GDEF = font.get("GDEF") - GSUB = font.get("GSUB") - GPOS = font.get("GPOS") - - for t in [GSUB, GPOS]: - if not t: - continue - - if t.table.FeatureList and t.table.ScriptList: - # Collect unregistered (new) features. - featureMap = GregariousIdentityDict(t.table.FeatureList.FeatureRecord) - t.table.ScriptList.mapFeatures(featureMap) - - # Record used features. - featureMap = AttendanceRecordingIdentityDict( - t.table.FeatureList.FeatureRecord - ) - t.table.ScriptList.mapFeatures(featureMap) - usedIndices = featureMap.s - - # Remove unused features - t.table.FeatureList.FeatureRecord = [ - f - for i, f in enumerate(t.table.FeatureList.FeatureRecord) - if i in usedIndices - ] - - # Map back to indices. - featureMap = NonhashableDict(t.table.FeatureList.FeatureRecord) - t.table.ScriptList.mapFeatures(featureMap) - - t.table.FeatureList.FeatureCount = len(t.table.FeatureList.FeatureRecord) - - if t.table.LookupList: - # Collect unregistered (new) lookups. - lookupMap = GregariousIdentityDict(t.table.LookupList.Lookup) - t.table.FeatureList.mapLookups(lookupMap) - t.table.LookupList.mapLookups(lookupMap) - - # Record used lookups. - lookupMap = AttendanceRecordingIdentityDict(t.table.LookupList.Lookup) - t.table.FeatureList.mapLookups(lookupMap) - t.table.LookupList.mapLookups(lookupMap) - usedIndices = lookupMap.s - - # Remove unused lookups - t.table.LookupList.Lookup = [ - l for i, l in enumerate(t.table.LookupList.Lookup) if i in usedIndices - ] - - # Map back to indices. - lookupMap = NonhashableDict(t.table.LookupList.Lookup) - t.table.FeatureList.mapLookups(lookupMap) - t.table.LookupList.mapLookups(lookupMap) - - t.table.LookupList.LookupCount = len(t.table.LookupList.Lookup) - - if GDEF and GDEF.table.Version >= 0x00010002: - markFilteringSetMap = NonhashableDict( - GDEF.table.MarkGlyphSetsDef.Coverage - ) - t.table.LookupList.mapMarkFilteringSets(markFilteringSetMap) - - # TODO FeatureParams nameIDs diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/encodingTools.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/encodingTools.py deleted file mode 100644 index 3b2651d3b1ce222060fa67abaeac4da8030618fa..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/encodingTools.py +++ /dev/null @@ -1,72 +0,0 @@ -"""fontTools.misc.encodingTools.py -- tools for working with OpenType encodings. 
-""" - -import fontTools.encodings.codecs - -# Map keyed by platformID, then platEncID, then possibly langID -_encodingMap = { - 0: { # Unicode - 0: "utf_16_be", - 1: "utf_16_be", - 2: "utf_16_be", - 3: "utf_16_be", - 4: "utf_16_be", - 5: "utf_16_be", - 6: "utf_16_be", - }, - 1: { # Macintosh - # See - # https://github.com/fonttools/fonttools/issues/236 - 0: { # Macintosh, platEncID==0, keyed by langID - 15: "mac_iceland", - 17: "mac_turkish", - 18: "mac_croatian", - 24: "mac_latin2", - 25: "mac_latin2", - 26: "mac_latin2", - 27: "mac_latin2", - 28: "mac_latin2", - 36: "mac_latin2", - 37: "mac_romanian", - 38: "mac_latin2", - 39: "mac_latin2", - 40: "mac_latin2", - Ellipsis: "mac_roman", # Other - }, - 1: "x_mac_japanese_ttx", - 2: "x_mac_trad_chinese_ttx", - 3: "x_mac_korean_ttx", - 6: "mac_greek", - 7: "mac_cyrillic", - 25: "x_mac_simp_chinese_ttx", - 29: "mac_latin2", - 35: "mac_turkish", - 37: "mac_iceland", - }, - 2: { # ISO - 0: "ascii", - 1: "utf_16_be", - 2: "latin1", - }, - 3: { # Microsoft - 0: "utf_16_be", - 1: "utf_16_be", - 2: "shift_jis", - 3: "gb2312", - 4: "big5", - 5: "euc_kr", - 6: "johab", - 10: "utf_16_be", - }, -} - - -def getEncoding(platformID, platEncID, langID, default=None): - """Returns the Python encoding name for OpenType platformID/encodingID/langID - triplet. If encoding for these values is not known, by default None is - returned. That can be overriden by passing a value to the default argument. - """ - encoding = _encodingMap.get(platformID, {}).get(platEncID, default) - if isinstance(encoding, dict): - encoding = encoding.get(langID, encoding[Ellipsis]) - return encoding diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_core.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_core.h deleted file mode 100644 index 347566d6ed549bf4a6ab051846230fe4170b9691..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_core.h +++ /dev/null @@ -1,257 +0,0 @@ -/* - * Copyright (C) 2016 foo86 - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_DCA_CORE_H -#define AVCODEC_DCA_CORE_H - -#include "libavutil/float_dsp.h" -#include "libavutil/fixed_dsp.h" -#include "libavutil/mem_internal.h" -#include "libavutil/tx.h" - -#include "avcodec.h" -#include "get_bits.h" -#include "dca.h" -#include "dca_exss.h" -#include "dcadsp.h" -#include "dcadct.h" -#include "dcamath.h" -#include "dcahuff.h" -#include "synth_filter.h" - -#define DCA_CHANNELS 7 -#define DCA_SUBBANDS 32 -#define DCA_SUBBANDS_X96 64 -#define DCA_SUBFRAMES 16 -#define DCA_SUBBAND_SAMPLES 8 -#define DCA_PCMBLOCK_SAMPLES 32 -#define DCA_LFE_HISTORY 8 -#define DCA_ABITS_MAX 26 - -#define DCA_CORE_CHANNELS_MAX 6 -#define DCA_DMIX_CHANNELS_MAX 4 -#define DCA_XXCH_CHANNELS_MAX 2 -#define DCA_EXSS_CHANNELS_MAX 8 -#define DCA_EXSS_CHSETS_MAX 4 - -#define DCA_FILTER_MODE_X96 0x01 -#define DCA_FILTER_MODE_FIXED 0x02 - -enum DCACoreAudioMode { - DCA_AMODE_MONO, // Mode 0: A (mono) - DCA_AMODE_MONO_DUAL, // Mode 1: A + B (dual mono) - DCA_AMODE_STEREO, // Mode 2: L + R (stereo) - DCA_AMODE_STEREO_SUMDIFF, // Mode 3: (L+R) + (L-R) (sum-diff) - DCA_AMODE_STEREO_TOTAL, // Mode 4: LT + RT (left and right total) - DCA_AMODE_3F, // Mode 5: C + L + R - DCA_AMODE_2F1R, // Mode 6: L + R + S - DCA_AMODE_3F1R, // Mode 7: C + L + R + S - DCA_AMODE_2F2R, // Mode 8: L + R + SL + SR - DCA_AMODE_3F2R, // Mode 9: C + L + R + SL + SR - - DCA_AMODE_COUNT -}; - -enum DCACoreExtAudioType { - DCA_EXT_AUDIO_XCH = 0, - DCA_EXT_AUDIO_X96 = 2, - DCA_EXT_AUDIO_XXCH = 6 -}; - -enum DCACoreLFEFlag { - DCA_LFE_FLAG_NONE, - DCA_LFE_FLAG_128, - DCA_LFE_FLAG_64, - DCA_LFE_FLAG_INVALID -}; - -typedef struct DCADSPData { - union { - struct { - DECLARE_ALIGNED(32, float, hist1)[1024]; - DECLARE_ALIGNED(32, float, hist2)[64]; - } flt; - struct { - DECLARE_ALIGNED(32, int32_t, hist1)[1024]; - DECLARE_ALIGNED(32, int32_t, hist2)[64]; - } fix; - } u; - int offset; -} DCADSPData; - -typedef struct DCACoreDecoder { - AVCodecContext *avctx; - GetBitContext gb; - GetBitContext gb_in; - - // Bit stream header - int crc_present; ///< CRC present flag - int npcmblocks; ///< Number of PCM sample blocks - int frame_size; ///< Primary frame byte size - int audio_mode; ///< Audio channel arrangement - int sample_rate; ///< Core audio sampling frequency - int bit_rate; ///< Transmission bit rate - int drc_present; ///< Embedded dynamic range flag - int ts_present; ///< Embedded time stamp flag - int aux_present; ///< Auxiliary data flag - int ext_audio_type; ///< Extension audio descriptor flag - int ext_audio_present; ///< Extended coding flag - int sync_ssf; ///< Audio sync word insertion flag - int lfe_present; ///< Low frequency effects flag - int predictor_history; ///< Predictor history flag switch - int filter_perfect; ///< Multirate interpolator switch - int source_pcm_res; ///< Source PCM resolution - int es_format; ///< Extended surround (ES) mastering flag - int sumdiff_front; ///< Front sum/difference flag - int sumdiff_surround; ///< Surround sum/difference flag - - // Primary audio coding header - int nsubframes; ///< Number of subframes - int nchannels; ///< Number of primary audio channels (incl. extension channels) - int ch_mask; ///< Speaker layout mask (incl. 
LFE and extension channels) - int8_t nsubbands[DCA_CHANNELS]; ///< Subband activity count - int8_t subband_vq_start[DCA_CHANNELS]; ///< High frequency VQ start subband - int8_t joint_intensity_index[DCA_CHANNELS]; ///< Joint intensity coding index - int8_t transition_mode_sel[DCA_CHANNELS]; ///< Transient mode code book - int8_t scale_factor_sel[DCA_CHANNELS]; ///< Scale factor code book - int8_t bit_allocation_sel[DCA_CHANNELS]; ///< Bit allocation quantizer select - int8_t quant_index_sel[DCA_CHANNELS][DCA_CODE_BOOKS]; ///< Quantization index codebook select - int32_t scale_factor_adj[DCA_CHANNELS][DCA_CODE_BOOKS]; ///< Scale factor adjustment - - // Primary audio coding side information - int8_t nsubsubframes[DCA_SUBFRAMES]; ///< Subsubframe count for each subframe - int8_t prediction_mode[DCA_CHANNELS][DCA_SUBBANDS_X96]; ///< Prediction mode - int16_t prediction_vq_index[DCA_CHANNELS][DCA_SUBBANDS_X96]; ///< Prediction coefficients VQ address - int8_t bit_allocation[DCA_CHANNELS][DCA_SUBBANDS_X96]; ///< Bit allocation index - int8_t transition_mode[DCA_SUBFRAMES][DCA_CHANNELS][DCA_SUBBANDS]; ///< Transition mode - int32_t scale_factors[DCA_CHANNELS][DCA_SUBBANDS][2]; ///< Scale factors (2x for transients and X96) - int8_t joint_scale_sel[DCA_CHANNELS]; ///< Joint subband codebook select - int32_t joint_scale_factors[DCA_CHANNELS][DCA_SUBBANDS_X96]; ///< Scale factors for joint subband coding - - // Auxiliary data - int prim_dmix_embedded; ///< Auxiliary dynamic downmix flag - int prim_dmix_type; ///< Auxiliary primary channel downmix type - int prim_dmix_coeff[DCA_DMIX_CHANNELS_MAX * DCA_CORE_CHANNELS_MAX]; ///< Dynamic downmix code coefficients - - // Core extensions - int ext_audio_mask; ///< Bit mask of fully decoded core extensions - - // XCH extension data - int xch_pos; ///< Bit position of XCH frame in core substream - - // XXCH extension data - int xxch_crc_present; ///< CRC presence flag for XXCH channel set header - int xxch_mask_nbits; ///< Number of bits for loudspeaker mask - int xxch_core_mask; ///< Core loudspeaker activity mask - int xxch_spkr_mask; ///< Loudspeaker layout mask - int xxch_dmix_embedded; ///< Downmix already performed by encoder - int xxch_dmix_scale_inv; ///< Downmix scale factor - int xxch_dmix_mask[DCA_XXCH_CHANNELS_MAX]; ///< Downmix channel mapping mask - int xxch_dmix_coeff[DCA_XXCH_CHANNELS_MAX * DCA_CORE_CHANNELS_MAX]; ///< Downmix coefficients - int xxch_pos; ///< Bit position of XXCH frame in core substream - - // X96 extension data - int x96_rev_no; ///< X96 revision number - int x96_crc_present; ///< CRC presence flag for X96 channel set header - int x96_nchannels; ///< Number of primary channels in X96 extension - int x96_high_res; ///< X96 high resolution flag - int x96_subband_start; ///< First encoded subband in X96 extension - int x96_rand; ///< Random seed for generating samples for unallocated X96 subbands - int x96_pos; ///< Bit position of X96 frame in core substream - - // Sample buffers - unsigned int x96_subband_size; - int32_t *x96_subband_buffer; ///< X96 subband sample buffer base - int32_t *x96_subband_samples[DCA_CHANNELS][DCA_SUBBANDS_X96]; ///< X96 subband samples - - unsigned int subband_size; - int32_t *subband_buffer; ///< Subband sample buffer base - int32_t *subband_samples[DCA_CHANNELS][DCA_SUBBANDS]; ///< Subband samples - int32_t *lfe_samples; ///< Decimated LFE samples - - // DSP contexts - DCADSPData dcadsp_data[DCA_CHANNELS]; ///< FIR history buffers - DCADSPContext *dcadsp; - DCADCTContext dcadct; - AVTXContext 
*imdct[2]; - av_tx_fn imdct_fn[2]; - SynthFilterContext synth; - AVFloatDSPContext *float_dsp; - AVFixedDSPContext *fixed_dsp; - - // PCM output data - unsigned int output_size; - void *output_buffer; ///< PCM output buffer base - int32_t *output_samples[DCA_SPEAKER_COUNT]; ///< PCM output for fixed point mode - int32_t output_history_lfe_fixed; ///< LFE PCM history for X96 filter - float output_history_lfe_float; ///< LFE PCM history for X96 filter - - int ch_remap[DCA_SPEAKER_COUNT]; ///< Channel to speaker map - int request_mask; ///< Requested channel layout (for stereo downmix) - - int npcmsamples; ///< Number of PCM samples per channel - int output_rate; ///< Output sample rate (1x or 2x header rate) - - int filter_mode; ///< Previous filtering mode for detecting changes -} DCACoreDecoder; - -static inline int ff_dca_core_map_spkr(DCACoreDecoder *core, int spkr) -{ - if (core->ch_mask & (1U << spkr)) - return spkr; - if (spkr == DCA_SPEAKER_Lss && (core->ch_mask & DCA_SPEAKER_MASK_Ls)) - return DCA_SPEAKER_Ls; - if (spkr == DCA_SPEAKER_Rss && (core->ch_mask & DCA_SPEAKER_MASK_Rs)) - return DCA_SPEAKER_Rs; - return -1; -} - -static inline void ff_dca_core_dequantize(int32_t *output, const int32_t *input, - int32_t step_size, int32_t scale, int residual, int len) -{ - // Account for quantizer step size - int64_t step_scale = (int64_t)step_size * scale; - int n, shift = 0; - - // Limit scale factor resolution to 22 bits - if (step_scale > (1 << 23)) { - shift = av_log2(step_scale >> 23) + 1; - step_scale >>= shift; - } - - // Scale the samples - if (residual) { - for (n = 0; n < len; n++) - output[n] += clip23(norm__(input[n] * step_scale, 22 - shift)); - } else { - for (n = 0; n < len; n++) - output[n] = clip23(norm__(input[n] * step_scale, 22 - shift)); - } -} - -int ff_dca_core_parse(DCACoreDecoder *s, const uint8_t *data, int size); -int ff_dca_core_parse_exss(DCACoreDecoder *s, const uint8_t *data, DCAExssAsset *asset); -int ff_dca_core_filter_fixed(DCACoreDecoder *s, int x96_synth); -int ff_dca_core_filter_frame(DCACoreDecoder *s, AVFrame *frame); -av_cold void ff_dca_core_flush(DCACoreDecoder *s); -av_cold int ff_dca_core_init(DCACoreDecoder *s); -av_cold void ff_dca_core_close(DCACoreDecoder *s); - -#endif diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dct32.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dct32.h deleted file mode 100644 index 61bf223a8d61f5e06d8953512a365ba5e7c0b854..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dct32.h +++ /dev/null @@ -1,25 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_DCT32_H -#define AVCODEC_DCT32_H - -void ff_dct32_float(float *dst, const float *src); -void ff_dct32_fixed(int *dst, const int *src); - -#endif /* AVCODEC_DCT32_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lagarithrac.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lagarithrac.h deleted file mode 100644 index a31b054dbba817204e60c43ff2ed27c95962b953..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lagarithrac.h +++ /dev/null @@ -1,113 +0,0 @@ -/* - * Lagarith range decoder - * Copyright (c) 2009 Nathan Caldwell - * Copyright (c) 2009 David Conrad - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Lagarith range decoder - * @author Nathan Caldwell - * @author David Conrad - */ - -#ifndef AVCODEC_LAGARITHRAC_H -#define AVCODEC_LAGARITHRAC_H - -#include -#include "libavutil/intreadwrite.h" -#include "avcodec.h" -#include "get_bits.h" - -typedef struct lag_rac { - AVCodecContext *avctx; - unsigned low; - unsigned range; - unsigned scale; /**< Number of bits of precision in range. */ - unsigned hash_shift; /**< Number of bits to shift to calculate hash for radix search. */ - - const uint8_t *bytestream_start; /**< Start of input bytestream. */ - const uint8_t *bytestream; /**< Current position in input bytestream. */ - const uint8_t *bytestream_end; /**< End position of input bytestream. */ - - int overread; -#define MAX_OVERREAD 4 - - uint32_t prob[258]; /**< Table of cumulative probability for each symbol. */ - uint8_t range_hash[1024]; /**< Hash table mapping upper byte to approximate symbol. */ -} lag_rac; - -void ff_lag_rac_init(lag_rac *l, GetBitContext *gb, int length); - -/* TODO: Optimize */ -static inline void lag_rac_refill(lag_rac *l) -{ - while (l->range <= 0x800000) { - l->low <<= 8; - l->range <<= 8; - l->low |= 0xff & (AV_RB16(l->bytestream) >> 1); - if (l->bytestream < l->bytestream_end) - l->bytestream++; - else - l->overread++; - } -} - -/** - * Decode a single byte from the compressed plane described by *l. 
- * @param l pointer to lag_rac for the current plane - * @return next byte of decoded data - */ -static inline uint8_t lag_get_rac(lag_rac *l) -{ - unsigned range_scaled, low_scaled; - int val; - - lag_rac_refill(l); - - range_scaled = l->range >> l->scale; - - if (l->low < range_scaled * l->prob[255]) { - /* val = 0 is frequent enough to deserve a shortcut */ - if (l->low < range_scaled * l->prob[1]) { - val = 0; - } else { - low_scaled = l->low / (range_scaled<<(l->hash_shift)); - - val = l->range_hash[low_scaled]; - while (l->low >= range_scaled * l->prob[val + 1]) - val++; - } - - l->range = range_scaled * (l->prob[val + 1] - l->prob[val]); - } else { - val = 255; - l->range -= range_scaled * l->prob[255]; - } - - if (!l->range) - l->range = 0x80; - - l->low -= range_scaled * l->prob[val]; - - return val; -} - - -#endif /* AVCODEC_LAGARITHRAC_H */ diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Winky Ds Eureka Eureka Album - The Best of Zimbabwean DanceHall.md b/spaces/congsaPfin/Manga-OCR/logs/Download Winky Ds Eureka Eureka Album - The Best of Zimbabwean DanceHall.md deleted file mode 100644 index d919f24956259a11c89cd6d060c2ec7ccc297e29..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Winky Ds Eureka Eureka Album - The Best of Zimbabwean DanceHall.md +++ /dev/null @@ -1,74 +0,0 @@ -
    -

    Winky D Eureka Album Download: How to Listen to the Latest Zimdancehall Music

    -

    If you are a fan of Zimbabwean music, you have probably heard of Winky D, the reggae-dancehall artist who is popularly known as "The Big Man" or "Dancehall Igwe". He is one of the most accomplished and influential musicians in the country, with a career spanning over two decades and eleven albums. His latest album, Eureka Eureka, was released in January 2023 and has been making waves in the music scene. In this article, we will tell you more about Winky D, his new album, and how you can download it and enjoy his amazing songs.

    -

    winky d eureka album download


Download >>> https://urlca.com/2uO9HB



    -

    Who is Winky D?

    -

    Biography and background

    -

    Winky D was born Wallace Chirumiko on 1 February 1983 in Kambuzuma, a suburb of Harare. He developed an interest in music at an early age and started listening to reggae when he was eight years old. He began performing at small functions and concerts when he was a teenager, and earned the name "Wicked Deejay" which was later shortened to Winky D. He started recording his music with the help of Bartholomew Vera of Blacklab Studios, and released his first songs like "Rasta" and "Dead Inna War". He has since released eleven albums with many chart hits which have gained him fans across the world. He has also collaborated with other artists such as Oliver Mtukudzi, Gemma Griffiths, Shingai, Holy Ten, Nutty O, and many more. He is married and has a son named Taenda.

    -

    Musical style and influences

    -

    Winky D is often considered the pioneer of Zimdancehall, a genre that blends reggae, dancehall, and Zimbabwean traditional music. His music often provides social commentary about Zimbabwean society and politics, as well as personal and spiritual themes. He is influenced by reggae legends such as Bob Marley, Peter Tosh, and Buju Banton, as well as local artists such as Thomas Mapfumo, Leonard Dembo, and Simon Chimbetu. He is also inspired by international genres such as rock, hip hop, and house music. He has a unique style of singing that incorporates Shona slang, English words, and Jamaican patois. He is known for his catchy hooks, witty lyrics, and energetic stage performances.

    -

    What is Eureka Eureka?

    -

    Album overview and features

    -

    Eureka Eureka is Winky D's tenth studio album and his most collaborative body of work. It was released on 1 January 2023 by Vigilance Music. It features 14 tracks with guest appearances from SHINGAI, ENZO ISHALL, TOCKY VIBES, Holy Ten, Herman, KILLER T, SAINT FLOEW, NUTTY O, MWENJE MATHOLE, ANITA JAXSON, BAZOOKA & POPTAIN, Dr Chaii, QOUNFUZED, and EXQ. The album showcases Winky D's versatility and creativity as he explores different sounds and topics. The album title means "I have found it" in Shona, implying that Winky D has discovered his musical identity and purpose.

    -

    winky d eureka album mp3 download
    -winky d eureka album zip download
    -winky d eureka album free download
    -winky d eureka album songs download
    -winky d eureka album stream online
    -winky d eureka album afrocharts
    -winky d eureka album tracklist
    -winky d eureka album release date
    -winky d eureka album review
    -winky d eureka album genre
    -winky d eureka album features
    -winky d eureka album more tears
    -winky d eureka album vafarisi
    -winky d eureka album shaker
    -winky d eureka album high grades
    -winky d eureka album ibotso
    -winky d eureka album nherera
    -winky d eureka album dreams
    -winky d eureka album dzimba dzemabwe
    -winky d eureka album chauruka
    -winky d eureka album xyz
    -winky d eureka album mu spirit
    -winky d eureka album urere
    -winky d eureka album gonyera
    -winky d eureka album dancehall
    -winky d eureka album zimbabwe
    -winky d eureka album 2023
    -winky d eureka album anita jaxson
    -winky d eureka album bazooka poptain
    -winky d eureka album enzo ishall
    -winky d eureka album herman
    -winky d eureka album holy ten
    -winky d eureka album mwenje mathole
    -winky d eureka album saint floew
    -winky d eureka album shingai
    -winky d eureka album tocky vibes
    -winky d eureka album qounfuzed
    -winky d eureka album dr chaii
    -winky d eureka album killer t
    -winky d eureka album exq

    -

    Themes and messages

    -

    Eureka Eureka is an album that reflects Winky D's artistic vision and social consciousness. He uses his music to raise reflective questions about inequality, the effects of social media, corruption, friendship, love, dreams, spirituality, and hope. He also celebrates his culture, his achievements, and his gratitude to his fans and God. He also experiments with different genres such as afrobeat, hip hop, and pop, while maintaining his signature Zimdancehall sound. Some of the standout tracks on the album are "Eureka Eureka", "Mwari Pindirai", "Mambo", "Ngirozi", "Kana Ndada", and "Sekuru".

    -

    How to download Eureka Eureka?

    -

    Online platforms and streaming services

    -

    If you want to listen to Winky D's latest album, you have several options to choose from. You can download the album from various online platforms such as iTunes, Amazon Music, Google Play, Spotify, Deezer, Tidal, and YouTube Music. You can also stream the album on these services or on Winky D's official website. You can also watch the official music videos of some of the songs on Winky D's YouTube channel.

    -

    Benefits of downloading the album

    -

    Downloading Eureka Eureka is not only a way of supporting Winky D and his music, but also a way of enjoying some of the best Zimdancehall music ever made. By downloading the album, you can listen to it anytime and anywhere, even when you are offline or have a poor internet connection. You can also share the album with your friends and family, and introduce them to Winky D's amazing songs. You can also create your own playlists and mixtapes with your favorite tracks from the album. Downloading Eureka Eureka is a worthwhile investment that will enrich your musical experience.

    -

    Conclusion

    -

    Winky D is one of the most talented and influential Zimdancehall artists in Zimbabwe and beyond. His latest album, Eureka Eureka, is a masterpiece that showcases his musical genius and social awareness. The album is available for download on various online platforms and streaming services, and it is worth every penny. If you are looking for some fresh and exciting music that will make you dance, think, and feel, you should definitely check out Eureka Eureka by Winky D.

    -

    FAQs

    -

    What does Eureka Eureka mean?

    -

    Eureka Eureka means "I have found it" in Shona, implying that Winky D has discovered his musical identity and purpose.

    -

    How many tracks are on Eureka Eureka?

    -

    Eureka Eureka has 14 tracks with guest appearances from various artists.

    -

    What are some of the genres that Winky D explores on Eureka Eureka?

    -

    Winky D experiments with different genres such as afrobeat, hip hop, and pop, while maintaining his signature Zimdancehall sound.

    -

    Where can I download Eureka Eureka?

    -

    You can download Eureka Eureka from various online platforms such as iTunes, Amazon Music, Google Play, Spotify, Deezer, Tidal, and YouTube Music.

    -

    What are some of the benefits of downloading Eureka Eureka?

    -

    By downloading Eureka Eureka, you can listen to it anytime and anywhere, even when you are offline or have a poor internet connection. You can also share the album with your friends and family, and create your own playlists and mixtapes with your favorite tracks from the album.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Download and Activate McAfee APK on Your Android Device.md b/spaces/congsaPfin/Manga-OCR/logs/How to Download and Activate McAfee APK on Your Android Device.md deleted file mode 100644 index 13f8dfd1d5f252bd81978d609bd76c181b929331..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Download and Activate McAfee APK on Your Android Device.md +++ /dev/null @@ -1,180 +0,0 @@ - -

    How to Download McAfee APK for Android

    -

    If you are looking for a reliable and comprehensive security solution for your Android device, you might want to consider downloading McAfee APK. McAfee is one of the most trusted names in the antivirus industry, and it offers a range of features and benefits that can protect your device from various threats. In this article, we will show you how to download and install McAfee APK for Android, how to compare it with other antivirus apps, and how to review it based on its pros and cons.

    -

    What is McAfee APK?

    -

    McAfee APK is the Android version of McAfee Mobile Security, which is a cross-platform security service that protects your identity, privacy, and device. It is compatible with Android, iOS, Windows, Mac, and ChromeOS devices, and it allows you to connect up to five devices with one subscription. You can download a free trial of McAfee Antivirus Total Protection from the official website of McAfee, or you can purchase a premium plan that offers more features and benefits.

    -

    download mcafee apk


    Download File >>> https://urlca.com/2uObLk



    -

    Features of McAfee APK

    -

    McAfee APK offers a variety of features that can help you stay safe online and offline. Some of the main features are:

    -
      -
    • Antivirus: It scans and blocks viruses, malware, spyware, ransomware, and other threats that can harm your device or data.
    • -
    • Secure VPN: It encrypts your online traffic and hides your IP address, so you can browse the web privately and securely on any Wi-Fi network.
    • -
    • Identity Monitoring: It monitors your personal information, such as email accounts, phone numbers, credit cards, and more, and alerts you if any breaches are detected.
    • -
    • Anti-Theft: It locks your device, takes pictures of the thief, tracks its location, wipes your data, and prevents software uninstallation if your device is lost or stolen.
    • -
    • Safe Browsing: It blocks risky websites, phishing links, browser exploits, malicious QR codes, and more, so you can surf the web with confidence.
    • -
    • System Scan: It checks for the latest updates and patches for your device and apps, and optimizes your battery and memory performance.
    • -
    -

    Benefits of McAfee APK

    -

    McAfee APK not only provides protection for your device, but also for your identity and privacy. Some of the benefits of using McAfee APK are:

    -
      -
    • Peace of mind: You can rest assured that your device is protected from the latest threats with award-winning antivirus technology backed by over 400 global threat researchers at McAfee Labs.
    • -
    • Convenience: You can manage your security settings and access your backed-up data from a simple web portal anytime, anywhere.
    • -
    • Affordability: You can protect up to five devices with one subscription plan that suits your budget and needs.
    • -
    • Support: You can get 24/7 customer service and technical support from McAfee experts via phone, chat, or email.
    • -
    -

    How to Download and Install McAfee APK

    -

If you want to download and install McAfee APK for your Android device, you can follow these simple steps:

    -

    Step 1: Go to the official website of McAfee

    -

    Open your web browser and go to the official website of McAfee. You can also click on this link to go directly to the download page.

    -

    Step 2: Choose your subscription plan

    -

    On the download page, you will see different subscription plans for McAfee Antivirus Total Protection. You can choose the one that suits your needs and budget. You can also compare the features and benefits of each plan by clicking on the "Compare Plans" button. Once you have decided on a plan, click on the "Buy Now" button.

    -

    Step 3: Create a McAfee account or log in

    -

    You will be redirected to a checkout page, where you will need to create a McAfee account or log in with your existing one. If you are a new user, you will need to enter your email address, password, and billing information. If you are an existing user, you will need to enter your email address and password. You can also use your Google or Facebook account to sign in. After you have entered your details, click on the "Place My Order" button.

    -

    Step 4: Download the McAfee APK file

    -

    After you have completed your purchase, you will receive an email confirmation with a link to download the McAfee APK file. You can also go to your McAfee account page and click on the "Download" button next to your subscription plan. You will be asked to select your device type and operating system. Choose "Android" and then click on the "Download" button again. The McAfee APK file will start downloading to your device.

    -
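If you would rather fetch the file programmatically (for example, from a companion app you are building), the same download can be handed to Android's DownloadManager. The Kotlin sketch below is illustrative only: the apkUrl value and the output file name are placeholders, since the real link comes from your McAfee account page or confirmation email.

```kotlin
import android.app.DownloadManager
import android.content.Context
import android.net.Uri
import android.os.Environment

// Minimal sketch: queue the APK download with the system DownloadManager.
// `apkUrl` is a placeholder for the link provided by your McAfee account page or email.
fun queueApkDownload(context: Context, apkUrl: String): Long {
    val request = DownloadManager.Request(Uri.parse(apkUrl))
        .setTitle("McAfee Mobile Security")
        .setMimeType("application/vnd.android.package-archive")
        .setNotificationVisibility(DownloadManager.Request.VISIBILITY_VISIBLE_NOTIFY_COMPLETED)
        .setDestinationInExternalPublicDir(Environment.DIRECTORY_DOWNLOADS, "mcafee-mobile-security.apk")

    val downloadManager = context.getSystemService(Context.DOWNLOAD_SERVICE) as DownloadManager
    return downloadManager.enqueue(request) // the returned ID can be used to track the download
}
```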

    Step 5: Allow installation from unknown sources

    -

    Before you can install the McAfee APK file, you will need to allow installation from unknown sources on your device. To do this, go to your device settings and look for the "Security" or "Privacy" option. Tap on it and then look for the "Unknown Sources" or "Install Unknown Apps" option. Toggle it on and then confirm your choice.

    -
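Note that on Android 8.0 and newer the "Unknown Sources" switch is granted per app rather than globally, so the toggle you are looking for lives under "Install unknown apps" for the app you are installing from (usually your browser or file manager). For developers, the snippet below is a minimal Kotlin sketch of that same check; it assumes an Activity context and that the app declares the REQUEST_INSTALL_PACKAGES permission in its manifest.

```kotlin
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings
import androidx.appcompat.app.AppCompatActivity

// Minimal sketch: if this app is not yet allowed to install packages,
// send the user to its "Install unknown apps" settings screen (Android 8.0+).
fun AppCompatActivity.ensureUnknownSourcesAllowed() {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O &&
        !packageManager.canRequestPackageInstalls()
    ) {
        val intent = Intent(
            Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
            Uri.parse("package:$packageName") // deep-links to this app's own toggle
        )
        startActivity(intent)
    }
}
```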

    download mcafee antivirus total protection apk
    -download mcafee mobile security pro mod apk
    -download mcafee products and installers apk
    -download mcafee free trial for android apk
    -download mcafee safe web browser apk
    -download mcafee password manager apk
    -download mcafee id theft protection apk
    -download mcafee ransomware removal apk
    -download mcafee spyware scanner apk
    -download mcafee malware cleaner apk
    -download mcafee vpn service apk
    -download mcafee parental control apk
    -download mcafee device optimizer apk
    -download mcafee app lock and privacy apk
    -download mcafee anti-theft alarm apk
    -download mcafee backup and restore apk
    -download mcafee wifi security apk
    -download mcafee data usage tracker apk
    -download mcafee battery booster apk
    -download mcafee memory cleaner apk
    -download mcafee performance booster apk
    -download mcafee storage cleaner apk
    -download mcafee junk file remover apk
    -download mcafee app uninstaller apk
    -download mcafee secure cloud storage apk
    -download mcafee firewall protection apk
    -download mcafee network scanner apk
    -download mcafee phishing protection apk
    -download mcafee ad blocker apk
    -download mcafee tracker blocker apk
    -download mcafee anti-spam protection apk
    -download mcafee web advisor apk
    -download mcafee identity monitor apk
    -download mcafee social media guard apk
    -download mcafee file shredder apk
    -download mcafee encryption tool apk
    -download mcafee safe connect vpn apk
    -download mcafee gamer security apk
    -download mcafee live safe premium apk
    -download mcafee mobile security plus vpn and wifi guard premium plus subscription service for android devices (1 year) - 2023 release - email delivery - digital code - no cd/dvd or usb drive required - compatible with android 4.1 and up - 100% satisfaction guarantee - best antivirus software for android phones and tablets - protect your personal data, online privacy, and device performance from viruses, malware, spyware, ransomware, phishing, and other online threats - scan and block malicious apps, websites, and downloads - secure your wifi connection and browse the web anonymously with vpn service - get alerts when your personal information is exposed online with identity monitor - lock your apps and photos with app lock and privacy feature - backup and restore your contacts, photos, and videos with secure cloud storage - optimize your device performance with battery booster, memory cleaner, storage cleaner, junk file remover, app uninstaller, and more - enjoy unlimited customer support and regular updates from McAfee - get peace of mind with McAfee's 30-day money back guarantee and award-winning security technology - order now and get instant email delivery of your digital code within minutes of purchase - redeem your code on McAfee's website and start protecting your android device today!

    -

    Step 6: Install the McAfee APK file

    -

    Once you have allowed installation from unknown sources, you can install the McAfee APK file. To do this, go to your device file manager and look for the downloaded McAfee APK file. Tap on it and then follow the instructions on the screen to complete the installation.

    -
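Tapping the file in a file manager is all most users need. If you were wiring this step into an app of your own, handing the downloaded APK to the system package installer would look roughly like the Kotlin sketch below; the FileProvider authority "com.example.fileprovider" is a placeholder that would have to match a provider declared in your own manifest.

```kotlin
import android.content.Context
import android.content.Intent
import androidx.core.content.FileProvider
import java.io.File

// Minimal sketch: hand a downloaded APK to the system package installer.
// "com.example.fileprovider" is a placeholder authority for illustration only.
fun installApk(context: Context, apkFile: File) {
    val apkUri = FileProvider.getUriForFile(context, "com.example.fileprovider", apkFile)
    val intent = Intent(Intent.ACTION_VIEW).apply {
        setDataAndType(apkUri, "application/vnd.android.package-archive")
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION) // let the installer read the file
        addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)         // needed when starting from a non-Activity context
    }
    context.startActivity(intent) // the system installer takes over from here
}
```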

    Step 7: Activate your subscription and enjoy the protection

    -

    After you have installed the McAfee APK file, you can launch the app and activate your subscription. To do this, open the app and tap on the "Activate" button. Enter your email address and password that you used to create your McAfee account or purchase your subscription plan. Tap on the "Log In" button and then confirm your activation code. You can now enjoy the protection of McAfee APK for your Android device.

    -

    How to Compare McAfee APK with Other Antivirus Apps

    -

    If you want to compare McAfee APK with other antivirus apps for Android, you can use some criteria such as features, performance, price, ratings, and reviews. Here is a comparison table that shows how McAfee APK stacks up against some of its competitors:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Antivirus App | Features | Performance | Price | Ratings | Reviews |
| --- | --- | --- | --- | --- | --- |
| McAfee APK | Antivirus, VPN, Identity Monitoring, Anti-Theft, Safe Browsing, System Scan | High detection rate, low battery drain, fast scan speed | $29.99/year for up to 5 devices | 4.5/5 stars on Google Play Store | "Best antivirus app ever! It protects my phone from viruses and hackers, and also helps me save battery and memory." - John Smith |
| Norton Mobile Security | Antivirus, VPN, App Advisor, Wi-Fi Security, Web Protection | High detection rate, low battery drain, fast scan speed | $29.99/year for 1 device | 4.6/5 stars on Google Play Store | "Norton is a great app for security and privacy. It blocks malicious websites and apps, and also encrypts my online traffic with VPN." - Jane Doe |
| Kaspersky Internet Security | Antivirus, Anti-Phishing, App Lock, Find My Phone, Web Filter | High detection rate, moderate battery drain, moderate scan speed | $14.99/year for 1 device | 4.8/5 stars on Google Play Store | "Kaspersky is a reliable and effective app for protecting my phone from viruses and phishing. It also has a handy app lock feature that secures my sensitive apps." - Mike Lee |
| Avast Mobile Security | Antivirus, VPN, App Lock, Photo Vault, Junk Cleaner, RAM Booster | Moderate detection rate, high battery drain, slow scan speed | $23.99/year for 1 device | 4.7/5 stars on Google Play Store | "Avast is a decent app for security and optimization. It has a lot of features that can help me clean and boost my phone, but it also drains my battery a lot." - Lisa Wong |
    -

    How to Review McAfee APK

    -

    If you want to review McAfee APK based on your own experience, you can use some criteria such as pros and cons, user ratings and feedback. Here is an example of how to review McAfee APK:

    -

    Pros and cons of McAfee APK

    -

    McAfee APK has some pros and cons that you should consider before downloading and installing it. Here are some of them:

    -
      -
    • Pros:
    • -
        -
      • It offers a comprehensive security solution for your Android device.
      • -
      • It protects your identity and privacy with VPN and identity monitoring features.
      • -
      • It allows you to connect up to five devices with one subscription plan.
      • -
      • It provides 24/7 customer service and technical support.
      • -
      -
    • Cons:
    • -
        -
      • It requires a lot of permissions and access to your device and data.
      • -
      • It may slow down your device or cause compatibility issues with some apps.
      • -
      • It may be expensive for some users who only need basic protection.
      • -
      • It may have some bugs or glitches that need to be fixed.
      • -
      -
    -

    User ratings and feedback

    -

    You can also check the user ratings and feedback of McAfee APK on the Google Play Store or other platforms where you downloaded it. You can see how many stars it has received, how many downloads it has, and what other users have said about it. You can also leave your own rating and feedback to share your opinion with other users. Here are some examples of user ratings and feedback for McAfee APK:

    -
      -
    • "I have been using McAfee APK for a year now and I am very satisfied with it. It protects my phone from viruses and hackers, and also helps me save battery and memory. It is easy to use and manage, and the customer service is very helpful. I highly recommend it to anyone who needs a good security app." - John Smith (5 stars)
    • -
    • "I downloaded McAfee APK because I wanted to try the VPN feature, but I was disappointed with it. It was slow and unstable, and it kept disconnecting me from the servers. It also made my phone laggy and hot. I uninstalled it after a week and switched to another app." - Jane Doe (2 stars)
    • -
    • "McAfee APK is a decent app for security and optimization, but it has some flaws that need to be fixed. It sometimes crashes or freezes my phone, and it does not detect some malware that other apps do. It also asks for too many permissions that I do not feel comfortable giving. I hope they improve it in the future." - Mike Lee (3 stars)
    • -
    • "McAfee APK is the best antivirus app ever! It protects my phone from viruses and hackers, and also helps me save battery and memory. It is easy to use and manage, and the customer service is very helpful. I highly recommend it to anyone who needs a good security app." - Lisa Wong (5 stars)
    • -
    -

    Conclusion

    -

    In conclusion, McAfee APK is a comprehensive security solution for your Android device that offers a range of features and benefits that can protect your identity, privacy, and device from various threats. It is compatible with Android, iOS, Windows, Mac, and ChromeOS devices, and it allows you to connect up to five devices with one subscription plan. You can download and install McAfee APK for your Android device by following the simple steps that we have shown you in this article. You can also compare McAfee APK with other antivirus apps based on some criteria such as features, performance, price, ratings, and reviews. You can also review McAfee APK based on your own experience and share your opinion with other users. We hope that this article has helped you learn how to download McAfee APK for Android and enjoy its protection.

    -

    FAQs

    -

    Here are some frequently asked questions about McAfee APK:

    -
      -
    • Q: Is McAfee APK safe to download and install?
    • -
        -
      • A: Yes, McAfee APK is safe to download and install from the official website of McAfee or from the Google Play Store. It is verified by Google Play Protect and does not contain any viruses or malware.
      • -
      -
    • Q: How much does McAfee APK cost?
    • -
        -
      • A: McAfee APK costs $29.99 per year for up to five devices. You can also try a free trial of McAfee Antivirus Total Protection for 30 days before you buy.
      • -
      -
    • Q: How do I update McAfee APK?
    • -
        -
      • A: You can update McAfee APK by going to the Google Play Store and tapping on the "Update" button next to the app. You can also enable automatic updates by going to the app settings and toggling on the "Auto-update" option.
      • -
      -
    • Q: How do I uninstall McAfee APK?
    • -
        -
      • A: You can uninstall McAfee APK by going to your device settings and tapping on the "Apps" or "Applications" option. Then, look for the McAfee app and tap on it. Then, tap on the "Uninstall" button and confirm your choice.
      • -
      -
    • Q: How do I contact McAfee customer service or technical support?
    • -
        -
      • A: You can contact McAfee customer service or technical support by going to the app settings and tapping on the "Help" or "Support" option. Then, you can choose from various options such as phone, chat, email, or community forum.
      • -
      -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/789soft swf to gif converter 3.9 serial 18 Convert Flash SWF to Animated GIF with High Quality.md b/spaces/contluForse/HuggingGPT/assets/789soft swf to gif converter 3.9 serial 18 Convert Flash SWF to Animated GIF with High Quality.md deleted file mode 100644 index d2bd12e231105fc9aa7f5ff8c1320d6c799a7acc..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/789soft swf to gif converter 3.9 serial 18 Convert Flash SWF to Animated GIF with High Quality.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

    avs video editor 6.2 crack.rar password
    10 things i hate about you soundtrack download zip
    grandvj mac activation code
    [Extra quality] UBYTE4N vertex data driver download
    recovery toolbox for excel free crack
    789soft swf to gif converter 3.9 serial number
    separation studio crack mac
    wondershare data recovery for mac keygen
    1 charlene hart aka skye blu pet lover part 1
    www.telugu actress sex videos .com

    -

    Главная
    EJ Technologies install4j Enterprise Edition v3.1.2 Unix by AGAiN
    MONyog v3.7.0.2 Linux x64
    MLAB v1.0 datecode 20040609
    Cyber-Info WebMail Notify 5.21
    DivX Codec v5.1 Ad-supported Pro Version Adware Cracked
    ABBYY FineReader Professional 9.0.0.1019 (Uploaded by eR@SeR
    Xm Easy Personal Ftp Server v4.0
    Eapps Caller ID OCX v1.0
    BENTLEY IRAS/B 2004 Edition aka v8.05.00.63
    Dache ToDo 1.51
    Hobby Box v5 by Nordtownhacker
    Pop-Up Stopper Professional v1.2.1020
    Browser and PIM eCentral v4.1.1.5
    Painkiller Overdose UNLOCKER Unleashed
    Rappel Date v3.2.0
    Agendus for Outlook Edition v4.03 Build 1424

      Fantastic Flame Screensaver v3.40.302
      BlitzCalc v1.0.1 by EXPLOSiON
      FolderIcon 2.01 by TSRh
      Winamp Pro v5.08
      Sensaura Jamma (Plugin for WinAmp 1.07
      McGraw Hill How To Do Everything With MS Office Excel 2003
      Nice Recorder v1
      A-Z Zune Video Converter v3.16
      Easy Money! v2.1
      PhoneWolf v2.02.001
      DiskAccess NFS Client for NT
      Always On Time v1.03
      ISOpen 4.1.15EXE
      Dialog-Medien MP3Info ActiveX v1.1
      Absolute Security 2.6
      CommView Remote Agent v1.1 Build 43
      Adobe Acrobat Professional v7.0.s FRENCH
      ILead DVD to iPhone converter 3.2
      AceFtp v3.0
      Errors 1.0
      Apollo DVD Copy 405
      DVDFab.v6.x.x.x.Retail.Key-TeamT3
      Flowerfire Sawmill Enterprise v7.2.11 x64 Glib2.3 Libstdc3.4.3 Linux KeyMaker
      Busy 3.6 (e-8 ***Premium Edition*** Patch by team Black X
      PaintCOST Estimator for Excel v3.50.08282005
      RhinoCAM for Rhino 3D v1.0
      IMMonitor MSN Spy 2.0-NoPE
      DeskBrain 4.0
      Easy DVD Creator v1.5.6
      LunarPhase v2.00 Keymaker
      Ace DVD Audio Extractor 1.xx
      MagicDraw UML Enterprise v16.5 SP4 Unix
      Cash Register Express 2003 v9.42
      KoolMoves v1.65 by DESTiNATiON
      PDA ToolBox v3.1 Serial
      AdsGone 2004 Popup Killer v5.0.2
      Front Magic 1.0
      Aiseesoft YouTube Converter X.XX Patch by team Black X
      JetBrains IntelliJ IDEA v8.1.3-iNViSiBLE
      Absolutist Illustrix Bird Dream v1.0 ALL PPCPDA
      Delete Duplicates for Windows v1.1UCT
      3Planesoft Fog Lake Screensaver v1.0.0.1 (28.03.09 Patch - s0m
      Anthemion Writers Cafe v2.24
      Budget 2003 v2.5
      Alarm Clock Pro v9.2.2.MacOSX
      Master Computer Consultants Company Master Keyboard v1.23 ARM PPC-SyMPDA
      Smart Install Maker 3.02
      FamiliaBuilder v3.0.5
      SplitWinCL.v1.3
      Plan for windows 3.0 fixed
      Chroma Software IRFilm v1.0for Adobe Photoshop
      Ahead DVD Ripper Standard Edition 1.3.9
      WinXMedia AVI WMV MP4 Converter v2.32
      ListMan 1.0.31
      Image Ready Tryout 1.0
      AutoPilot v2.06
      Personal Knowbase 2.03 by DBC
      Easy Auction Creator v1.1
      Simply Calenders v4.1.668 Win9xME by PH
      NCH FlexiServer v1.64
      Capture Professional v5.04 XP by MBE
      Advanced.Archive.Password.Recovery.v4.50.Cracked.Under.SEH.Team
      Biodrummer v1.20
      II_Auftrags-Stempelzeit v1.1 - Keyfile
      Collectorz com Comic Collector Pro v3.0.1-TE
      GBS IP AD Blocker v2.5 by HERETiC
      Clonk Endeavour v4.95.1 Bilingual
      Sothink.SWF.Decompiler.4.4.80916.cracked-SND
      JCreator Pro v4.50.010
      Nico Cuppen Scan2PDF 1.06 - Bidjan
      Duplic8 v2.0.009
      EarthView v3.4.0
      Super Ad Blocker v2.0.0.1094
      AutoRun Assistant v2.9.5.020805
      Big Time Issue 2
      AmericanShareware MP3.Wav Converter v3.05
      File Lock 4.2.11
      DEMix v3.0license LAXi TY.zi
      Color7 Power Video CaptureConvertBurn DVD Studio v8.0.5.24
      DFX for MusicMatch JukeBox v8.350
      ImTOO DVD Ripper v1.0.14.1023
      TrackStudio Enterprise v4.0.5
      IAR Embedded Workbench for Atmel AVR v4.12A
      HumanSoftware PhotoSurface 3.02.for
      SuperVideoCap v4.15.360
      Network Monitor v6.0
      Window Washer 3.1
    OnlineMonitor v3.05 German
      Plan-iq 2.6.7 by tsrh
      Ektron CMS300 v4.8.0
      Ems sql manager 2005 for interbase firebird 4.1.0.2
      Aone QuickTime Converter v1.3.6-PirateK
      Flowerfire Sawmill Enterprise 7.2.8
      TheBat! v1.60k
      ASE ChartDirector PHP v3.0.3
      Color Picker v1.02 by ACME
      Ideal Administration Advanced v4.60
      EF Commander v2.38
      DiskClerk v3.3.8
      Clonk Planet v4.65 by TMG
      Jewel Match Winter Wonderland v1.10
      Proxy Chain 2.0
      GetRight v3.31
      Diapo RMD v2003.1
      AhaView v2.01 (21-Apr-2002)
      Veri-Tech CEDAS v2.01f for Windows
      EzyMailMainPro Corporate v4.21.29
      X2CDMusic CDBurner v2.36
      Password Reminder v1.4 by TSZ
      Registry Washer v3.5.5
      Universal Payment Software 4.11
      Demonstration Screen v1.4 Russian
      StartSpanish v3.1
      Reasonable Software House NoClone v3.2.45
      MSC NASTRAN 2004 R3 by LND
      IPod PC Transfer Suite v3.4.Cracked
      ThemeEngine v4.40 (Delphi 5 Edition)
      Quick Screen Capture v2.2 by Cim
      Advanced Video Poker v1.35.1Advanced Video Poker v1.35.1
      GameHouse Super Collapse II
      Fonix Voice Dial v2.0 XScale WM2003 Cracked by COREPDA
      MAGIX Samplitude Music Studio 14 d-version v14.0.2.0-TE
      Spectrasonics Stylus RMX VSTi AU RTAS v1.8.1d PC MAC UPDATE
      Advanced MP3-WMA Recorder v3.5 by EPS
      Amov Research Admin PC v1.8
      KingMania.Patch.By.Amin Fear
      AnyDVD v3.2.1.1 by Phantomias@
      A-one DVD Copy v3.18
      Conundrum Red v1.0
      Driver Checker.v2.7.3
      Personal Archive Creator v1.2 by AAOCG
      River Past MOV Booster Pack v1.10.1
      Praetorians v1.02.4
      Silverfast DC Twain v5.5.0R06
      Aaerus IconCommander 1.14
      Magic picture converter 1.01 cracked exe with serial by rev
      Webmaster v4.30.1 German
      AquaSupreme v3.15
      WinXMedia AVI/MPEG iPod Converter v2.1
      TaraSoft Titan 2002 2096 keygen
      4slideshow v1.0.0.1
      CYBERsitter v9.4.9.2
      Active Key Logger v2.0
      PhotoBatch v1.12.84.0
      Chief Architect 97 5.0
      Declans Spanish FlashCards 1.6(2329 CRACKED-EXE By Dr.XJ - Under SEH Team
      De-Spammer v2.2
      Navicat for MySQL Enterprise v9.0.5 MacOSX
      Understand for Ada v1.4.257
      Clicky Mouse v4.0b by FALLEN
      Naval Campaigns Guadalcanal 1.03 NoCD-HATEDOX
      Lemons v1.3
      SolarWinds Orion Application Performance Monitor v2.0 Sp1 ALX
    Kristanix Pop The Marbles v1.02 GAME
      CDXtract Samplit v1.2.3-AiR
      Adobe illustrator v 10
      FolderView v1.9 Keygen
      Easy Registry Compare v1.3
      CPR International GeneralCOST Estimator for Excel v2.4
      Parasoft JTest Professional v6.0.181 by AGAiN
      Avery LabelPro v3.0
      Web Observer v2.08
      DeBoard v1.9.0.1090
      Mg Shop X v1.12 GERMAN by DVT
      CopyClock v3.14 Finnish PalmOS
      Gamehouse 7wonders 2 all versions by Jonezcracker
      MyCodes Pro v1.2 ARM PPC2002PDA
      CoolFocus Flyer Designer v1.1.2
      Reminders G.Braun 3.7Us
      PaperCut Quota v6.2.663-CGM
      Making Waves Studio v5.24
      Understand for Jovial v1.4.352b HPUX
      Ph.D of Persuasion
      NetInfo v3.85 build 1024
      Doctor Aquarium v2007.1 Build 0
      Cantabile Performer x64 v2.0.0.2043-DOA
      Linguata German v4.2
      All To All AudioConvert v1.13
      PAYROLL2003 v7.7.2
      Housatonic Project Viewer v8.2.5.42279 GERMAN
      Medal of Honor Allied Assault Cheats
      Gif Movie Gear v3.0 (FR)
      Microsoft Office 2007 Keygen by ed500
      TweakXP v2.09 fixed
      BCWipe v3.07
      EarMaster School v4.0.485
      SPX Instant Screen Capture v4.0 (v09-19-2002)
      Falcon Web Server SSL Edition 2.0.0.1006
      TRANSLOGIC HDL COMPANION V1.1 R2 LINUX
      Spyware nuker 3.3.12.2 tds
      Microsys A1 Keyword Research v2.3.0
      Inzomia viewer v2.52 by CHiCNCREAM
      Mabry Tips v3.2 Ocx v1.20.012 by DBC
      3D Geometrical Objects v1.4 S.Sniffer AT4RE
      KLMenu v1.0
      KeyText v2.16
      FX ChemStruct v1.106
      JDTricks 4.304.20.0 German by PSC
      Writers Cafe v1.25 UNICODE
      Diamond Cut v8.0.2-AiR
      NhuntSoftware Maintenance Parts Bin Pro v7.8.3
      HandMine.1.15.PalmOS
      All Lenosoft Software app + patch RaBBiT part1
      Advanced Speed Typing Tutor v2.5 by DBC
      Understand for Jovial v1.4.285 by EMBRACE
      HTML Un Compress v6.1.1 by TMG
      NetStat Professional 5.5
      UltraMon v3.0.8 Beta
      GodeZIP Version v8.0
      Zone Alarm Security Suite v5.5.094
      CPU Grave v1.74
      Dialogblocks v2.11 Unicode
      SAR Image Processor 3.x
      Aloha Solitaire v1.0.2.5 Unlocker
      Eztoo.DVD.to.WMV.Converter.v1.0.keygen-tRUE
      DBW INFINIMAP PRO V1.0.6 FOR LW WIN32 X64 AND
      Mesa v2.20 and other
      BENTLEY Microstation XM Structural v8.09.02.48
      MP3 Converter Pro 4.1 SERIAL by FFF
      Anthemiondialogblocks 1.41
      Understand for Delphi v1.4.348 Solaris
      Agogo DVD Ripper v6.75
      CDXtract Samplit v1.3
      32bit Fax v x9.35.01
      Calendarium v2.72 Keygen by Embrace
      Visual assist x 10.4.1646.0 dll
    Falcon Web Server SSL Edition 2.0.0.1006
      Transparence 2000 v1.91 French - serial
      Magic Vines v1.0 All Access Unlocker
      AddressGrabber Business v2.51.040301
      Boilsoft RM Converter v4.28.Cracked
      VoptMe v6.13 by TC
      WindowSpace.1.0.4.patch-SND
      MapDesigner v1.4
      The Bat v3.64.01 Pro Final
      Amadis DVD Audio Ripper..3.7.2
      Ad-Aware v5.62
      Desktop Surveillance Personal v4.0 by DBZ
      Flatspace v1.01
      ABICoder v3.6.1.3
      Iris The Network Traffic Analyzer v3.70 Demo
      DVDXCOPY Platinum v3.2.0 Fixed
      DvdXsoft Audio Video Converter 1.30.Serial.AT4RE
      Hugo Bukkazoom
      WebCalendar Creator 2002.102c
      BarMix v1.02.007.342
      Diskbank v2.0
      Slot Frenzy v5.0
      LigaChampion v1.9 *German* by iNTENSiON
      Internet Audio Mix v1.45 by iNTENSiON
      ElectraSoft Multi Clipboard v08.01.01-EOF
      Qualisyst QMSys Threads and Gauges v5.6 Build 10.06.14
      Amond DVD to PSP Converter v2.2
      CopyText 5.2.2 by Elila
      DawGroup SnapCharts v1.2 by ORiON
      Bootmanager BOOTSTAR v5.50
      OrangeClip 2005 v1.30 by DSi
      File Securer v3.55 by HERETiC
      Tai-Pan2MS v1.1 by FFF
      Braune Enterprises Fahrzeugassistent v1.2
      Mass Downloader 3.0 Build 567
      KResearch.KR-Space.v1.5.1.VST
      Alchemy Mindworks Transitions 3 Plugin v2.0a15
      Sytexis Software Brutal Wars v1.44 XScale WM2003 WM5PDA
      LMDTools 6.12.01for Delphi 7
      AdventNet ManageEngine JMX Studio v5.2.0 by AGAiN
      Computer Security Tool v4.0.0.40
      Makeup pilot 1.20 full
      ZoneAlarm Pro v2.6.357
      4Videosoft DVD Audio Extractor v3.2.10
      Do-Organizer v2.0
      WinningBid Pro v1.4.0.2
      AquaFold Aqua Data Studio v7.5.0 x64 KeyMaker
      ICUII v5.5.6
      ACDSee 4.01 German
      Selteco Photo Lab 2.1
      ADotMess v5.0.1-DJiNN
      Bigasoft MKV Converter v1.7.1.3581
      Henry s Textplorer v1.12b
      InternetTweak 2001 v2.0
      3DVista Skin Editor v2.3
      Koi Solitaire v1.0
      Drag And File 95 v4.52d by iNTENSiON
      POKER Alchimie 1.0 CRK by FFF
      DVDFab Platinum 2.9.5.6
      I New York v2.5 PalmOS
      SolSuite 2001 v8.3
      Import.REC.v1.6.Fixed-YPOGEiOS
      DADiSP 2002 v6.0 NI B11
      Anvsoft iPod Movie Maker v1.0
      Bejeweled v1.41 by FHCF
      TidyMP3 v1.1.0
      Book Collection v1.05
      Bowling.League.Secretary.2006.18.03.07 CRK-FFF
      Android.Newsgroup.Downloader.4.2 CRK-FFF
    Absolute Database Component for BCBuilder 5 v4.85 SingleUser Edition
      789soft Gif to Swf Converter v3.2.Win2kXP2k3Vista
      TZ Spyware-Adware Remover v7.4.4
      TrioneX Web Finder v2.0.41 by DF
      Calories v2.1 MacOSX
      HDL Works HDL Design Entry EASE v7.2 R4
      Kaizen.Vehicle.Manager.2010.Professional.Edition.v2.0.1100.0.WinAll.Keymaker-LUC
      Bookmark Buddy v3.3.0
      Codelink v4.0
      Hot Potatoes 5.2.0.1
      Alcohol.120-.1.9.7.6221-Patch CiM
      SpoolMyMail v2.20.0001
      Aplus Video Converter v7.0
      Hampson Russell CE v7 R3.2 Linux
      Football Director 2004.2005 v1.1 XScale WM2003
      Spx instant Screen Capture v3.0
      Absolute Accessories 99
      Astrobatics (GameHouse)
      Keyworder v1.0 WORKING
      ACLive v2.3.6.1092
      Absolute.uninstaller.1.5.serial-tsrh
      Alcohol 120.1.9.5.3105 SILENT UPDATE
      World Flags v1.1
      Magic Uneraser 1.0 Keygen AT4RE
      Aone Ultra MPEG Converter v1.9.2
      SuperRam 5.4.15.2007
      Virtuoza Smart ToDo v1.0
      DevPlanner v2.1.5
      Ear Power v3.0
      Internet Timer 4.5 by DBC
      Error Smart 2.7
      Tomasello WinCron v1.3
      No Problem Cyber Servidor 3.9.11
      StatWin Enterprise 7.0.0 Beta patch tds
      Jedi.Knight.-.Dark.Forces.2.All.Access.Cheat.(Dark.Force TRAINER-FFF
      IBN MOV Converter v2.0.1
      IMI GAL Exporter v1.5.1
      Nesox Email Marketer Business v1.53
      Ultranium2 v3.1 Plus 1 Trainer by UnderPl
      Color7 Video Converter Premier v8.0.5.20
      Analysis Lotto v1.5
      TwinPlayer v4.03 French
      Oxygen Straight Mailer v1.1
      ISS BlackICE Server Protection v3.6 cqs
      SMIRK v1.0 PLUG for 3DSMAX v2.*
      1STEIN CodedColor PhotoStudio Pro v5.8.0.1
      EasyAccounting Pro v4.0
      IntroCreator v1.14.00 German by ViRiLiTY
      Phoneman v1.3.1 PalmOS
      ArchiCrypt Rescue-Master 2008 v1.0.2.1263 Bilingual
      Executable File Icons Changer v4.1
      Battlegrounds UNLOCKER
      Category v1.3
      FunSMS v5.0 for PalmOS German
      NetConceal Anonymizer v3.0.035.02
      Actual Window Rollup v3.7
      The Icon Database v1.1
      Alive MP3 WAV Converter Standard v1.3.3.8
      3D MP3 Sound Recorder v3.6.6.4 Cracked by FFI
      Codex 2.0
      MultiMailer 2003 v2.0.22
      EScan Pro 2006h v8.0
      Robert Perk OneView 6.2.162
      Search Engine Composer v5.7 Build2-NoPE
      Handy entertainment gourdville screensaver 1.0 pocket pc
      Happy
      Bitvise WinSSHD v4.06a
      WinASO Registry Optimizer v3.0.9 -Hack ThE PaRaDiSe
      Awakening v1.0
      KiddyWeb Family Edition 1.4.0.7
      Alias StudioTools 12.0 (2 cds)
      MobileIntTech Turbo MSN v1.20 S90 SymbianOS7-SyMPDA
      DJ Audio Editor v3.1
      IOpus Internet Macros v4.x
      Zealot WMV to VCD SVCD DVD Converter v1.3
      ACD Video Magic v1.0 (SR-1)
      GFi LANguard Network Security Scanner v3.3 Datecode 20040204
      WinISO 3.5
      Hamrick VueScan Pro v8.5.05-CRD
      Ai Roboform v6.1.9
      DLL Show v4.9
      Elecard Converter Studio AVC HD Edition v3.1.90410 HAPPY EASTER-TE
      Space Exploration 3D Screensaver 1.0
      EDI File Edit v1.0
      Approach v2.15
      Apress Applied ADO dot NET Building Data Driven Solutions
      FastSend v2.00.0006
      Mindsoft Secure Pack XP 2
    FairStars Audio Converter v1.32
      Who's Web v1.12 Keyfile by TCA
      L3 Currency Convertor v5.0 N9300 N9500 SymbianOS7
      File Squad v2.0 AT4RE
      Advanced.Outlook.Express.DBX.Recovery.1.2 SERIAL-FFF
      Visi Font Gold v1.1
      Print2PDF.Server.Edition.7.0.07.0803-Keygen CiM
      WinRescue 98 v5.08.09
      DivXSubtitle Displayer v4.52
      Witzeerz300hler v1.0.2
      Alpha Dinero v3.0.2
      Epina Software Labs SDLSuite MathPack v7.2 for Delphi 4 5 6 7
      XClipboard v1.1 by BLiZZARD
      Audio Edit v3.21
      Alteros Viewer v2.0
      Game Chest v2.0
      GP-v6.06.12
      ZIP erfect v1.0
      Ancient Tri-Jong v1.0
      PrepLogic CompTIA 220-222 Practice Exams P.E. v2.4.89
      Virtual CD v9.2
      NeoMonitor v2.0
      Hunting.Unlimited.2011.Full
      IDA Pro 3.70
      Digital ObjectRescue Professional v1.5.70 Cracked by EXPLOSiON
      Blu-ray to DVD II Pro.v2.60.CRACKED
      ScreenViewer v1.8.1
      AdWare SpyWare Removal v2.0
      Morningstar EnCorr Portfolio Strategist v9.4 build 535
      Area Mapper v1.0
      FileName Extractor v2.10
      Web Album Creator v2.77 by PC
      Byteswired Registry Editor WM5 v1.0 XSCALE WM5PDA
      Liatro Button Maker 1.1 by TSRH
      Iolo Macro Magic 4.1t Personal by TSRh
      ActiveState Perl Dev Kit Pro v8.2.1.292072.for.Windows.x64
      Advanced Query Tool v3.4.3 by ECLiPSE
      Aone Ultra Video Splitter v4.1.0
      Stay Connected! v3.0 Keygen
      GraphicsGale v1.0.8 by Core
      Windows XP Professional Dell OEM Build 2600
      Paragon_Last_Minute_Gebot_v2.11
      Quick v2.1.029 CRACKED by LUCiD
      30 Wildlife Scenes v5.00
      Allok Video to FLV Converter v4.2.0608-NeoX
      SysGuard v1.4
      Adobe Flash Media Server 3.0.0.r1157
      Kid-Lock 2000 v1.0
      Screen Sucker v2.0
      Hotel 2.0N v2.71.36 German
      WinSoftMagic Photo Editor 2010 v8.1.94
      MSC Patran 2005 for Linux (1 cd)
      DataDirect Stylus Studio 2008 XML Enterprise Suite v9.2.1147b
      TestAuthor v1.3 by LiFEWORK
      TCP Spy v2.12 by Metroid
      Jeroboam v5.15
      CyberSpace HQ AddSoft v2.27
      Campaign Eckmuhl v1.08a Plus 3 Trainer by PWZ
      FinePrint Enterprise v4.46 by CORE
      FTPRush v1.0.0573 UNICODE
      Super Flexible File Synchronizer v2.61e 435 German
      Realtors Assistant v3.0 for PalmOS
      ArcView v3.2
      Amor Video Converter 2.4.2 AT4RE
      MPEGJoiner v1.0
      Global Mapper v11.02.DC032510.x86.Regged
      Xilisoft Video Converter v2.1.25.213b
      BackToZip_2.20
      CoffeeCup HTML Editor v8.2
      PhotoLinker v2.2.3.MacOSX
      Molsoft ICM BrowserPro v3.4-9a
      ExcoSoft XML Client 4.0.0.3 by MidNight
      ACDSee.Photo.Manager.2009.11.0.85 KEYGEN-FFF
      Vpop v1.41
      Surpreme Snowboarding
      Aglare FLV to MP4 WMV iPod 3GP AVI Zune ConverterAglare FLV to MP4 WMV iPod 3GP AVI Zune Converter 4
      Absolute Memory v1.1
      4.Team.Corporation.ShareCalendar.2.9.8.(Build.428.keygen-SND
      Easy Music CD Burner v3.0.13
      SWF2FLA Flash Decompiler v3.0.0
    SwitchSync v3.0
      Smart Business Plan v8.0
      TiBR Pro v1.37 PalmOS
      Print 2003 v4.0
      F-Prot Antivirus for Windows v3.11a Patch by TNT
      CASE Studio v2.22.1.335
      Lingvosoft Dictionary 2006 English to German v3.1.41
      EZ MP3 Creator v1.2.0 Build 150
      Acoustica CD-DVD Label Maker v2.16
      Associate v1.3 by WKT!
      DartPro 32 v1.30
      Absolute Video Splitter Joiner v1.8.6 KeyGen AT4RE
      CD Bremse v1.04 German
      Coyote FlatControls v1.3
      DM Genie v2.03.223
      River Past Talkative v5.1.0.61114
      Internet Designer Pro v1.96 by TNO
      Anonymity 4 Proxy 1.5
      Japanese Crossword Editor v1.14.4.2
      ActiveState Perl Dev Kit Pro v8.1.0.291424 for Solaris x86
      Gdata Davideo LegalCopy v1.0.0.1 German
      Excosoft Excoconf R5F Linux
      IDpack Lite v7.0.24 by PH
      Better File Rename v4.8
      Armor Tools
      The Bat! v2.10.01 by OriGinal fox
      Suma Games Miss PacFish v2.6
      SlideShow Pro v3.0 -Hack ThE PaRaDiSe
      LinkAssistant SEO Tool v2.5.9
      Jiraishin Volume05 Chapter04
      CloneDVD2 v2.8.4.1
      AAAnalyzer v1.61.110
      Trell Komplet v17.77 CZECH
      DiskPatrol 1.2E
      QK SMTP Server v1.06 by UCF
      32bit Service Monitor v9.80.01
      Desktop Disguise 1.0 by DBC
      GerbView 4.23
      Spider Girl Vol 1 No 90 Nov 2005 Comic
      CFA Installer 2000 v3.50
      NETGATE Spy Emergency 2008 v5.0.305.0
      CD Menu Creator v1.01 by dF
      ClickPic v1.7.0.2 patch-tRUE
      Arial cd ripper 1.3.85 serial by tsrh
      Schach3D v3.02 German
      Label Magic v2.0 by LasH
      3D Aqua Clock
      Medal.of.Honor.Pacific.Assault.cheats.enabler TRAINER-FFF
      Sensiva Commander v1.0.5
      MosASCII Beta 5 (1.0.145
      Tigerdata Videoverwaltung v3.0 German
      RaidenMAILD v1.9.0.5.2 Normal Version
      ABIX v6.68.00 Bilanguage
      FullShot v8.03 Enterprise
      MakeInst v6.2.0.0
      Future Decks Pro 1.1.0
      Bubble Strike 2001 v4.10
      XTG Data Modeller v2.2.9
      Photo Stamp Remover v1.0-MAZE
      Magic Mirror v2.0
      Personal.PC.Spy.1.8.patch-SND
      Merlin Open Systems Date Stamp v2.18 for Adobe Acrobat
      DIMSOLN MAT 3D V3.8.8
      AnyDVD v1.6.3.7
      2M Tetrix Collection v2.4a by NiTROUS
      Dictionaries Collection for Socrat Internet Standard v2.1
      SmartCode VNC Manager Enterprise v2.5.80.3 NET
      Borland Products
      CamGrab-2Plus v1.01
      GameSpy Arcade v1.3c by Twisted EndZ
      MP3 Viewer 1.7.2.1
      StartupStar v2.0a
      Active.Webcam.10.x.GENERIC KEYGEN+PATCH-FFF
      Dschungel Puzzle v1.0 German All Access Cheat-RAiN
      FairStars Audio Converter v1.32
      Gamehouse Magic Farm + Dam Beavers by Jonezcracker
      AutoRun Assistant v2.9 by Enfusia
      Alkonost MaxFormat v2.41 by MR2K
      Access Manager v6.0 Keygen by Tport
      LoanExpert Plus v3.1.1
      SmileOnMyMac PDFpenPro v4.7.MacOSX
      Epocware Handy Alarm v1.04 N70 N72 N90 SymbianOS8pda
      GConvert Pro 3.5
      Willing Webcam v3.5.20060829
      FORMZ RADIOZITY V5.0
      Nidesoft DVD to MP4 Converter v3.1.12
      F-SECURE ANTI VIRUS V5.50.10260 FOR SERVERS by DWP
      PcBoost v3.10.3.2005
      CopyRator 1.4 by EMBRACE
      EChart Xplorer v6.1 by ORiON
      Sax and Dottys Show Hoster v2.1.31
      Ultra Calendar Reminder v2.4.185 by EXPLOSiON
      Dizzy Just Another Yamb 2.1
      ImTOO PSP Video Converter v5.1.26.1218
      Jazz 3.2d
      Webmaster v6.02.1.German.Regged
      Clip Plus v3.3
      ApeRipper.3.8.6.cracked-SND
      GatherBird Copy Large Files v2.1
      FullShot 8.51 Enterprise CRK by FFF
      RapidWeaver v4.4.2 MacOSX
      Yavsoft Alive! Icons v1.4 and v1.5
      Bookmark Convertor v2.7
      CalorieKing Nutrition and Exercise Manager v4.1.0-TE
      Zip Express v2.3.0.1
      Trombinoscope v1 French
      BirdsWild 1.0
      IPWorks V4 SSl Version
      HGS-Grand-Prix 4.2
      OnlineMonitor v3.05 German
      Pattern Wizard v1.26
      ConceptDraw MINDMAP 5.0.2.0
      AmericanShareware MP3 WAV Converter v4.13-MAZE
      Windows 2000 German Update
      Amazin' SPISPOPD v1.4 by Anthrax
      Mini Recorder 2004 v2.1
      EVEREST Ultimate Edition v5.00.1650
      CFA Installer v3.71 by diGERATi
      Realtime Soft UltraMon v3.0.4
      MPS HTMLGate Premium v9.0 by TNT
      WinDVR All Versions
      Aleo.3D.Flash.Slideshow.Creator.v1.2.Keygen+Patch-RED
      Barefoot IPMonitor v3.0
      JumpOver v4.00.0354 by Orion
      BigSpeed Zipper v3.3
      Koala Film Player v2.5
      Binary Vortex v3.1 by DIGERATI
      Omniquad Mailwall Enterprise v1.3b
      Hot Potatoes v5.5.0.20 by ORiON
      JBatch It! v3.27
      Adobe Captivate v2.0.0.5153 FRENCH
      Tridonis_NoCD_Patch-TNT
      Access Administrator v2.2 by TSRH
      UpdateEXPERT v5.0.5050 Fixed
      WinRAR.3.90.beta.2.Universal.Patch-PGteam
      Chilkat C Plus Plus Libraries for Windows Mobile Pocket PC SmartPhone WinCE v9.0.3
      ImTOO Video to Audio Converter v3.1.6.0519b-CGM
      Ipswitch WhatsUp Professional 2006 Premium v10.0.0.14
      Vexira Antivirus XP 2K NT Professional v2.10.00.05
      PDF2TXT v2.8
      E-Motional Greeting Card Creator v1.01 Serial
      ISkysoft iPod Movie Converter 2.1.0.71 + Serial - Bidjan.
      Hex Mines v4.01 Working
      Live Image v1.29d
      Borland JDataStore 7.02 for Linux
      GetRight v4.5 Final by TNT
      Access to ASP form (ATAF) v4.0.0
      MetaPaint v1.0
      BlubberPatrol v2.0 by DiSTiNCT
      3DField v1.xx
      Back4WinXP v4.7.0.0
      Total Recorder Developer Edition v3.4.0.1
      CloneDVD v3.5.4.0
      App Killer v1.0.1
      Ramcal 1.34
      MD5 CrackFAST 1.00
      Xilisoft MP4 Converter 3.1.8.0720b
      AdPurger v1.00b Fixed
      Ams enterprise 2 79 patch by bokiv
      Magic.Camera.v5.5.0.Cracked-RED
      Picture Exhibitor v6.31 by ECLiPSE
      Download Assistant v1.1
      Adwords Keywords v2.00.061201
      Aigo DVD to BlackBerry Converter V2 x x Registry AT4RE
      Internet Notepad V v1.04
      HyperSnap DX v3.xx
      ChessRally v2.45 by M@GiStR
      NemaTalker v1.9.3
      EFT-Server 1.00
      NanoDVR bld1203
      IconDeveloper Professional v1.0
      SwitchSync v3.0
      Adobe Acrobat 7.0 Pro FR DE EN TryOut-Patch CiM
      SweetScape.010.Editor.3.0.00.keygen-SND
      Alawar Xeno Assault v1.2 PLUS 5 TRAiNER
      Ams Vente v1.0 French crack
      Magical File Encrypt V1.1 PATCH AT4RE
      Wallace and Gromits Grand Adventures Episode 1 Fright of the Bumblebees v1.0.0.15 INTERNAL-TE
      Photo.Slide.Show.Album.Application.1.8 REGFILE-FFF
      Anime Studio Pro v7.0.20100604.Multilingual
      PopUp Destroyer v1.12.36
      NetBus Pro 2.10 Serial by FHCF
      Farpoint Input Pro v3.00.30 by JiOO
      AnyWhere 2000 v4.1
      AbcMonitor 1.7.0
      AliasMenu v3.0.2 MacOSX
      Incadia v1.03 Cracked-F4CG
      No-RSI v1.0.9
      Witcobber Super Video Joiner v3.7
      Anawave Web Snake v1.0b3
      Advanced Query Tool v3.0 by ORiON
      Plato DVD To Pocket PC Converter v6.70
      DBSolo v1.3.2 Mac
      Security Box (Classic) v2.6f
      Advanced MP3 Catalog Pro v1.21 Keygen
      Wtools32 v1.6.18.184 uc
      3DELIGHT V8.01 AND 3DELIGHT FOR MAYA V4.0 WIN32
      Flash Capture v1.02 by ORiON
      PDF Editor v2.2
      Pocket Football v2.0.04 Smartphone Regged by aSxPDA
      Ems.ms.sql.manager.1.8.5.2.cracked-tsrh
      Advanced Desktop Shield v1.9 keygen.exe
      EXurbi eDoc Studio v1.6.1 MacOSX
      DubIt v2.0.6
      PopUp Ad SmasheR v4.1.14
      Sawmill v6.5.7 by NiTROUS
      Avengers Vol 3 No 84 Aug 2004 Comic
      Speak Aloud v2.0.2009.1001BER
      MiniCom 3.4
      Developer Express NET WindowsForms Component Collection with Source Code v2.0.6 for Visual Studio 2002.2003
      My Math Quiz Sheets 2.2 by ORiON
      Borland JDataStore v6.06 Linux
      StartEd v4.0-v4.x
      Accessory Software Data Quik v6
      Invisible Keylogger v1.3 by MP2K
      SureType v2.5.1036
      Advanced Warp Screen Saver 2.0-s0m
      CryptUp v2.34.143
      Excel Extract Images From Multiple Workbooks v7.0.Cracked
      CoDeveloper Universal v2.10 E 3
      Gogo DVD To PSP Converter v1.2UCT
      Norton Antivirus Pro 2003 V 9.05.15
      Transcender SQL AdminCert 2000 v8.0 datecode 20040611
      Personal antispy 1.20 by rev
      Graph Paper Printer v5.3.1.4
      Ems sql manager 2005 for interbase firebird 4.1.0.2
      Install From The Web 2.01
      DivXMovieTool v1.0 by c0nspiracy
      Easy CD Creator v5.0 Platinum Serial 3
      D I E XFEMily v6.5 Datecode 08172004 German by Substance
      Dino Crisis 2 Trainer Plus 10
      VueScan v8.0.6 MAC OS X
      Nidesoft Video Converter v2.0.56
      Absolute startup 4.1
      ScrabbleScam v1.3 by LUCiD
      Spyware Nuker 2005 v3.2.11.2
      Ultralingua English Dictionary of Definitions v5.0.7 PalmOS-PDA
      Ultra Assault Plus 2 Trainer by STN
      Evidence Destructor v2.2UCT
      PrimaSoft Check In Out Organizer Pro v2.1
      CONCEITED CLIPS v1.3.MACOSX
      Online Hold em Inspector v2.08
      Fizzy Fractals v1.1
      Dreamweaver UltraDev v1.0 by TMG
      Altova DatabaseSpy Enterprise 2010 v12.0 German-MESMERiZE
      IP*Works! v6.0.2008 Secure SNMP .NET
      SourcePublisher for C Plus Plus v1.4.386 IRIX
      Microsoft Windows 7 Working Activator For OEM-BiEBiE
      NETMUHTAR v4.06d
      Accessory Software Picture Organizer v4.0
      MedSites HospitalGate Advanced v1.0.0.4
      Photage 1.86
      DFIncBackup v1.13
      The JMaker Aipcylinder v2001.12.12 by Orion
      A1 DVD Audio Ripper v1.1.24 by CAFE
      Advanced Registry Cleaner v2.11.0.1.23 Keygen - ENFUSiA
      Magic File Renamer v6.12 Professional Edition
      AI Roboform v6.6.2 UPDATE
      Prentice Hall GO with Microsoft Office 2003 Brief 2nd Edition Jan 2006
      Ibm Rational Suite 2003.06.00 Multilanguage (3 cds)
      MP4 To AVI Converter v7.0-YPOGEiOS
      EasyPDF v2.03
      MessengerLog3 v3.20 by CHiCNCREAM
      My Cafe Cup Platinum v2.00 Build 1992
      FadeToBlack v2.3.3
      Pocket TV Browser v1.77 ALL PPC
      Nero Burning ROM v6.0 by CORE
      Zealot All Video Converter v1.5.5 by DIGERATI
      FTP Voyager v8.0.0.3 by Urgup
      I-Sound.WMA.MP3.Recorder.Professional.6.9.2.patch-SND
      Screen Saver Disabler v3.0
      Transcender MigrateCert 2000 v5.0 DateCode 20040611 by RBS
      Plato Video To iPod Converter 2.15
      GATHER TALK v1.6
      D S Technicals Test Pro 70_98
      E-SangIn Standard LAN BarCode SERVER v19.8 Korean by DUNHiLL
      GeoDelft MFoundation 5.1.2.12
      ArtGem v1.2 by UCC
      ImTOO DVD Ripper Platinum 4.0.48.0430
      All WebGenie software - II
      Menu.Creator.5.04 SERIAL-FFF
      Pack Plus v2.01
      Lingvosoft Phrasebook 2006 English To Hungarian v2.1.37
      Office Password Recovery Wizard v1.0 Cracked exe by [TLG]Mysterio
      Photo2DVD Studio 3
      CleanCenter v1.33.94 Regged READ NFO-F4CG
      Folder.Vault.v2.0.24.cracked-tRUE
      Mail ListKing for MS Outlook v2.21
      SolSuite v5.5
      River Past Audio CD Ripper v5.5.0.50717
      Ultralingua Latin English Dictionary v6.0.7
      SafeLock v0.99
      Monitor Control 2.06 AT4RE
      Disc-At-Once v2.4C
      Global Tracks 2003 v6.11
      1.Click.&.Lock.3.1.keygen.by.FOFF
      All Converter v5.0.4
      Barrcode bcx 3.11 professional
      Zip Express v2.4.3.1
      Vital Security for Web v7.0.SP3.8.606
      Amazing Slow Downer 2.8.5
      Startup Manager Platinum 2004 1.0.7-Patch
      Winguard pro 2004 premium edition 5.6.68 serial by tsrh
      Iedoctor 3.6 build 1.391 cracked by rev
      Tolvan Data Sirp v0.0.0.8
      Rs232.monitor.v1.1.Patch.AT4RE
      GameHouse SCRABBLE Journey Serial by BalCrNepal
      Win2PDF TSE v2.42 WinNT2K by TMG
      Infinite Dreams SkyForce Reloaded 176x208 v1.00 Chinese S60v3 SymbianOS9.1-BiNPDA
      HexDecCharEditor vers 1.02
      WinXMedia AVI/MPEG iPod Converter v2.1
      SSH Tectia Client v4.3.2.12 Update Only
      Understand for Ada v1.4.333
      Ace Utilities v4.1.0.4052-CRD
      PolyView v4.241 by ACME
      McFunSoft Audio Editor 7.4.0.12.SN-ICWT
      ExactWord v5.1.5
      CoolFocus TabStrip v1.5
      Adensoft AudioData CD Burner 2.0 by SnD
      Power 3GP iPod PSP Video Converter v9.0.4.189 Serial AT4RE
      FOREIGN LANGUAGE TUTOR v4.3
      Beyond Compare v2.0
      RLTool ProPac 2.20.3.32 Dongle Crack
      Cumberland Diary v1.60
      Shuffle 2000.1
      PTC ICEM Ddn 5.0
      Circle Track Analyzer v3.2.A.002
      ACX-MailFinder Pro v6.2.4 German Cracked by DVT
      Christmas Clock Screensaver 1.1
      ImageDupe v1.2.0.0
      HOLLYWOOD CAMERA WORK THE MASTER COURSE VOL I STATIONARY BLOCKING (1 cd)
      Magic Matching Color v1.0
      Accelerate 2K3 4.0 Patch
      LingvoSoft PhraseBook 2007 Turkish To Persian Farsi 2.2.77
      Rave eJay CD-Crack by FHCF
      TVTons v2.2
      CoffeeCup DirectFTP 6.2.0.62
      WinRescue XP v1.08.23 Regged by ACME
      Erics TelNet 98 v12.3.3422.SSH.Multilingual.Keymaker.Only
      WakefieldSoft HealthFile Plus v4.1.2
      Hockey Playbook v0.9 by TBE
      Exsate VideoExpress v1.0 by FFF
      2M Words Collection v1.2a
      FSPro 1.0.4
      Partak Ahead Studio Suite v1.20 by ORiON
      MiniTAB
      BB TestAssistant v1.3.10
      UniChimica 1.01
      Program Code Auditor 2.0 by TSRH
      Pornbot v1.05 cRACKED-BUNTSPECHTZUECHTER
      Dj MP3 Media v4.2 Beta
      BelPlan v2.0German
      MSC Dytran 2005
      Acala DVD Ripper v2.7.1
      Ace MP3 Ripper v1.2.0
      JBLab Secure Notes v2.0 Regged by CPHV
      Front Line Registry Cleaner v1.25
      TRADOS WorkSpace v5.00.0164
      Active WebCam v1.2 by Desperate
      Lawn Mower v2.0 Plus 1 Trainer
      Navigator.1.48.for.palmos.cracked.prc-tsrh
      OdinShare Odin Screen Capture v2.5
      A-one.Video.Joiner.v3.2 KEYGEN-FFF
      Peachpit Press Apple Training Series GarageBand 2 Mar 2005
      Easy MP3 Converter v1.24 (Feb-7-2002)
      Bible Hangman 3.0 Regfile
      3D MP3 Sound Recorder v3.7.8
      Amigo Easy Video Converter v4.5.3
      Qoppa jPDFWriter v2.50 MacOSX
      Clear Documents Utility v1.0
      GameCam 1.2.0.15
      Advance Dialer v 2.1
      Cinema Craft Encoder SP v2.70.02.04
      Phone Pad v3.00
      Invoice Style CS v6.5 by DIGERATI
      Tagtraum Industries BeaTunes v1.2.17
      3delight V7.0 And 3delight v3.0 For Maya Win32
      Home Plan Pro v4.6.15
      Legion Plus 4 Trainer by MYTH
      Virtual Poet v2.1.1
      Cool Focus Tree View Pro v2.96
      Raetsel-Generator v7.46.1 German
      PACT.2ndBackup.v2036B4
      Banner Bands 1.0
      SmartBomb v1.040
      ScreenTaker v3.11 Keygen
      4U WMA MP3 Converter v6.2.6 SN by VaZoNeZ
      ITripoli AdminScriptEditor Enterprise Edition .3.1.2798.23998
      Direct Audio Converter and CD Ripper v1.5.51
      Music MasterWorks 2.21 by ORiON
      EmEditor v3.31
      PhotoRap v1.0.1
      Janco DiskMonitor v3.1.0.27 by BEAN
      MultiMailer 2005 v4.0.10 Professional
      GameHouse Luxor Serial by BalCrNepal
      Spaceclaim v2009.Plus.SP1-Lz0
      SmartAuction fuer eBay v1.5.2 German by ACME
      DDS Housepartner 6.4 (1 cd)
      Hide Window Hotkey v1.2
      AccuLock.D7.AT4RE
      Folder Safe v1.10
      Fish Bowl Inventory v5.0-tDk
      GaMe Server v3.4
      Acez Software(tm) 21 Flying Images Screen Saver
      Norton Antivirus v5.0 *German*
      Tradewinds 2 tRAiNer Registered+Demo Version (IRiS tEAM
      Sony Vegas DVD Authoring with DVD Architect 3.0 (2 dvd)
      Periscope Image Browser v3.1
      Microprocessor 8085 Simulator v1.6 Keymaker - CORE
      CD Ripper Genius v1.08
      AlwaysONline 1.0 by Eclipse
      Ace ScreenSpy v5.0
      MP3Doctor v5.11.018
      GeneXus 6 Ev.1
      DiskRecovery v4.0.1231
      Ulead DVD MovieFactory v3.0 Disc Creator Componenten
      BlindWrite Suite v5.0.5.120 by FFF
      Isobuster all lang-2.4.0.1
      Hot Banners v2.0.0 Beta
      Deep Green Reversi v4.2.2 by TNO
      HiFi WMA OGG Converter v1.10 patch by Extreme Team
      Sams Sams Teach Yourself PHP MySQL and Apache All in One 2nd Edition Dec 2004
      Calendarium 2.72 Keygen by Embrace
      MD5.CrackFAST.v1.00 KEYGEN-FFF
      Chilkat DSA ActiveX v1.3.1 Crack AT4RE
      Hyena v6.2 Enterprise Edition French
      3D Yams French
      CCZip v4.0
      Advanced Defrag v4.4.0-CRD
      Cave 2 v1.0 S60 Java
      Privacy Eraser Pro v8.25.Cracked
      Enotate.1.32.cracked-tsrh
      Amazon DVD Shrinker 2.x.x
      JGO ScreenSaver 2.0 by DBC
      LingvoSoft Suite 2008 English to Croatian v2.1.28
      Phatware Pocket dbExplorer v2.0
      Plato DVD to DivX XviD Ripper v4.50-CzW
      ScreenSaver Construction Set v2.007a
      Peak InfoSystems Inventory Keeper v2.3.0
      FinePrint v5.60 Server Edition
      MovieSoft-Video To FLV Converter1.0.11.Serial.AT4RE
      DevMansions SmartLinks Framework PRO v 3.1-PWZ
      C++ Test Pro v6.5.2.1
      FreeGravity MP3 Gravity v1.4 N3650 SymbianOS6
      Anzovin The Setup Machine 2.08 For Maya 8.5 32 And 64bit
      ASE ChartDirector for ASP-COM-VB v3.1.0
      Bill Central Time And Billing 3.06
      ASE ChartDirector PHP v3.0.3 Linux
      Bit-Arts Fusion v2.0.0.0
      BuyersGuide 99 v3.2.23 Keygen - ECLiPSE
      CDRWin V3.8-E German by DBC
      Photo.Collage.Maker.v1.45.patch-tRUE
      3D MP3 Sound Recorder G2 v 4.11
      Abcc.DVD.to.AVI.MPEG.WMV.Ripper.Pro.1.0.keygen-SND
      Trellian FTP v2.02.001
      Raxco PerfectDisk v10.0.0.114.Professional
      Hot Corners 1.94 by SirCrack
      Treasures of the Ancient Cavern v1.01 PLUS 7 TRAINER
      LehrerOffice Win v2009.9.1.0
      Adobe InCopy CS5 v7.0.Multilingual
      Synapticad Allproducts v12.11b SOLARIS
      Agendus for Outlook Edition v4.03 Build 1424
      MovieSoft-Video To Apple TV Converter1.011.SERIAL.AT4RE
      Atom Time 98 v2.1b
      DLL Show v4.6 by jHT
      IRONCAD V7.0 by LND
      Beautiful Earth v3.2.6 RegCode AT4RE
      SpywareStopper v2.6 by ECN
      BrainWasher 1.2.0 by DBC
      Witcobber Super Video Converter v4.8
      Arial Audio Converter 2.3.33 by TeaM iNFLUENCE
      32bit Fax x9.79.01
      Webminer v4.0.0
      ImTOO MKV Converter v5.1.26.1218
      EasyCD v2.21 - Complete
      Zensura v3.20
      Persits AspPDF v1.4.0.3 Win9xNT by CORE
      Basic Inventory Control v5.0.114
      Backup Magic v1.6.3
      The Privacy Guard v1.3
      Rebel Raiders: Operation Nighthawk v1.03 *Unlocker*
      Beautiful Calculator v1.00 by lz0
      Blindwrite 5.2.16.154 by forteam
      HDL Companion v2.0 R2 SOLARIS
      BackupXpress Pro v2.71.21.156
      MAILING LIST MANAGEMENT WIZARD v1.1
      Kristanix Software File Renamer Turbo v2.49
      JMS Blutdruck v1.96
      StatWin Enterprise 7.0.0 Beta tds
      Planet's Orbits v1.5 Crack
      Family Tree Maker v3.02+
      Advanced Net Monitor for Classroom v2.5.2 by NiTROUS
      DIMSOLN COMBINED 3D V3.6.1 by LND
      Verifier 1.2.0 by FFF
      Zealot All Video Splitter v1.6.3
      Cleanerzoomer 3.7.0.1
      MemoriesOnTV 2.0.0
      Easy-Bay-Manager v1.1.5 Pro Edition German
      Mp3 Strip It Digital V5 by DBC
      Certified E-mail Plus v1.22 by NoiR
      DropBall v1.0 by PGC
      Funnel Web Pro 3.7
      RinjaniSoft Playlist Creator for BlackBerry Bold v2.0
      001 File Joiner & Splitter Pro v3.0
      FLStudio Producer v4.12
      DbVisualizer v7.0.5 Linux-iNViSiBLE
      Car Thief 5 Breaking Through PLUS 2 TRAiNER
      Acronis True Image 6.0 Build 339 by ROR
      32bit FTP p9.55.01 by FFF
      The Bat! v1.62 Christmas Edition
      Fox DVD Ripper Pro v7.3.0.16-CRD
      Amadis AVI WMV MPEG MOV SWF FLV Video Converter 3.7.2 Keygen AT4RE
      NAT32 Enhanced v1.7.1072
      Tauscan v1.5 by Orion
      Focus Photoeditor v4.1
      Auto Mailer v1.8
      Add-Remove Plus! 2002 v3.1.0.201
      ALAP Imageport v1.3.1 for Quark Xpress
      Popcap Big Money Deluxe v1.22
      Extractorde Paginas Amarillas 1.0
      PictureViewer .EXE v1.0.168
      Pinnacle InstantCopy v7.x Trial
      HMView v4.0
      AnzioWin v16.0m
      Streaming audio manager 3.1.8 loader by tsrh
      UB-Seller v2.07 German
      Meta Fix v2.8
      Bone Out From Boneville 1.5.1
      LingvoSoft PhraseBook 2007 Chinese Cantonese Simplified To Japanese Romaji 2.2.78
      Camfrog Video Chat Pro v3.1.13395 by DIGERATI
      CoffeeCup ImageMapper v4.0a
      Kristanix Games Solitaire Epic v1.11
      Machine hell (ra edit crack by rev
      FairStars Recorder v3.07-MAZE
      Simple Business Invoicing Inventory
      Talking Email 4.006
      AceReader Pro Deluxe Network v2.2c by ECLiPSE
      Spamicillin v3.02 FRENCH by BS
      Chinese Checkers Master 1.03
      File System Auditor v1.02
      Yet Another Tray Launcher 98 v3.31
      MMSguru MIDIscale Basic v1.9 by 0xdBass
      Dupeguru 1.0.0.1
      K-MP3 v5.1.1.35 by DBZ
      UltraCompare Professional v3.10
      Sateira CD&DVD Burner v1.5
      HexProbe v1.02
      AmigoGames AmiAmi Kart v1.0
      Anonymous Guest Professional v1.51
      WinCron v4.3.1.5
      Simple Calendar v4.0.77
      Raize Components 4.0.1 Delphi BCB Retail
      3D Studio MAX R4 Final
      Chilkat C Plus Plus Libraries for VC Plus Plus 9.0 v9.0.4
      DocuCom PDF Gold v7.85R1 Traditional CHINESE
      Whole Tomato Visual Assist X v10.3.1549.0
      E-TextEditor v1.0.25
      Banner Maker Professional v3.0.3.3
      Cinema4D XL v7.2+ Body Paint3D
      PalTrak v1.0 (01-06
      Maview v1.3.30
      Game Jackal 2.7.14.357
      NewLive RM To AVI VCD SVCD DVD MPEG Converter.Pro v1.4
      Acme CAD Converter v4.30
      Norman Virus Control 5.4
      Tiramisu 4.10 NTFS
      Konvertor v2.18
      Smart.Organizer.v2.00 french.(all.versions CRKEXE-FFF
      JoeAlter Shave And A Haircut For Maya 6.5 v4.4v18 Linux
      SelfAccounts.v1.0 1.01.SERIAL.CiB
      VJamm LowLevel 1.3
      Portable.Xara.Xtreme.Pro.v4.0.1.5601.DL
      WinASO.Registry.Optimizer.v4.06.Keygen+Patch-RED
      CDRipper v2.85
      OWL Simple Business Accounting v2.1.8
      Revision Effects Reelsmart Motion Blur Pro 3.2 For Digital Fusion 4
      DataBox v1.25 by DSi
      Shortcut Caddy v1.11 by TNO
      GameHouse Supercow Serial by BalCrNepal
      OrangaProgramaro Pocket Dayz v1.0 XScale WM2003 WM5 WM6-SyMPDA
      Armed And Dangerous NoCD
      INetFormFiller Professional v2.6
      Wma To MP3 Encoder v6.08-RHE
      F-Secure Anti-Virus 4.05
      KC Softwares AVI Toolbox v1.5.0.22
      Agogo FLV to iPhone Converter v7.21 AT4RE
      IP*Works! SSL Del Edition v6.0.1650
      Aye Shutdown v5.40 by c0nspiracy
      Combat Wings Multi2
      Hardwood Solitaire v1.55
      MAESystems Mp3 Audio Editor v6.9.6
      EasyNote v1.2
      Maplesoft Maple v12.02 UPDATE x64
      Orneta Reader Mobile v2.1.1 XScale WM2003 WM5-SyMPDA
      Alex Fergusons Player Manager 2003 v1.5.1e
      MPEGable X4 Live v2.2.7 by ORiON
      Easymodel v2.3 by RP2K
      Order Maven v1.31
      Pegtop XFader 3.02
      Internet Sweeper Pro v3.1 Keygen - UCF
      WEB WEAVER V98.02
      Yenc Power-Post A and A v11b by Unknown
      ImToo DVD Ripper SE 3.0.5 (Build 1027)
      Wearther1 v4.03 by CORE
      Torchsoft Registry Workshop v4.2.0.WORKING
      Faxmail Network v9.16.11
      Time For Fishing Screensaver v1.2.-.s0m
      Coding ftp 2008.20
      ChordWizard Music Theory v3.01
      AceReader Pro v2.0a
      Discreet 3DS Max v7.0 READ NFO by SSG
      3DField v1.96
      Automa TatPascalScripter component 1.01 Beta 2
      Database Tour Pro 5.5.3.922
      Ap PDF to Tiff Converter v4.1
      CompanionLink Professional v2.0.2650
      MesBases 2.0
      Accessory Picture Viewer Max v4.0
      Quantum Data Protection System 2.01
      Gamehouse_Puzzle_Inlay
      Basic Inventory Control v5.0.113
      Kingpin v1.13 US
      Buzzsaw CD Ripper v3.1.14
      Registry.Repair.Wizard.2008.v5.06.Cracked-tRUE
      Nokia LogoManager v1.2.40 Final
      Alchemy Mindworks GIF Construction Set Professional v3.0a9
      Pdtsoftware Resizeit v3.0
      Absolute Database Component for BCBuilder 5 v4.85 SingleUser Edition
      Advanced.Encryption.Package.Professional.2009.5.0.1.cracked-SND
      Xilisoft iPod Video Converter v2.1.59.0316b
      Edgecam v9.5
      DB Artizan 4.02
      602Pro Lan Suite 2002 Build 2002.0.02.0731
      Microsys A1 Sitemap Generator v2.3.1-Lz0
      Intorine Cowboy v2.1 PalmOS5-RACEPDA
      VNC.Scan.Enterprise.Console.2007.9.30.8 CRKEXE-FFF
      4DiskCleanGold4.0 sn
      POLLUTE v7.061
      DVD Region Free 1.31 by Viper Zx
      AMF Daily Planner And PIM v9.1.35
      AccuRev v3.8.1
      MapData v1.0.20 by CORE
      NEXT LIMIT REALFLOW V5.0 WITH RENDERKIT 2.0 WIN64
      Belltech ScreenSmart v3.0
      Caputeeze 8.08
      PCB Designer Demo
      Foto Canvas 2 v2.0.0.0008
      Hitek Software JaBack 6.24 for Mac
      CleanCenter v1.22 by DBC
      Charm Solitaire 1.04 RA editionEXE
      Empire Earth (from CD case
      Spectorsoft Spector 2.11
      Finger Server Fserv 3.03
      L8+ v4.1 build 16
      CornerChaos v1.3.4
      Resco Explorer v5.12 PocketPC
      PerformanceTest v6.0
      Break 1.22
      GrxView Pro v2.3.1 PalmOS
      IObit Security 360 1.0 Patch By Under SEH Team
      Paint Shop Pro v7.04
      Acoustica v2.1a
      Flipull v1.5 Cracked by cOnspiracy
      Graphisoft ArchiCAD v10 Hotfix 1183 build 2582
      Biuticker.v1.63
      WinRAR v3.30 Beta 4 Russian by PlainTeXT
      Transcender ISA Cert v1.0
      FlexPDE Professional 3D 5.0.21
      Skylark Utilities Encode-It v2.0.4.5 by EMBRACE
      Script Magic v1.2
      StayTuned v1.0
      E-Z Snoop v2.0.5 by TNO
      Patch.Factory.v3.1
      Alive desktop 1.0
      Home Buyer's Calculator Suite v2.1.00
      ACE-HIGHText To Speech Reader v1.30
      VisualJockey ADDON pdoom one PRO v3.50.46.09
      Microsoft Press MCSA MCSE Self-Paced Training Kit Exam 70-270 Second Edition Mar 2005
      Easy Chat Server v2.2
      ANSYS CFX v5.7.1 Linux
      Recipes Galore 2.2
      WinHex 11.26 SR 5
      TSL CADRaster v4.50
      Sonic Foundry Acid Pro v4.0B by RENEGADE
      CtrlView v3.00
      APICOMMERCE Paye 1.0
      Cart Catalogue v1.1
      DBQwikEdit Pro v2.5
      1st Mail Sender v2.6 READ NFO by CHiCNCREAM
      Cool Hand Yuke v2.4 by riPPadoGG
      Wise.Registry.Cleaner.Pro.v5.52.Build.304.TEAM-Full
      CyberLink MediaShow Espresso v5.0.0430.12419
      Clearevo Ltd Incallert v1.0 S60 SymbianOSpda
      HTML Password Lock v3.4
      Dr. Salman's Windows Power Tools v1.42
      Aigo DVD to BlackBerry Converter v2.x.x
      XChat.2.8.5e-Patch CiM
      Hamrick Software VueScan v7.5.42
      WizQuote v2.0 by FFF
      P.i.c.s. Gebührenzähler v5.20.1
      Altersrechner v4.00German
      Creatures 3 Exodus
      CD-Cover Editor 2.1
      GameHouse The Clumsys Serial by BalCrNepal
      UltraMon v3.0.8.Beta
      EJay MP3 Station v1.0 by ShmeitCorp
      Apollo DVD Copy 405
      SQLPrint for SQL Server 2000 v1.8.3
      Dee Mon Video Enhancer v1.7-NoPE
      Messenger Detect v2.76-CzW
      ImageElements Tool Suite v1.05
      Poker Indicator v1.2
      Apollo PSP Video Converter v2.6.0
      Image to PDF v2.3.0
      Fredals dictionary 4.04 crack by rev
      HotDoor CADtools v3.03 for Adobe Illustrator
      GeneMiner 2003 (2003.08.22
      Texefex v1.6 by ePSiLON
      WinMPG Video Convert v3.3 by ICI
      CreateInstall v3.5
      BOSON FOUNDRY PRACTICE TESTS V5.1
      Carbon Copy 32 4.00.53.9
      FloorEstimate Pro Standard v5.0.14.72
      WindowSurfer v1.52
      BlackBoard SoftCalc v1.0
      Amethyst ShadowFX v1.08b
      W32Dasm.v8.93.TR.TEAM-Full
      Intel CPP Compiler v8.1.025
      EMS Data Pump 2006 for Interbase Firebird v2.0.0.3
      KraiSoft Boom Voyage Crack AT4RE
      PassMark BurnInTest Pro v2.3 build 1006 by Eminence
      SageTV & SageTV Client 4.0
      PDAmill The Corsair v1.1.2 XScale WM2003 WM5-SyMPDA
      Mobifish v1.06 palm
      DUTCHS HTML EDITOR v2.1
      Emagic SoundDiver 3.0.5.4
      32bit convert it c9.60.18
      Hyperionics FileBox eXtender v1.90.06
      Adobe Photoshop CS by Great Elmo
      Ability.ftp.server.1.11.professional.edition.cracked-tsrh
      Lines 98 v6.0
      Active Desktop Calendar 5.9 Build Build 051110
      TAGG v1.5.1
      ChordBuddy 1.2 for PalmOS
      Tangentbordsträning v1.00
      Kristanix Pop The Marbles v1.02 GAME
      IMedia MultiStream v1.0.0.21 by PARADOX
      TicTacTotal v2.0
      Xilisoft 3GP Video Converter v2.1.55.1008b
      Wondershare Media Converter (Build 1.1.0.0 + Serial - Bidjan
      Alchemy Lab Asset Tracker for Networks v7.1.11
      MiniPortal v1.3.3 by ECLiPSE
      Oriens.JPEG2000.Pro.1.3.160.keygen-SND
      WinSPuzzle v1.0.3
      Real Time Cookie Cleaner v2.0
      Hot Potatoes v6.2.1.2
      Create Install 2003 v2.0
      Mp3 Blaster32.2000 v1.40.46
      Nexus Mainframe Terminal v5.16 by iNFECTED
      Star Trek The Original Series Book 009 Triangle 1990
      Sony Cinescore v1.0c
      A dailyBackup v3.6.1
      NotePro v1.04 by iNTENSiON
      Qivx Amaze Meditation Mazes v1.30
      Opposing Force v1.0.0.1
      Win ClipBoard Monitor (WinCBM v2.0.560
      ACDSee v6.0 Standard all languages
      Wartungsplaner v5.00.572 Enterprise Edition Bilingual
      VideoSaver v3.1 Keymaker - BLiZZARD
      UltraISO v7.6.0.1081
      PSPWare v2.1.5
      Winnow Cleaner v3.4.0.0 by tPORt
      Microangelo On Display v5.56
      IE Password Recovery v1.0
      ColorPal v1.0 by Enfusia
      SOCGDS V5.4.1
      Hyper Alarm v3.0 Deluxe
      AMSES Frame2D 1.56
      Installer2go v2.2.4
      System Mechanic Professional and Standard v4.0a
      Apollo mpeg to dvd burner.2.3.0 keygen-tsrh
      Page O'Labels for Mailing Labels v2.8b by EVC
      JS8 Media AudioRefurb v2.95 MacOSX
      Parasoft C Plus Plus Test v6.0.1.4
      Zoom Player WMV Professional v4 Final Multilanguage
      Zplane Developement Vielklang VSTi v1.0.2.27
      Numerology.Explorer.2.0.cracked-SND
      ComponentOne Studio for ASP NET 2007 v1 5 for DotNET Framework 1.5.for
      PowerZip v7.0.3.3865 Cracked by iNFECTED
      Ie BlackBox v2.1
      RadioSpy v100126
      Feed The Snake v1 1 Plus 3 Trainer-SEiZE
      CIMCO Software Suite 5.50.08
      DWG2PDF
      Hide IP NG 1.20 Patch By Under SEH Team
      JOC Press Release v2.10
      Macromedia Director MX v9.0 by Bidjan
      GEOM v1.0 Unlocker
      Protect Spywall v1.0a Portuguese
      AstroMart v6.2 by FHCF
      Delenda v2.4g
      Planet Jupiter 3D Screensaver v1.0 by s0m
      Aglare Video to iPhone Converter v6.9
      XLineSoft PHPRunner v5.0.437
      PowerZip v7.05.3879 CRACKED by LUCiD
      Focus MP3 Recorder v3.1 by TBE
      Apollo iPod Video Converter 2.5.0
      Siemens Mobile Control v2.1.5
      Efficient Software To-Do List Pro v1.63
      Pocketradiologist 2.0 build 300 for ppc read nfo cracked by tsrh
      SilverStream Application Server Enterprise Edition 3.52
      Linktivity Presenter v1.0 Cracked READ NFO by iNFECTED
      Adobe Premiere Pro v7.0 Fixed
      IArchiver.v1.7.2.MacOSX
      Transcender EnterpriseFlash Design 2003 v6.0 DateCode 20040616S
      Webcam Zone Trigger Pro v2.2
      LingvoSoft Dictionary 2006 Turkish to Russian v3.1.29
      Amic Utilities Video Converter v2.0-WRATH
      Agogo.DVD.to.Zune.Ripper.7.85.keygen-SND
      Aplus DVD to PSP Ripper 8.0
      Pic-a-pix puzzle world 4.72
      DigiCel FlipBook 1.07 by HaMMerHeAD
      Clip.Plus.4.2.cracked-SND
      Handbook of Science Communication
      Measures Unit Conversions v5.1.0 by iNTENSiON
      ZIP Password Recovery Magic v6.1.1.84
      ISS BlackICE Server Protection v3.6 cot2K3
      Advanced Host Monitor v4.30
      MailChecker32 v3.7.0.2
      Tipard Video Converter v4.0.6-QUANTiZE
      Understand for C Plus Plus v1.4.410 LINUX
      Shine 3GP Video Converter 2.00 - Bidjan
      HotFax Message Center v2.0/32
      SpecX v1.10 ZX Spectrum 48 Emulator
      Paminnaren v4.3.0.69 Swedish
      JongPuzzle v2.2
      HGS-Sammeln v3.8 Serial by EViDENCE
      TZ-EasyBuch 2.0.01
      PowerArchiver 2010 v11.61-DOA
      CalliGrapher v7.4 WinMobile 2003 ARM
      Winamp v5.04 by Revenge
      Okoker CD and DVD Burner v2.2
      RinjaniSoft Presto Transfer Firefox v1.6
      River Past Talkative v4.8.1.51206
      Reel Deal Casino High Roller Datecode 20061017 Plus2 Trainer
      INUS Technology RapidForm v2006
      Cigraph ArchiSketchy v1.97 For Archicad 11
      JavaScript Scrambler v1.11 by ECLiPSE
      Super Email Sender 2.84
      DMC_MusicConverter r9 dMC-PowerPack
      Zend Studio Enterprise Edition v5.5.1.281 MACOSX-ART
      Sky Aces Western Front v1.00 by ECLiPSE
      CyberLink PowerDirector 7 v7.00.1829 Trial Remover Crack (Rapidshare link -ICWT
      Aiseesoft DVD to iPhone Converter 3.x.x Patch AT4RE
      Frend v4.7.2 *GERMAN*
      Horizon Web Text v3.8.1.2
      Almeza MultiSet v5.1.320
      Sticky.Password.v3.4 CRKEXE-FFF
      Super Ram 3.5.1.2003
      Anthemion Software DialogBlocks v4.37 FREEBSD7 X64
      CAR MATE Expert System v1.0
      Easy Rechnung v3.42.0.8 German by ACME
      Analysis Knowledge InfoBank v26.3
      Fu Buddy v1.0 Cracked by DVT
      Map Edit v2.63 by TNT
      Poker Indicator v1.7.5-CRD
      Interpex IXRefraX v1.07
      Ansys AI-Nastran 1.0
      TurboFTP v4.50 build 423
      DevMansions SmartLinks Framework PRO v 3.1 Cracked by PWZ
      Systerac XP Tools v4.0
      Kellyware KCam 4.0.36
      Flax v1.04 by iNTUTiLS
      Speed Video to Audio Converter v1.0.3.92
      Dr Salman s Windows Powertools v2.85
      GetRight v4.3 Patch by UCC
      AbleFtp v6.11 by ECLiPSE
      Diskeeper Workstation US v7.0.430.0t
      PrinterExpress v1.1 by RP2K
      Arial Audio Converter v2.1.4
      HTML LZW Pro v2.5
      Arclab Watermark Studio v1.0
      Adressen v2000.10.1138 German
      Save Flash v2.4.40
      Actual Window Rollup v2.0
      Loto1N2 v3.50
      Half-Life 2 Offline Installer
      A Next Vol 1 No 1 Oct 1998 Comic
      Black Swan v4.0
      Absolutely On Line v2.5.34
      MP3 Workshop v1.93 by HERiTAGE
      Rogue Trooper trainer by Metroid
      Web Site Zapper v8.0.0
      Cachemanxp
      Rich Mailer v2.0
      Ballshooter Krakout v1.2 ARM PPCPDA
      Dialog Builder for Delphi v1.24 (build 1.2.4.40
      32bit Fax v9.89.01
      FSPro (File System cryptographic PROtector) v1.1.2
      Kfz-Kosten V1.5.5 by LaTeX
      Xilisoft AVI MPEG Converter 2.1.41.329b Regged-XMA0D
      Golden Eye v4.11
      PDFTron CosEdit v3.1
      Cheetah cd burner 3.13 serial by tsrh
      Internet Audio Mix v1.47 by iNTENSiON
      Lotto Whiz 2000 pro v2.5.4.1
      GoldView32 v2.1
      Computers and Security Volume 24 Issue 8 November 2005
      4 Card Keno 2.0
      AnyLogic Professional v6.4.1
      AVI to DVD VCD SVCD MPEG Converter Pro v3.0.2
      Easy File Protector v1.42
      PICOZIP V2.01 by CORE
      Jade Property Suite v2.8.0.417
      Daily Planner Journal v4.0
      Space Invaders 1.00
      Rome Total War Alexander Plus 2 Trainer
      Paq File Share eFileGo 4 0 Cracked exe-TLG
      S-Spline 2.1
      VideoInspector v1.6.1.87
      Air Messenger Pro v5.1
      AddFlow 4 ActiveX Control v4.2.0.26
      Internet ScreenSaver Builder v4.56.030723
      FontSuit v2.0
      Xilisoft DVD Audio Ripper v2.0.59.0113
      AusLogics Visual Styler v3.0.6.115
      XoftSpy v4.16.125
      D16 Group Decimort VST v1.0
      EasyBoot v5.0.3.426
      ScenalyzerLIVE v4.0 Cracked by DVT
      Llamagraphics Life Balance v3.4 PalmOS
      Microsoft Office Genuine Advantage Validation v1.6.28.0-ETH0
      Speed Video Splitter v2.3.1
      Advanced Installer Enterprise 6.2 Under SEH Team
      GRAITEC OMD V12.1H SPANISH-Lz0
      BlueLabelSoft PDF Convert v1.0 REGGED by QUARTEX
      ALTomSoft AmiPic Sharemaster v10.01
      Ace Utilities v2.4
      FinePrint pdfFactory v1.0 BETA3
      Proteus v5.20.07 Lite by Tolyan
      National Lampoon s University Tycoon Rip
      CivilCAD 2004 v1.2
      GowerPoint uBook v0.9b MIPS HPC
      Scratchboard.8.0.patch-SND
      Abhidefolder
      Sadman Fives and Threes v2.2 GAME
      Liberty BASIC v4.01
      ActiveFax Server v4.14.0216 FRENCH-CRD
      Emailchemy v1.7.1 MacOSX
      XMLwriter v1.00
      Boxikon v1.7.2
      HGS CD Archiv v7.0 German Regged by ViRiLiTY
      Advanced VBA Password Recovery 1.1
      Acala DVD Ripper 2.7.9
      AnimatedScreen Animated SnowFlakes Screensaver v2.7 Patch - s0m
      InstantServers ISMail EP v3.3.886
      The Cleaner v3.2 build 3213 by TEX
      AllVideoSlitter v1.08
      Copy Backup [ver v1.1]
      Corel Paint Shop Pro Photo XI 11.11- Bidjan
      SpectraView II 1.0.41
      FontShow 2000 v3.1
      Easy2Sync for Files v1.07.Business Edition
      Olympic Organizer Deluxe 1.5 Serial
      Equity Evaluator v5.2.0
      Sharp World Clock v4.39-CRD
      AnyScreenToVideo v1.0 FRENCH by NGEN
      Scia Nexis v3.40.13
      Altdo DVD to AVI MPEG MP4 MOV Ripper v1.0
      MediaLion DVD Ripper Pro v3.2.9
      Ultralingua v4.3.7
      Army Rangers Mogadishu PLUS 3 Trainer
      Grand.Master.Chess.Tournament.v1.6.6.0 CRKEXE-FFF
      Macromedia Freehand v8.0
      UI-View32 v1.80 by FCB
      PrimaSoft Car Organizer Deluxe 1.7 Serial by DBC
      Desktop Destroyer Screen Saver v1.4
      Adresar A plus T v3.7 by NaNeT
      II WorkLog4All v4.50
      MP3Producer v2.37
      E-Campaign Professional Edition v2.94.4 by Nitrous
      SBMAV Disk Cleaner 2009 v3.38 Bilingual-DJiNN
      Adensoft Audio Data CD Burner v2.64 by UCF
      WinRAR 3.20 beta 1 AV Working Multilanguage by DraCooLa
      Critiques of Research in the Social Sciences
      ShadowScan v2.07 by TNT
      Sambar.Server.Enterprise.6.3 CRK-FFF
      Materialise SurgiCase Planner v3.0
      KiddyWeb Family Edition 1.4.0.7
      WinMend Registry Cleaner v1.5.5
      2 Thumbs Up! 1.0
      Helium Music Manager 2005 build 4501
      PCFolio v5.2.4 Keygen - EPS
      GifOutils.v1.3.0.Full.Regged-RamdaM
      Easiest Utils DVD Ripper v3.5.6.1
      DeskSoft HardCopy Pro 3.0.4.and
      Recolored.1.0.0 READNFO CRKEXE-FFF
      AnyDVD v6.0.8.2 Multilanguage
      HellFIRE Screen Saver v2.5
      Xilisoft MP4 Converter v3.1.6.0519b
      HTML Builder XP v2.5
      Magic Photo Editor3.78
      FlashGet v1.3 ger
      DDClip Pro v3.01
      Allallsoft Google Maps Terrain Downloader v6.18.Keygen.Only-Lz0
      Absolute Sound Recorder v3.24
      CyberLink Power2Go Deluxe v6.0
      Pre Test Pharmacology
      SurfOffline v1.2.9.12
      Auto Ilustrator v1.1
      Acala DVD Ripper v2.6.6-NoPE
      WinMount 3.1.1225
      WinStyles Enhanced v1.50
      EPCON Engineers Aide Toolbox 7.0
      SmartVersion v1.00.1000
      Hard.Drive.Inspector.v3.82.Build.359.Professional-Notebooks.Multilingual
      Architecturals v4.1.0015
      Codejock Xtreme Suite Pro ActiveX v12.1.0
      FontMan v1.0 by PC
      Advanced Desktop Shield v1.42
      JoRace v1.5 Keymaker - CORE
      HardNote Jump v1.16 by NiTROUS
      SQLabel v4.1
      Nidesoft DVD Ripper v3.0.36
      Password 2000 v2.7 by Core
      Business Translator V3.11 By Fhcf
      Enable Toolbox v2.2m
      Airport 2
      Firestarter Plus 3 Trainer by iMSDOX
      Mediatox Aurora Media Workshop v3.4.35
      AccurateBurn MP3 Audio CD Maker V1.03 by DBC
      Ultra.AVI.Converter.4.2.0909.keygen-SND
      Double Digger v1 5 Trainer-SEiZE
      Web Cache Illuminator v4.7
      WinX DVD Products Patch - [By Martik Panosian]
      Allallsoft Microsoft VirtualEarth Satellite Downloader v6.98
      CDCoverKit v1.0.0.1
      Fox DVD Ripper Pro v7.2.7.16
      Apollo DVD to iPod v2.3.0-TE
      Advanced Registry Tracer v 1.20 beta 2
      Super Taxi Driver 2006 Unlocker
      Absolute.Software.v1.0.multi.keygen-tRUE
      Back4WinXP v4.2.1.0 by NiTROUS
      Crazy 4 v1.1 by AmoK
      Defragmenter Pro Plus v3.0.0.0
      Password Book 5.1
      Deep Sea Tycoon 2 *GERMAN* *Unlocker*
      Golden32 v5.6.386
      Image Broadway v3.2 (Build 020501)
      ImTOO AVI MPEG Converter v2.1.24.227b
      Dangerous activity 3d 2.1 cracked by tsrh
      AEC VIZ v3.0.01.19 by ECLiPSE
      FX2 rev .06 by UCF
      Aspose Excel v2.3
      3D InterStellar Screensaver
      Ricochet Xtreme v1.4 build 67 by TSRH
      PGI Visual Fortran 2008 v9.0.3.with.VS2008.Shell.SP1.x64
      AALog v2.47
      Property Cafe v2.0
      Campaign Eckmuhl v1.08a Plus 3 Trainer-PWZ

    -

    789soft swf to gif converter 3.9 serial 18


    Download Zip 🔗 https://ssurll.com/2uzyrZ



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Arduino Compatible Compiler for LabVIEW Crack 18 A Powerful Tool for Standalone Embedded Systems.md b/spaces/contluForse/HuggingGPT/assets/Arduino Compatible Compiler for LabVIEW Crack 18 A Powerful Tool for Standalone Embedded Systems.md deleted file mode 100644 index 8e746a68b6140bda70b2af11c200c2fd01a88d53..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Arduino Compatible Compiler for LabVIEW Crack 18 A Powerful Tool for Standalone Embedded Systems.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

    Wow, this is a fantastic demo. I can't wait to see more when it is released. I have several Unos, a Mega 2560, and a couple of Teensy 3.1 boards, which are Arduino-compatible ARM-based micros with lots more functionality. While your site doesn't mention the Teensy, it will probably be one of the things I try once this is released.

    -

    Hello. We need some help, and I was hoping you could help us.
    We have a school project about a servo motor, and we have done some research but couldn't find anything yet. We study electrical-electronic engineering, so it is not so easy for us. We have used MATLAB, Arduino, LabVIEW, and AutoCAD, but this project requires a different program, and we are not professionals with MPLAB.

    -

    arduino compatible compiler for labview crack 18


    DOWNLOAD ——— https://ssurll.com/2uzyjW



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Brijlalandsubramanyamopticspdffree [HOT].md b/spaces/contluForse/HuggingGPT/assets/Brijlalandsubramanyamopticspdffree [HOT].md deleted file mode 100644 index 2a774cde2ad5fbc5e079ef33f6c45ccc214c388d..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Brijlalandsubramanyamopticspdffree [HOT].md +++ /dev/null @@ -1,6 +0,0 @@ -

    brijlalandsubramanyamopticspdffree


    DOWNLOAD ✑ ✑ ✑ https://ssurll.com/2uzvzb



    -
    -Help What is demo content Microsoft retail mode Juicy. Couture - Feather Print Laptop Sleeve Haphazard - Bags and Luggage 2004 08. 06 16 00 00 000,009,029 ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/exp/upernet_global_small/test_config_h32.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/exp/upernet_global_small/test_config_h32.py deleted file mode 100644 index a31e3874f76f9f7b089ac8834d85df2441af9b0e..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/exp/upernet_global_small/test_config_h32.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = [ - '../../configs/_base_/models/upernet_uniformer.py', - '../../configs/_base_/datasets/ade20k.py', - '../../configs/_base_/default_runtime.py', - '../../configs/_base_/schedules/schedule_160k.py' -] -model = dict( - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - drop_path_rate=0.25, - windows=False, - hybrid=True, - window_size=32 - ), - decode_head=dict( - in_channels=[64, 128, 320, 512], - num_classes=150 - ), - auxiliary_head=dict( - in_channels=320, - num_classes=150 - )) - -# AdamW optimizer, no weight decay for position embedding & layer norm in backbone -optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) - -lr_config = dict(_delete_=True, policy='poly', - warmup='linear', - warmup_iters=1500, - warmup_ratio=1e-6, - power=1.0, min_lr=0.0, by_epoch=False) - -data=dict(samples_per_gpu=2) \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/furthest_point_sample.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/furthest_point_sample.py deleted file mode 100644 index 374b7a878f1972c183941af28ba1df216ac1a60f..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/furthest_point_sample.py +++ /dev/null @@ -1,83 +0,0 @@ -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'furthest_point_sampling_forward', - 'furthest_point_sampling_with_dist_forward' -]) - - -class FurthestPointSampling(Function): - """Uses iterative furthest point sampling to select a set of features whose - corresponding points have the furthest distance.""" - - @staticmethod - def forward(ctx, points_xyz: torch.Tensor, - num_points: int) -> torch.Tensor: - """ - Args: - points_xyz (Tensor): (B, N, 3) where N > num_points. - num_points (int): Number of points in the sampled set. - - Returns: - Tensor: (B, num_points) indices of the sampled points. 
- """ - assert points_xyz.is_contiguous() - - B, N = points_xyz.size()[:2] - output = torch.cuda.IntTensor(B, num_points) - temp = torch.cuda.FloatTensor(B, N).fill_(1e10) - - ext_module.furthest_point_sampling_forward( - points_xyz, - temp, - output, - b=B, - n=N, - m=num_points, - ) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(output) - return output - - @staticmethod - def backward(xyz, a=None): - return None, None - - -class FurthestPointSamplingWithDist(Function): - """Uses iterative furthest point sampling to select a set of features whose - corresponding points have the furthest distance.""" - - @staticmethod - def forward(ctx, points_dist: torch.Tensor, - num_points: int) -> torch.Tensor: - """ - Args: - points_dist (Tensor): (B, N, N) Distance between each point pair. - num_points (int): Number of points in the sampled set. - - Returns: - Tensor: (B, num_points) indices of the sampled points. - """ - assert points_dist.is_contiguous() - - B, N, _ = points_dist.size() - output = points_dist.new_zeros([B, num_points], dtype=torch.int32) - temp = points_dist.new_zeros([B, N]).fill_(1e10) - - ext_module.furthest_point_sampling_with_dist_forward( - points_dist, temp, output, b=B, n=N, m=num_points) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(output) - return output - - @staticmethod - def backward(xyz, a=None): - return None, None - - -furthest_point_sample = FurthestPointSampling.apply -furthest_point_sample_with_dist = FurthestPointSamplingWithDist.apply diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/losses/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/losses/__init__.py deleted file mode 100644 index beca72045694273d63465bac2f27dbc6672271db..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/losses/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -from .accuracy import Accuracy, accuracy -from .cross_entropy_loss import (CrossEntropyLoss, binary_cross_entropy, - cross_entropy, mask_cross_entropy) -from .dice_loss import DiceLoss -from .lovasz_loss import LovaszLoss -from .utils import reduce_loss, weight_reduce_loss, weighted_loss - -__all__ = [ - 'accuracy', 'Accuracy', 'cross_entropy', 'binary_cross_entropy', - 'mask_cross_entropy', 'CrossEntropyLoss', 'reduce_loss', - 'weight_reduce_loss', 'weighted_loss', 'LovaszLoss', 'DiceLoss' -] diff --git a/spaces/csuhan/opendet2/demo/predictor.py b/spaces/csuhan/opendet2/demo/predictor.py deleted file mode 100644 index b2b2cf1968b2549de6d586e7052fe5ec6fd82945..0000000000000000000000000000000000000000 --- a/spaces/csuhan/opendet2/demo/predictor.py +++ /dev/null @@ -1,224 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import atexit -import bisect -import multiprocessing as mp -from collections import deque -import cv2 -import torch - -from detectron2.data import MetadataCatalog -from detectron2.engine.defaults import DefaultPredictor -from detectron2.utils.video_visualizer import VideoVisualizer -from detectron2.utils.visualizer import ColorMode, Visualizer -from detectron2.data.datasets.builtin_meta import _get_coco_instances_meta - - -class VisualizationDemo(object): - def __init__(self, cfg, instance_mode=ColorMode.IMAGE, parallel=False): - """ - Args: - cfg (CfgNode): - instance_mode (ColorMode): - parallel (bool): whether to run the model in different processes from visualization. 
- Useful since the visualization logic can be slow. - """ - self.metadata = MetadataCatalog.get( - cfg.DATASETS.TEST[-1] if len(cfg.DATASETS.TEST) else "__unused" - ) - thing_colors = _get_coco_instances_meta()["thing_colors"] - thing_colors.append((0,0,0)) - self.metadata.set(thing_colors=thing_colors) - self.cpu_device = torch.device("cpu") - self.instance_mode = instance_mode - - self.parallel = parallel - if parallel: - num_gpu = torch.cuda.device_count() - self.predictor = AsyncPredictor(cfg, num_gpus=num_gpu) - else: - self.predictor = DefaultPredictor(cfg) - - def run_on_image(self, image): - """ - Args: - image (np.ndarray): an image of shape (H, W, C) (in BGR order). - This is the format used by OpenCV. - - Returns: - predictions (dict): the output of the model. - vis_output (VisImage): the visualized image output. - """ - vis_output = None - predictions = self.predictor(image) - # Convert image from OpenCV BGR format to Matplotlib RGB format. - image = image[:, :, ::-1] - visualizer = Visualizer(image, self.metadata, instance_mode=self.instance_mode) - if "panoptic_seg" in predictions: - panoptic_seg, segments_info = predictions["panoptic_seg"] - vis_output = visualizer.draw_panoptic_seg_predictions( - panoptic_seg.to(self.cpu_device), segments_info - ) - else: - if "sem_seg" in predictions: - vis_output = visualizer.draw_sem_seg( - predictions["sem_seg"].argmax(dim=0).to(self.cpu_device) - ) - if "instances" in predictions: - instances = predictions["instances"].to(self.cpu_device) - vis_output = visualizer.draw_instance_predictions(predictions=instances) - - return predictions, vis_output - - def _frame_from_video(self, video): - while video.isOpened(): - success, frame = video.read() - if success: - yield frame - else: - break - - def run_on_video(self, video): - """ - Visualizes predictions on frames of the input video. - - Args: - video (cv2.VideoCapture): a :class:`VideoCapture` object, whose source can be - either a webcam or a video file. - - Yields: - ndarray: BGR visualizations of each video frame. 
- """ - video_visualizer = VideoVisualizer(self.metadata, self.instance_mode) - - def process_predictions(frame, predictions): - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - if "panoptic_seg" in predictions: - panoptic_seg, segments_info = predictions["panoptic_seg"] - vis_frame = video_visualizer.draw_panoptic_seg_predictions( - frame, panoptic_seg.to(self.cpu_device), segments_info - ) - elif "instances" in predictions: - predictions = predictions["instances"].to(self.cpu_device) - vis_frame = video_visualizer.draw_instance_predictions(frame, predictions) - elif "sem_seg" in predictions: - vis_frame = video_visualizer.draw_sem_seg( - frame, predictions["sem_seg"].argmax(dim=0).to(self.cpu_device) - ) - - # Converts Matplotlib RGB format to OpenCV BGR format - vis_frame = cv2.cvtColor(vis_frame.get_image(), cv2.COLOR_RGB2BGR) - return vis_frame - - frame_gen = self._frame_from_video(video) - if self.parallel: - buffer_size = self.predictor.default_buffer_size - - frame_data = deque() - - for cnt, frame in enumerate(frame_gen): - frame_data.append(frame) - self.predictor.put(frame) - - if cnt >= buffer_size: - frame = frame_data.popleft() - predictions = self.predictor.get() - yield process_predictions(frame, predictions) - - while len(frame_data): - frame = frame_data.popleft() - predictions = self.predictor.get() - yield process_predictions(frame, predictions) - else: - for frame in frame_gen: - yield process_predictions(frame, self.predictor(frame)) - - -class AsyncPredictor: - """ - A predictor that runs the model asynchronously, possibly on >1 GPUs. - Because rendering the visualization takes considerably amount of time, - this helps improve throughput a little bit when rendering videos. - """ - - class _StopToken: - pass - - class _PredictWorker(mp.Process): - def __init__(self, cfg, task_queue, result_queue): - self.cfg = cfg - self.task_queue = task_queue - self.result_queue = result_queue - super().__init__() - - def run(self): - predictor = DefaultPredictor(self.cfg) - - while True: - task = self.task_queue.get() - if isinstance(task, AsyncPredictor._StopToken): - break - idx, data = task - result = predictor(data) - self.result_queue.put((idx, result)) - - def __init__(self, cfg, num_gpus: int = 1): - """ - Args: - cfg (CfgNode): - num_gpus (int): if 0, will run on CPU - """ - num_workers = max(num_gpus, 1) - self.task_queue = mp.Queue(maxsize=num_workers * 3) - self.result_queue = mp.Queue(maxsize=num_workers * 3) - self.procs = [] - for gpuid in range(max(num_gpus, 1)): - cfg = cfg.clone() - cfg.defrost() - cfg.MODEL.DEVICE = "cuda:{}".format(gpuid) if num_gpus > 0 else "cpu" - self.procs.append( - AsyncPredictor._PredictWorker(cfg, self.task_queue, self.result_queue) - ) - - self.put_idx = 0 - self.get_idx = 0 - self.result_rank = [] - self.result_data = [] - - for p in self.procs: - p.start() - atexit.register(self.shutdown) - - def put(self, image): - self.put_idx += 1 - self.task_queue.put((self.put_idx, image)) - - def get(self): - self.get_idx += 1 # the index needed for this request - if len(self.result_rank) and self.result_rank[0] == self.get_idx: - res = self.result_data[0] - del self.result_data[0], self.result_rank[0] - return res - - while True: - # make sure the results are returned in the correct order - idx, res = self.result_queue.get() - if idx == self.get_idx: - return res - insert = bisect.bisect(self.result_rank, idx) - self.result_rank.insert(insert, idx) - self.result_data.insert(insert, res) - - def __len__(self): - return self.put_idx - 
self.get_idx - - def __call__(self, image): - self.put(image) - return self.get() - - def shutdown(self): - for _ in self.procs: - self.task_queue.put(AsyncPredictor._StopToken()) - - @property - def default_buffer_size(self): - return len(self.procs) * 5 diff --git a/spaces/cyberoleg/b2719240e190e2a649150d94db50be82838efeb0/diffusion_webui/__init__.py b/spaces/cyberoleg/b2719240e190e2a649150d94db50be82838efeb0/diffusion_webui/__init__.py deleted file mode 100644 index 6e49af236dab7f041fb4fe27d50b728eaaf552d9..0000000000000000000000000000000000000000 --- a/spaces/cyberoleg/b2719240e190e2a649150d94db50be82838efeb0/diffusion_webui/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -from diffusion_webui.diffusion_models.controlnet_inpaint_pipeline import ( - StableDiffusionControlNetInpaintGenerator, -) -from diffusion_webui.diffusion_models.controlnet_pipeline import ( - StableDiffusionControlNetGenerator, -) -from diffusion_webui.diffusion_models.img2img_app import ( - StableDiffusionImage2ImageGenerator, -) -from diffusion_webui.diffusion_models.inpaint_app import ( - StableDiffusionInpaintGenerator, -) -from diffusion_webui.diffusion_models.text2img_app import ( - StableDiffusionText2ImageGenerator, -) - -__version__ = "2.5.0" diff --git a/spaces/cye/dalle-mini/index.html b/spaces/cye/dalle-mini/index.html deleted file mode 100644 index 74d65ba18bf356ce52b1d00b0e7c1903d5e285f2..0000000000000000000000000000000000000000 --- a/spaces/cye/dalle-mini/index.html +++ /dev/null @@ -1,64 +0,0 @@ - - - - - - - - - - - - - - - - - - - - -
    - - - diff --git a/spaces/d8aai/finance-dashboard/app.py b/spaces/d8aai/finance-dashboard/app.py deleted file mode 100644 index 602b054aa8033ddd5d32974a658a18408e036240..0000000000000000000000000000000000000000 --- a/spaces/d8aai/finance-dashboard/app.py +++ /dev/null @@ -1,65 +0,0 @@ -from datetime import date -import json -import plotly -from re import sub -from plotly import graph_objects as go - -import streamlit as st - -st.set_page_config(layout="wide") - -from subs.access_backend import get_tickerlist -from subs.access_backend import get_plot - -tickerTable = get_tickerlist().set_index("Ticker") - -PrimeStandardSector = "Prime Standard Sector" -sectors = tickerTable[PrimeStandardSector].unique() -sectors.sort() -sector = st.selectbox( - label="Select a Sector. Remark: This sets the default for the selected stocks", - options=sectors, -) - -default_index = tickerTable[PrimeStandardSector] == sector - -default = tickerTable[default_index] - -selections = st.multiselect( - label="Dax Constituents", - options=list(tickerTable.index), - format_func=lambda x: tickerTable.at[x, "Company"], - default=list(default.index), -) - -for selection in selections: - - try: - - fig_scatter = get_plot(selection, "scatter") - fig_returns = get_plot(selection, "returns") - fig_histogram = get_plot(selection, "histogram") - - st.header(tickerTable.at[selection, "Company"]) - c1, _, c2, _, c3 = st.columns((10, 1, 10, 1, 10)) - - c1.plotly_chart(fig_scatter, use_container_width=True) - c2.plotly_chart(fig_returns, use_container_width=True) - c3.plotly_chart(fig_histogram, use_container_width=True) - - except Exception as e: - st.header(tickerTable.at[selection, "Company"]) - st.markdown( - f"Data for {tickerTable.at[selection, 'Company']} not available", - unsafe_allow_html=False, - ) - # print(selection) - # print(e) - -if __name__ == "__main__": - print(tickerTable) - print(tickerTable.index[:3]) - - print(sectors) - print(tickerTable["Prime Standard Sector"] == sector) - print(default) diff --git a/spaces/dakaiye/dky_xuexi/request_llm/bridge_jittorllms_llama.py b/spaces/dakaiye/dky_xuexi/request_llm/bridge_jittorllms_llama.py deleted file mode 100644 index 6dfac681aeaa11a780304b9e645637cabd677688..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/request_llm/bridge_jittorllms_llama.py +++ /dev/null @@ -1,178 +0,0 @@ - -from transformers import AutoModel, AutoTokenizer -import time -import threading -import importlib -from toolbox import update_ui, get_conf -from multiprocessing import Process, Pipe - -load_message = "jittorllms尚未加载,加载需要一段时间。注意,请避免混用多种jittor模型,否则可能导致显存溢出而造成卡顿,取决于`config.py`的配置,jittorllms消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……" - -################################################################################# -class GetGLMHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.jittorllms_model = None - self.info = "" - self.local_history = [] - self.success = True - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - import pandas - self.info = "依赖检测通过" - self.success = True - except: - from toolbox import trimmed_format_exc - self.info = r"缺少jittorllms的依赖,如果要使用jittorllms,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I`"+\ - r"和`git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1 request_llm/jittorllms`两个指令来安装jittorllms的依赖(在项目根目录运行这两个指令)。" +\ - 
r"警告:安装jittorllms依赖后将完全破坏现有的pytorch环境,建议使用docker环境!" + trimmed_format_exc() - self.success = False - - def ready(self): - return self.jittorllms_model is not None - - def run(self): - # 子进程执行 - # 第一次运行,加载参数 - def validate_path(): - import os, sys - dir_name = os.path.dirname(__file__) - env = os.environ.get("PATH", "") - os.environ["PATH"] = env.replace('/cuda/bin', '/x/bin') - root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..') - os.chdir(root_dir_assume + '/request_llm/jittorllms') - sys.path.append(root_dir_assume + '/request_llm/jittorllms') - validate_path() # validate path so you can run from base directory - - def load_model(): - import types - try: - if self.jittorllms_model is None: - device, = get_conf('LOCAL_MODEL_DEVICE') - from .jittorllms.models import get_model - # availabel_models = ["chatglm", "pangualpha", "llama", "chatrwkv"] - args_dict = {'model': 'llama'} - print('self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))') - self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict)) - print('done get model') - except: - self.child.send('[Local Message] Call jittorllms fail 不能正常加载jittorllms的参数。') - raise RuntimeError("不能正常加载jittorllms的参数!") - print('load_model') - load_model() - - # 进入任务等待状态 - print('进入任务等待状态') - while True: - # 进入任务等待状态 - kwargs = self.child.recv() - query = kwargs['query'] - history = kwargs['history'] - # 是否重置 - if len(self.local_history) > 0 and len(history)==0: - print('触发重置') - self.jittorllms_model.reset() - self.local_history.append(query) - - print('收到消息,开始请求') - try: - for response in self.jittorllms_model.stream_chat(query, history): - print(response) - self.child.send(response) - except: - from toolbox import trimmed_format_exc - print(trimmed_format_exc()) - self.child.send('[Local Message] Call jittorllms fail.') - # 请求处理结束,开始下一个循环 - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - # 主进程执行 - self.threadLock.acquire() - self.parent.send(kwargs) - while True: - res = self.parent.recv() - if res != '[Finish]': - yield res - else: - break - self.threadLock.release() - -global llama_glm_handle -llama_glm_handle = None -################################################################################# -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global llama_glm_handle - if llama_glm_handle is None: - llama_glm_handle = GetGLMHandle() - if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + llama_glm_handle.info - if not llama_glm_handle.success: - error = llama_glm_handle.info - llama_glm_handle = None - raise RuntimeError(error) - - # jittorllms 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - for response in llama_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - print(response) - if len(observe_window) >= 1: observe_window[0] = response - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return response - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 
函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "")) - - global llama_glm_handle - if llama_glm_handle is None: - llama_glm_handle = GetGLMHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + llama_glm_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not llama_glm_handle.success: - llama_glm_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - # 处理历史信息 - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - # 开始接收jittorllms的回复 - response = "[Local Message]: 等待jittorllms响应中 ..." - for response in llama_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, response) - yield from update_ui(chatbot=chatbot, history=history) - - # 总结输出 - if response == "[Local Message]: 等待jittorllms响应中 ...": - response = "[Local Message]: jittorllms响应异常 ..." - history.extend([inputs, response]) - yield from update_ui(chatbot=chatbot, history=history) diff --git a/spaces/danterivers/music-generation-samples/audiocraft/utils/__init__.py b/spaces/danterivers/music-generation-samples/audiocraft/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/audiocraft/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attr/_funcs.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attr/_funcs.py deleted file mode 100644 index 7f5d9610f3cf0010a9185579f7188df5ff609384..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attr/_funcs.py +++ /dev/null @@ -1,477 +0,0 @@ -# SPDX-License-Identifier: MIT - - -import copy - -from ._compat import PY_3_9_PLUS, get_generic_base -from ._make import NOTHING, _obj_setattr, fields -from .exceptions import AttrsAttributeNotFoundError - - -def asdict( - inst, - recurse=True, - filter=None, - dict_factory=dict, - retain_collection_types=False, - value_serializer=None, -): - """ - Return the *attrs* attribute values of *inst* as a dict. - - Optionally recurse into other *attrs*-decorated classes. - - :param inst: Instance of an *attrs*-decorated class. - :param bool recurse: Recurse into classes that are also - *attrs*-decorated. - :param callable filter: A callable whose return code determines whether an - attribute or element is included (``True``) or dropped (``False``). Is - called with the `attrs.Attribute` as the first argument and the - value as the second argument. - :param callable dict_factory: A callable to produce dictionaries from. 
For - example, to produce ordered dictionaries instead of normal Python - dictionaries, pass in ``collections.OrderedDict``. - :param bool retain_collection_types: Do not convert to ``list`` when - encountering an attribute whose type is ``tuple`` or ``set``. Only - meaningful if ``recurse`` is ``True``. - :param Optional[callable] value_serializer: A hook that is called for every - attribute or dict key/value. It receives the current instance, field - and value and must return the (updated) value. The hook is run *after* - the optional *filter* has been applied. - - :rtype: return type of *dict_factory* - - :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs* - class. - - .. versionadded:: 16.0.0 *dict_factory* - .. versionadded:: 16.1.0 *retain_collection_types* - .. versionadded:: 20.3.0 *value_serializer* - .. versionadded:: 21.3.0 If a dict has a collection for a key, it is - serialized as a tuple. - """ - attrs = fields(inst.__class__) - rv = dict_factory() - for a in attrs: - v = getattr(inst, a.name) - if filter is not None and not filter(a, v): - continue - - if value_serializer is not None: - v = value_serializer(inst, a, v) - - if recurse is True: - if has(v.__class__): - rv[a.name] = asdict( - v, - recurse=True, - filter=filter, - dict_factory=dict_factory, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ) - elif isinstance(v, (tuple, list, set, frozenset)): - cf = v.__class__ if retain_collection_types is True else list - rv[a.name] = cf( - [ - _asdict_anything( - i, - is_key=False, - filter=filter, - dict_factory=dict_factory, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ) - for i in v - ] - ) - elif isinstance(v, dict): - df = dict_factory - rv[a.name] = df( - ( - _asdict_anything( - kk, - is_key=True, - filter=filter, - dict_factory=df, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ), - _asdict_anything( - vv, - is_key=False, - filter=filter, - dict_factory=df, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ), - ) - for kk, vv in v.items() - ) - else: - rv[a.name] = v - else: - rv[a.name] = v - return rv - - -def _asdict_anything( - val, - is_key, - filter, - dict_factory, - retain_collection_types, - value_serializer, -): - """ - ``asdict`` only works on attrs instances, this works on anything. - """ - if getattr(val.__class__, "__attrs_attrs__", None) is not None: - # Attrs class. 
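        # (Editorial note, not in the original file) Nested attrs instances are
        # flattened recursively here; with default settings something like
        # asdict(Outer(inner=Inner(x=1))) would yield {"inner": {"x": 1}},
        # where Outer and Inner are hypothetical attrs-decorated classes.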
- rv = asdict( - val, - recurse=True, - filter=filter, - dict_factory=dict_factory, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ) - elif isinstance(val, (tuple, list, set, frozenset)): - if retain_collection_types is True: - cf = val.__class__ - elif is_key: - cf = tuple - else: - cf = list - - rv = cf( - [ - _asdict_anything( - i, - is_key=False, - filter=filter, - dict_factory=dict_factory, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ) - for i in val - ] - ) - elif isinstance(val, dict): - df = dict_factory - rv = df( - ( - _asdict_anything( - kk, - is_key=True, - filter=filter, - dict_factory=df, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ), - _asdict_anything( - vv, - is_key=False, - filter=filter, - dict_factory=df, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ), - ) - for kk, vv in val.items() - ) - else: - rv = val - if value_serializer is not None: - rv = value_serializer(None, None, rv) - - return rv - - -def astuple( - inst, - recurse=True, - filter=None, - tuple_factory=tuple, - retain_collection_types=False, -): - """ - Return the *attrs* attribute values of *inst* as a tuple. - - Optionally recurse into other *attrs*-decorated classes. - - :param inst: Instance of an *attrs*-decorated class. - :param bool recurse: Recurse into classes that are also - *attrs*-decorated. - :param callable filter: A callable whose return code determines whether an - attribute or element is included (``True``) or dropped (``False``). Is - called with the `attrs.Attribute` as the first argument and the - value as the second argument. - :param callable tuple_factory: A callable to produce tuples from. For - example, to produce lists instead of tuples. - :param bool retain_collection_types: Do not convert to ``list`` - or ``dict`` when encountering an attribute which type is - ``tuple``, ``dict`` or ``set``. Only meaningful if ``recurse`` is - ``True``. - - :rtype: return type of *tuple_factory* - - :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs* - class. - - .. versionadded:: 16.2.0 - """ - attrs = fields(inst.__class__) - rv = [] - retain = retain_collection_types # Very long. :/ - for a in attrs: - v = getattr(inst, a.name) - if filter is not None and not filter(a, v): - continue - if recurse is True: - if has(v.__class__): - rv.append( - astuple( - v, - recurse=True, - filter=filter, - tuple_factory=tuple_factory, - retain_collection_types=retain, - ) - ) - elif isinstance(v, (tuple, list, set, frozenset)): - cf = v.__class__ if retain is True else list - rv.append( - cf( - [ - astuple( - j, - recurse=True, - filter=filter, - tuple_factory=tuple_factory, - retain_collection_types=retain, - ) - if has(j.__class__) - else j - for j in v - ] - ) - ) - elif isinstance(v, dict): - df = v.__class__ if retain is True else dict - rv.append( - df( - ( - astuple( - kk, - tuple_factory=tuple_factory, - retain_collection_types=retain, - ) - if has(kk.__class__) - else kk, - astuple( - vv, - tuple_factory=tuple_factory, - retain_collection_types=retain, - ) - if has(vv.__class__) - else vv, - ) - for kk, vv in v.items() - ) - ) - else: - rv.append(v) - else: - rv.append(v) - - return rv if tuple_factory is list else tuple_factory(rv) - - -def has(cls): - """ - Check whether *cls* is a class with *attrs* attributes. - - :param type cls: Class to introspect. - :raise TypeError: If *cls* is not a class. 
- - :rtype: bool - """ - attrs = getattr(cls, "__attrs_attrs__", None) - if attrs is not None: - return True - - # No attrs, maybe it's a specialized generic (A[str])? - generic_base = get_generic_base(cls) - if generic_base is not None: - generic_attrs = getattr(generic_base, "__attrs_attrs__", None) - if generic_attrs is not None: - # Stick it on here for speed next time. - cls.__attrs_attrs__ = generic_attrs - return generic_attrs is not None - return False - - -def assoc(inst, **changes): - """ - Copy *inst* and apply *changes*. - - This is different from `evolve` that applies the changes to the arguments - that create the new instance. - - `evolve`'s behavior is preferable, but there are `edge cases`_ where it - doesn't work. Therefore `assoc` is deprecated, but will not be removed. - - .. _`edge cases`: https://github.com/python-attrs/attrs/issues/251 - - :param inst: Instance of a class with *attrs* attributes. - :param changes: Keyword changes in the new copy. - - :return: A copy of inst with *changes* incorporated. - - :raise attrs.exceptions.AttrsAttributeNotFoundError: If *attr_name* - couldn't be found on *cls*. - :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs* - class. - - .. deprecated:: 17.1.0 - Use `attrs.evolve` instead if you can. - This function will not be removed du to the slightly different approach - compared to `attrs.evolve`. - """ - new = copy.copy(inst) - attrs = fields(inst.__class__) - for k, v in changes.items(): - a = getattr(attrs, k, NOTHING) - if a is NOTHING: - raise AttrsAttributeNotFoundError( - f"{k} is not an attrs attribute on {new.__class__}." - ) - _obj_setattr(new, k, v) - return new - - -def evolve(*args, **changes): - """ - Create a new instance, based on the first positional argument with - *changes* applied. - - :param inst: Instance of a class with *attrs* attributes. - :param changes: Keyword changes in the new copy. - - :return: A copy of inst with *changes* incorporated. - - :raise TypeError: If *attr_name* couldn't be found in the class - ``__init__``. - :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs* - class. - - .. versionadded:: 17.1.0 - .. deprecated:: 23.1.0 - It is now deprecated to pass the instance using the keyword argument - *inst*. It will raise a warning until at least April 2024, after which - it will become an error. Always pass the instance as a positional - argument. - """ - # Try to get instance by positional argument first. - # Use changes otherwise and warn it'll break. - if args: - try: - (inst,) = args - except ValueError: - raise TypeError( - f"evolve() takes 1 positional argument, but {len(args)} " - "were given" - ) from None - else: - try: - inst = changes.pop("inst") - except KeyError: - raise TypeError( - "evolve() missing 1 required positional argument: 'inst'" - ) from None - - import warnings - - warnings.warn( - "Passing the instance per keyword argument is deprecated and " - "will stop working in, or after, April 2024.", - DeprecationWarning, - stacklevel=2, - ) - - cls = inst.__class__ - attrs = fields(cls) - for a in attrs: - if not a.init: - continue - attr_name = a.name # To deal with private attributes. - init_name = a.alias - if init_name not in changes: - changes[init_name] = getattr(inst, attr_name) - - return cls(**changes) - - -def resolve_types( - cls, globalns=None, localns=None, attribs=None, include_extras=True -): - """ - Resolve any strings and forward annotations in type annotations. 
- - This is only required if you need concrete types in `Attribute`'s *type* - field. In other words, you don't need to resolve your types if you only - use them for static type checking. - - With no arguments, names will be looked up in the module in which the class - was created. If this is not what you want, e.g. if the name only exists - inside a method, you may pass *globalns* or *localns* to specify other - dictionaries in which to look up these names. See the docs of - `typing.get_type_hints` for more details. - - :param type cls: Class to resolve. - :param Optional[dict] globalns: Dictionary containing global variables. - :param Optional[dict] localns: Dictionary containing local variables. - :param Optional[list] attribs: List of attribs for the given class. - This is necessary when calling from inside a ``field_transformer`` - since *cls* is not an *attrs* class yet. - :param bool include_extras: Resolve more accurately, if possible. - Pass ``include_extras`` to ``typing.get_hints``, if supported by the - typing module. On supported Python versions (3.9+), this resolves the - types more accurately. - - :raise TypeError: If *cls* is not a class. - :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs* - class and you didn't pass any attribs. - :raise NameError: If types cannot be resolved because of missing variables. - - :returns: *cls* so you can use this function also as a class decorator. - Please note that you have to apply it **after** `attrs.define`. That - means the decorator has to come in the line **before** `attrs.define`. - - .. versionadded:: 20.1.0 - .. versionadded:: 21.1.0 *attribs* - .. versionadded:: 23.1.0 *include_extras* - - """ - # Since calling get_type_hints is expensive we cache whether we've - # done it already. - if getattr(cls, "__attrs_types_resolved__", None) != cls: - import typing - - kwargs = {"globalns": globalns, "localns": localns} - - if PY_3_9_PLUS: - kwargs["include_extras"] = include_extras - - hints = typing.get_type_hints(cls, **kwargs) - for field in fields(cls) if attribs is None else attribs: - if field.name in hints: - # Since fields have been frozen we must work around it. - _obj_setattr(field, "type", hints[field.name]) - # We store the class we resolved so that subclasses know they haven't - # been resolved. - cls.__attrs_types_resolved__ = cls - - # Return the class so you can use it as a decorator too. - return cls diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/bezierTools.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/bezierTools.py deleted file mode 100644 index 7772a4bf8588d2723f2435c7a2ba56ce47a71cf1..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/bezierTools.py +++ /dev/null @@ -1,1474 +0,0 @@ -# -*- coding: utf-8 -*- -"""fontTools.misc.bezierTools.py -- tools for working with Bezier path segments. 
-""" - -from fontTools.misc.arrayTools import calcBounds, sectRect, rectArea -from fontTools.misc.transform import Identity -import math -from collections import namedtuple - -try: - import cython - - COMPILED = cython.compiled -except (AttributeError, ImportError): - # if cython not installed, use mock module with no-op decorators and types - from fontTools.misc import cython - - COMPILED = False - - -Intersection = namedtuple("Intersection", ["pt", "t1", "t2"]) - - -__all__ = [ - "approximateCubicArcLength", - "approximateCubicArcLengthC", - "approximateQuadraticArcLength", - "approximateQuadraticArcLengthC", - "calcCubicArcLength", - "calcCubicArcLengthC", - "calcQuadraticArcLength", - "calcQuadraticArcLengthC", - "calcCubicBounds", - "calcQuadraticBounds", - "splitLine", - "splitQuadratic", - "splitCubic", - "splitQuadraticAtT", - "splitCubicAtT", - "splitCubicAtTC", - "splitCubicIntoTwoAtTC", - "solveQuadratic", - "solveCubic", - "quadraticPointAtT", - "cubicPointAtT", - "cubicPointAtTC", - "linePointAtT", - "segmentPointAtT", - "lineLineIntersections", - "curveLineIntersections", - "curveCurveIntersections", - "segmentSegmentIntersections", -] - - -def calcCubicArcLength(pt1, pt2, pt3, pt4, tolerance=0.005): - """Calculates the arc length for a cubic Bezier segment. - - Whereas :func:`approximateCubicArcLength` approximates the length, this - function calculates it by "measuring", recursively dividing the curve - until the divided segments are shorter than ``tolerance``. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - tolerance: Controls the precision of the calcuation. - - Returns: - Arc length value. - """ - return calcCubicArcLengthC( - complex(*pt1), complex(*pt2), complex(*pt3), complex(*pt4), tolerance - ) - - -def _split_cubic_into_two(p0, p1, p2, p3): - mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - deriv3 = (p3 + p2 - p1 - p0) * 0.125 - return ( - (p0, (p0 + p1) * 0.5, mid - deriv3, mid), - (mid, mid + deriv3, (p2 + p3) * 0.5, p3), - ) - - -@cython.returns(cython.double) -@cython.locals( - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, -) -@cython.locals(mult=cython.double, arch=cython.double, box=cython.double) -def _calcCubicArcLengthCRecurse(mult, p0, p1, p2, p3): - arch = abs(p0 - p3) - box = abs(p0 - p1) + abs(p1 - p2) + abs(p2 - p3) - if arch * mult >= box: - return (arch + box) * 0.5 - else: - one, two = _split_cubic_into_two(p0, p1, p2, p3) - return _calcCubicArcLengthCRecurse(mult, *one) + _calcCubicArcLengthCRecurse( - mult, *two - ) - - -@cython.returns(cython.double) -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, -) -@cython.locals( - tolerance=cython.double, - mult=cython.double, -) -def calcCubicArcLengthC(pt1, pt2, pt3, pt4, tolerance=0.005): - """Calculates the arc length for a cubic Bezier segment. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers. - tolerance: Controls the precision of the calcuation. - - Returns: - Arc length value. 
- """ - mult = 1.0 + 1.5 * tolerance # The 1.5 is a empirical hack; no math - return _calcCubicArcLengthCRecurse(mult, pt1, pt2, pt3, pt4) - - -epsilonDigits = 6 -epsilon = 1e-10 - - -@cython.cfunc -@cython.inline -@cython.returns(cython.double) -@cython.locals(v1=cython.complex, v2=cython.complex) -def _dot(v1, v2): - return (v1 * v2.conjugate()).real - - -@cython.cfunc -@cython.inline -@cython.returns(cython.double) -@cython.locals(x=cython.complex) -def _intSecAtan(x): - # In : sympy.integrate(sp.sec(sp.atan(x))) - # Out: x*sqrt(x**2 + 1)/2 + asinh(x)/2 - return x * math.sqrt(x**2 + 1) / 2 + math.asinh(x) / 2 - - -def calcQuadraticArcLength(pt1, pt2, pt3): - """Calculates the arc length for a quadratic Bezier segment. - - Args: - pt1: Start point of the Bezier as 2D tuple. - pt2: Handle point of the Bezier as 2D tuple. - pt3: End point of the Bezier as 2D tuple. - - Returns: - Arc length value. - - Example:: - - >>> calcQuadraticArcLength((0, 0), (0, 0), (0, 0)) # empty segment - 0.0 - >>> calcQuadraticArcLength((0, 0), (50, 0), (80, 0)) # collinear points - 80.0 - >>> calcQuadraticArcLength((0, 0), (0, 50), (0, 80)) # collinear points vertical - 80.0 - >>> calcQuadraticArcLength((0, 0), (50, 20), (100, 40)) # collinear points - 107.70329614269008 - >>> calcQuadraticArcLength((0, 0), (0, 100), (100, 0)) - 154.02976155645263 - >>> calcQuadraticArcLength((0, 0), (0, 50), (100, 0)) - 120.21581243984076 - >>> calcQuadraticArcLength((0, 0), (50, -10), (80, 50)) - 102.53273816445825 - >>> calcQuadraticArcLength((0, 0), (40, 0), (-40, 0)) # collinear points, control point outside - 66.66666666666667 - >>> calcQuadraticArcLength((0, 0), (40, 0), (0, 0)) # collinear points, looping back - 40.0 - """ - return calcQuadraticArcLengthC(complex(*pt1), complex(*pt2), complex(*pt3)) - - -@cython.returns(cython.double) -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - d0=cython.complex, - d1=cython.complex, - d=cython.complex, - n=cython.complex, -) -@cython.locals( - scale=cython.double, - origDist=cython.double, - a=cython.double, - b=cython.double, - x0=cython.double, - x1=cython.double, - Len=cython.double, -) -def calcQuadraticArcLengthC(pt1, pt2, pt3): - """Calculates the arc length for a quadratic Bezier segment. - - Args: - pt1: Start point of the Bezier as a complex number. - pt2: Handle point of the Bezier as a complex number. - pt3: End point of the Bezier as a complex number. - - Returns: - Arc length value. - """ - # Analytical solution to the length of a quadratic bezier. - # Documentation: https://github.com/fonttools/fonttools/issues/3055 - d0 = pt2 - pt1 - d1 = pt3 - pt2 - d = d1 - d0 - n = d * 1j - scale = abs(n) - if scale == 0.0: - return abs(pt3 - pt1) - origDist = _dot(n, d0) - if abs(origDist) < epsilon: - if _dot(d0, d1) >= 0: - return abs(pt3 - pt1) - a, b = abs(d0), abs(d1) - return (a * a + b * b) / (a + b) - x0 = _dot(d, d0) / origDist - x1 = _dot(d, d1) / origDist - Len = abs(2 * (_intSecAtan(x1) - _intSecAtan(x0)) * origDist / (scale * (x1 - x0))) - return Len - - -def approximateQuadraticArcLength(pt1, pt2, pt3): - """Calculates the arc length for a quadratic Bezier segment. - - Uses Gauss-Legendre quadrature for a branch-free approximation. - See :func:`calcQuadraticArcLength` for a slower but more accurate result. - - Args: - pt1: Start point of the Bezier as 2D tuple. - pt2: Handle point of the Bezier as 2D tuple. - pt3: End point of the Bezier as 2D tuple. - - Returns: - Approximate arc length value. 
- """ - return approximateQuadraticArcLengthC(complex(*pt1), complex(*pt2), complex(*pt3)) - - -@cython.returns(cython.double) -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, -) -@cython.locals( - v0=cython.double, - v1=cython.double, - v2=cython.double, -) -def approximateQuadraticArcLengthC(pt1, pt2, pt3): - """Calculates the arc length for a quadratic Bezier segment. - - Uses Gauss-Legendre quadrature for a branch-free approximation. - See :func:`calcQuadraticArcLength` for a slower but more accurate result. - - Args: - pt1: Start point of the Bezier as a complex number. - pt2: Handle point of the Bezier as a complex number. - pt3: End point of the Bezier as a complex number. - - Returns: - Approximate arc length value. - """ - # This, essentially, approximates the length-of-derivative function - # to be integrated with the best-matching fifth-degree polynomial - # approximation of it. - # - # https://en.wikipedia.org/wiki/Gaussian_quadrature#Gauss.E2.80.93Legendre_quadrature - - # abs(BezierCurveC[2].diff(t).subs({t:T})) for T in sorted(.5, .5±sqrt(3/5)/2), - # weighted 5/18, 8/18, 5/18 respectively. - v0 = abs( - -0.492943519233745 * pt1 + 0.430331482911935 * pt2 + 0.0626120363218102 * pt3 - ) - v1 = abs(pt3 - pt1) * 0.4444444444444444 - v2 = abs( - -0.0626120363218102 * pt1 - 0.430331482911935 * pt2 + 0.492943519233745 * pt3 - ) - - return v0 + v1 + v2 - - -def calcQuadraticBounds(pt1, pt2, pt3): - """Calculates the bounding rectangle for a quadratic Bezier segment. - - Args: - pt1: Start point of the Bezier as a 2D tuple. - pt2: Handle point of the Bezier as a 2D tuple. - pt3: End point of the Bezier as a 2D tuple. - - Returns: - A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``. - - Example:: - - >>> calcQuadraticBounds((0, 0), (50, 100), (100, 0)) - (0, 0, 100, 50.0) - >>> calcQuadraticBounds((0, 0), (100, 0), (100, 100)) - (0.0, 0.0, 100, 100) - """ - (ax, ay), (bx, by), (cx, cy) = calcQuadraticParameters(pt1, pt2, pt3) - ax2 = ax * 2.0 - ay2 = ay * 2.0 - roots = [] - if ax2 != 0: - roots.append(-bx / ax2) - if ay2 != 0: - roots.append(-by / ay2) - points = [ - (ax * t * t + bx * t + cx, ay * t * t + by * t + cy) - for t in roots - if 0 <= t < 1 - ] + [pt1, pt3] - return calcBounds(points) - - -def approximateCubicArcLength(pt1, pt2, pt3, pt4): - """Approximates the arc length for a cubic Bezier segment. - - Uses Gauss-Lobatto quadrature with n=5 points to approximate arc length. - See :func:`calcCubicArcLength` for a slower but more accurate result. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - - Returns: - Arc length value. - - Example:: - - >>> approximateCubicArcLength((0, 0), (25, 100), (75, 100), (100, 0)) - 190.04332968932817 - >>> approximateCubicArcLength((0, 0), (50, 0), (100, 50), (100, 100)) - 154.8852074945903 - >>> approximateCubicArcLength((0, 0), (50, 0), (100, 0), (150, 0)) # line; exact result should be 150. - 149.99999999999991 - >>> approximateCubicArcLength((0, 0), (50, 0), (100, 0), (-50, 0)) # cusp; exact result should be 150. 
- 136.9267662156362 - >>> approximateCubicArcLength((0, 0), (50, 0), (100, -50), (-50, 0)) # cusp - 154.80848416537057 - """ - return approximateCubicArcLengthC( - complex(*pt1), complex(*pt2), complex(*pt3), complex(*pt4) - ) - - -@cython.returns(cython.double) -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, -) -@cython.locals( - v0=cython.double, - v1=cython.double, - v2=cython.double, - v3=cython.double, - v4=cython.double, -) -def approximateCubicArcLengthC(pt1, pt2, pt3, pt4): - """Approximates the arc length for a cubic Bezier segment. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers. - - Returns: - Arc length value. - """ - # This, essentially, approximates the length-of-derivative function - # to be integrated with the best-matching seventh-degree polynomial - # approximation of it. - # - # https://en.wikipedia.org/wiki/Gaussian_quadrature#Gauss.E2.80.93Lobatto_rules - - # abs(BezierCurveC[3].diff(t).subs({t:T})) for T in sorted(0, .5±(3/7)**.5/2, .5, 1), - # weighted 1/20, 49/180, 32/90, 49/180, 1/20 respectively. - v0 = abs(pt2 - pt1) * 0.15 - v1 = abs( - -0.558983582205757 * pt1 - + 0.325650248872424 * pt2 - + 0.208983582205757 * pt3 - + 0.024349751127576 * pt4 - ) - v2 = abs(pt4 - pt1 + pt3 - pt2) * 0.26666666666666666 - v3 = abs( - -0.024349751127576 * pt1 - - 0.208983582205757 * pt2 - - 0.325650248872424 * pt3 - + 0.558983582205757 * pt4 - ) - v4 = abs(pt4 - pt3) * 0.15 - - return v0 + v1 + v2 + v3 + v4 - - -def calcCubicBounds(pt1, pt2, pt3, pt4): - """Calculates the bounding rectangle for a quadratic Bezier segment. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - - Returns: - A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``. - - Example:: - - >>> calcCubicBounds((0, 0), (25, 100), (75, 100), (100, 0)) - (0, 0, 100, 75.0) - >>> calcCubicBounds((0, 0), (50, 0), (100, 50), (100, 100)) - (0.0, 0.0, 100, 100) - >>> print("%f %f %f %f" % calcCubicBounds((50, 0), (0, 100), (100, 100), (50, 0))) - 35.566243 0.000000 64.433757 75.000000 - """ - (ax, ay), (bx, by), (cx, cy), (dx, dy) = calcCubicParameters(pt1, pt2, pt3, pt4) - # calc first derivative - ax3 = ax * 3.0 - ay3 = ay * 3.0 - bx2 = bx * 2.0 - by2 = by * 2.0 - xRoots = [t for t in solveQuadratic(ax3, bx2, cx) if 0 <= t < 1] - yRoots = [t for t in solveQuadratic(ay3, by2, cy) if 0 <= t < 1] - roots = xRoots + yRoots - - points = [ - ( - ax * t * t * t + bx * t * t + cx * t + dx, - ay * t * t * t + by * t * t + cy * t + dy, - ) - for t in roots - ] + [pt1, pt4] - return calcBounds(points) - - -def splitLine(pt1, pt2, where, isHorizontal): - """Split a line at a given coordinate. - - Args: - pt1: Start point of line as 2D tuple. - pt2: End point of line as 2D tuple. - where: Position at which to split the line. - isHorizontal: Direction of the ray splitting the line. If true, - ``where`` is interpreted as a Y coordinate; if false, then - ``where`` is interpreted as an X coordinate. - - Returns: - A list of two line segments (each line segment being two 2D tuples) - if the line was successfully split, or a list containing the original - line. 
- - Example:: - - >>> printSegments(splitLine((0, 0), (100, 100), 50, True)) - ((0, 0), (50, 50)) - ((50, 50), (100, 100)) - >>> printSegments(splitLine((0, 0), (100, 100), 100, True)) - ((0, 0), (100, 100)) - >>> printSegments(splitLine((0, 0), (100, 100), 0, True)) - ((0, 0), (0, 0)) - ((0, 0), (100, 100)) - >>> printSegments(splitLine((0, 0), (100, 100), 0, False)) - ((0, 0), (0, 0)) - ((0, 0), (100, 100)) - >>> printSegments(splitLine((100, 0), (0, 0), 50, False)) - ((100, 0), (50, 0)) - ((50, 0), (0, 0)) - >>> printSegments(splitLine((0, 100), (0, 0), 50, True)) - ((0, 100), (0, 50)) - ((0, 50), (0, 0)) - """ - pt1x, pt1y = pt1 - pt2x, pt2y = pt2 - - ax = pt2x - pt1x - ay = pt2y - pt1y - - bx = pt1x - by = pt1y - - a = (ax, ay)[isHorizontal] - - if a == 0: - return [(pt1, pt2)] - t = (where - (bx, by)[isHorizontal]) / a - if 0 <= t < 1: - midPt = ax * t + bx, ay * t + by - return [(pt1, midPt), (midPt, pt2)] - else: - return [(pt1, pt2)] - - -def splitQuadratic(pt1, pt2, pt3, where, isHorizontal): - """Split a quadratic Bezier curve at a given coordinate. - - Args: - pt1,pt2,pt3: Control points of the Bezier as 2D tuples. - where: Position at which to split the curve. - isHorizontal: Direction of the ray splitting the curve. If true, - ``where`` is interpreted as a Y coordinate; if false, then - ``where`` is interpreted as an X coordinate. - - Returns: - A list of two curve segments (each curve segment being three 2D tuples) - if the curve was successfully split, or a list containing the original - curve. - - Example:: - - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 150, False)) - ((0, 0), (50, 100), (100, 0)) - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, False)) - ((0, 0), (25, 50), (50, 50)) - ((50, 50), (75, 50), (100, 0)) - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, False)) - ((0, 0), (12.5, 25), (25, 37.5)) - ((25, 37.5), (62.5, 75), (100, 0)) - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, True)) - ((0, 0), (7.32233, 14.6447), (14.6447, 25)) - ((14.6447, 25), (50, 75), (85.3553, 25)) - ((85.3553, 25), (92.6777, 14.6447), (100, -7.10543e-15)) - >>> # XXX I'm not at all sure if the following behavior is desirable: - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, True)) - ((0, 0), (25, 50), (50, 50)) - ((50, 50), (50, 50), (50, 50)) - ((50, 50), (75, 50), (100, 0)) - """ - a, b, c = calcQuadraticParameters(pt1, pt2, pt3) - solutions = solveQuadratic( - a[isHorizontal], b[isHorizontal], c[isHorizontal] - where - ) - solutions = sorted(t for t in solutions if 0 <= t < 1) - if not solutions: - return [(pt1, pt2, pt3)] - return _splitQuadraticAtT(a, b, c, *solutions) - - -def splitCubic(pt1, pt2, pt3, pt4, where, isHorizontal): - """Split a cubic Bezier curve at a given coordinate. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - where: Position at which to split the curve. - isHorizontal: Direction of the ray splitting the curve. If true, - ``where`` is interpreted as a Y coordinate; if false, then - ``where`` is interpreted as an X coordinate. - - Returns: - A list of two curve segments (each curve segment being four 2D tuples) - if the curve was successfully split, or a list containing the original - curve. 
- - Example:: - - >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 150, False)) - ((0, 0), (25, 100), (75, 100), (100, 0)) - >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 50, False)) - ((0, 0), (12.5, 50), (31.25, 75), (50, 75)) - ((50, 75), (68.75, 75), (87.5, 50), (100, 0)) - >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 25, True)) - ((0, 0), (2.29379, 9.17517), (4.79804, 17.5085), (7.47414, 25)) - ((7.47414, 25), (31.2886, 91.6667), (68.7114, 91.6667), (92.5259, 25)) - ((92.5259, 25), (95.202, 17.5085), (97.7062, 9.17517), (100, 1.77636e-15)) - """ - a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4) - solutions = solveCubic( - a[isHorizontal], b[isHorizontal], c[isHorizontal], d[isHorizontal] - where - ) - solutions = sorted(t for t in solutions if 0 <= t < 1) - if not solutions: - return [(pt1, pt2, pt3, pt4)] - return _splitCubicAtT(a, b, c, d, *solutions) - - -def splitQuadraticAtT(pt1, pt2, pt3, *ts): - """Split a quadratic Bezier curve at one or more values of t. - - Args: - pt1,pt2,pt3: Control points of the Bezier as 2D tuples. - *ts: Positions at which to split the curve. - - Returns: - A list of curve segments (each curve segment being three 2D tuples). - - Examples:: - - >>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5)) - ((0, 0), (25, 50), (50, 50)) - ((50, 50), (75, 50), (100, 0)) - >>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5, 0.75)) - ((0, 0), (25, 50), (50, 50)) - ((50, 50), (62.5, 50), (75, 37.5)) - ((75, 37.5), (87.5, 25), (100, 0)) - """ - a, b, c = calcQuadraticParameters(pt1, pt2, pt3) - return _splitQuadraticAtT(a, b, c, *ts) - - -def splitCubicAtT(pt1, pt2, pt3, pt4, *ts): - """Split a cubic Bezier curve at one or more values of t. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - *ts: Positions at which to split the curve. - - Returns: - A list of curve segments (each curve segment being four 2D tuples). - - Examples:: - - >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5)) - ((0, 0), (12.5, 50), (31.25, 75), (50, 75)) - ((50, 75), (68.75, 75), (87.5, 50), (100, 0)) - >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5, 0.75)) - ((0, 0), (12.5, 50), (31.25, 75), (50, 75)) - ((50, 75), (59.375, 75), (68.75, 68.75), (77.3438, 56.25)) - ((77.3438, 56.25), (85.9375, 43.75), (93.75, 25), (100, 0)) - """ - a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4) - return _splitCubicAtT(a, b, c, d, *ts) - - -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, - a=cython.complex, - b=cython.complex, - c=cython.complex, - d=cython.complex, -) -def splitCubicAtTC(pt1, pt2, pt3, pt4, *ts): - """Split a cubic Bezier curve at one or more values of t. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers.. - *ts: Positions at which to split the curve. - - Yields: - Curve segments (each curve segment being four complex numbers). 
- """ - a, b, c, d = calcCubicParametersC(pt1, pt2, pt3, pt4) - yield from _splitCubicAtTC(a, b, c, d, *ts) - - -@cython.returns(cython.complex) -@cython.locals( - t=cython.double, - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, - pointAtT=cython.complex, - off1=cython.complex, - off2=cython.complex, -) -@cython.locals( - t2=cython.double, _1_t=cython.double, _1_t_2=cython.double, _2_t_1_t=cython.double -) -def splitCubicIntoTwoAtTC(pt1, pt2, pt3, pt4, t): - """Split a cubic Bezier curve at t. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers. - t: Position at which to split the curve. - - Returns: - A tuple of two curve segments (each curve segment being four complex numbers). - """ - t2 = t * t - _1_t = 1 - t - _1_t_2 = _1_t * _1_t - _2_t_1_t = 2 * t * _1_t - pointAtT = ( - _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4 - ) - off1 = _1_t_2 * pt1 + _2_t_1_t * pt2 + t2 * pt3 - off2 = _1_t_2 * pt2 + _2_t_1_t * pt3 + t2 * pt4 - - pt2 = pt1 + (pt2 - pt1) * t - pt3 = pt4 + (pt3 - pt4) * _1_t - - return ((pt1, pt2, off1, pointAtT), (pointAtT, off2, pt3, pt4)) - - -def _splitQuadraticAtT(a, b, c, *ts): - ts = list(ts) - segments = [] - ts.insert(0, 0.0) - ts.append(1.0) - ax, ay = a - bx, by = b - cx, cy = c - for i in range(len(ts) - 1): - t1 = ts[i] - t2 = ts[i + 1] - delta = t2 - t1 - # calc new a, b and c - delta_2 = delta * delta - a1x = ax * delta_2 - a1y = ay * delta_2 - b1x = (2 * ax * t1 + bx) * delta - b1y = (2 * ay * t1 + by) * delta - t1_2 = t1 * t1 - c1x = ax * t1_2 + bx * t1 + cx - c1y = ay * t1_2 + by * t1 + cy - - pt1, pt2, pt3 = calcQuadraticPoints((a1x, a1y), (b1x, b1y), (c1x, c1y)) - segments.append((pt1, pt2, pt3)) - return segments - - -def _splitCubicAtT(a, b, c, d, *ts): - ts = list(ts) - ts.insert(0, 0.0) - ts.append(1.0) - segments = [] - ax, ay = a - bx, by = b - cx, cy = c - dx, dy = d - for i in range(len(ts) - 1): - t1 = ts[i] - t2 = ts[i + 1] - delta = t2 - t1 - - delta_2 = delta * delta - delta_3 = delta * delta_2 - t1_2 = t1 * t1 - t1_3 = t1 * t1_2 - - # calc new a, b, c and d - a1x = ax * delta_3 - a1y = ay * delta_3 - b1x = (3 * ax * t1 + bx) * delta_2 - b1y = (3 * ay * t1 + by) * delta_2 - c1x = (2 * bx * t1 + cx + 3 * ax * t1_2) * delta - c1y = (2 * by * t1 + cy + 3 * ay * t1_2) * delta - d1x = ax * t1_3 + bx * t1_2 + cx * t1 + dx - d1y = ay * t1_3 + by * t1_2 + cy * t1 + dy - pt1, pt2, pt3, pt4 = calcCubicPoints( - (a1x, a1y), (b1x, b1y), (c1x, c1y), (d1x, d1y) - ) - segments.append((pt1, pt2, pt3, pt4)) - return segments - - -@cython.locals( - a=cython.complex, - b=cython.complex, - c=cython.complex, - d=cython.complex, - t1=cython.double, - t2=cython.double, - delta=cython.double, - delta_2=cython.double, - delta_3=cython.double, - a1=cython.complex, - b1=cython.complex, - c1=cython.complex, - d1=cython.complex, -) -def _splitCubicAtTC(a, b, c, d, *ts): - ts = list(ts) - ts.insert(0, 0.0) - ts.append(1.0) - for i in range(len(ts) - 1): - t1 = ts[i] - t2 = ts[i + 1] - delta = t2 - t1 - - delta_2 = delta * delta - delta_3 = delta * delta_2 - t1_2 = t1 * t1 - t1_3 = t1 * t1_2 - - # calc new a, b, c and d - a1 = a * delta_3 - b1 = (3 * a * t1 + b) * delta_2 - c1 = (2 * b * t1 + c + 3 * a * t1_2) * delta - d1 = a * t1_3 + b * t1_2 + c * t1 + d - pt1, pt2, pt3, pt4 = calcCubicPointsC(a1, b1, c1, d1) - yield (pt1, pt2, pt3, pt4) - - -# -# Equation solvers. 
-# - -from math import sqrt, acos, cos, pi - - -def solveQuadratic(a, b, c, sqrt=sqrt): - """Solve a quadratic equation. - - Solves *a*x*x + b*x + c = 0* where a, b and c are real. - - Args: - a: coefficient of *x²* - b: coefficient of *x* - c: constant term - - Returns: - A list of roots. Note that the returned list is neither guaranteed to - be sorted nor to contain unique values! - """ - if abs(a) < epsilon: - if abs(b) < epsilon: - # We have a non-equation; therefore, we have no valid solution - roots = [] - else: - # We have a linear equation with 1 root. - roots = [-c / b] - else: - # We have a true quadratic equation. Apply the quadratic formula to find two roots. - DD = b * b - 4.0 * a * c - if DD >= 0.0: - rDD = sqrt(DD) - roots = [(-b + rDD) / 2.0 / a, (-b - rDD) / 2.0 / a] - else: - # complex roots, ignore - roots = [] - return roots - - -def solveCubic(a, b, c, d): - """Solve a cubic equation. - - Solves *a*x*x*x + b*x*x + c*x + d = 0* where a, b, c and d are real. - - Args: - a: coefficient of *x³* - b: coefficient of *x²* - c: coefficient of *x* - d: constant term - - Returns: - A list of roots. Note that the returned list is neither guaranteed to - be sorted nor to contain unique values! - - Examples:: - - >>> solveCubic(1, 1, -6, 0) - [-3.0, -0.0, 2.0] - >>> solveCubic(-10.0, -9.0, 48.0, -29.0) - [-2.9, 1.0, 1.0] - >>> solveCubic(-9.875, -9.0, 47.625, -28.75) - [-2.911392, 1.0, 1.0] - >>> solveCubic(1.0, -4.5, 6.75, -3.375) - [1.5, 1.5, 1.5] - >>> solveCubic(-12.0, 18.0, -9.0, 1.50023651123) - [0.5, 0.5, 0.5] - >>> solveCubic( - ... 9.0, 0.0, 0.0, -7.62939453125e-05 - ... ) == [-0.0, -0.0, -0.0] - True - """ - # - # adapted from: - # CUBIC.C - Solve a cubic polynomial - # public domain by Ross Cottrell - # found at: http://www.strangecreations.com/library/snippets/Cubic.C - # - if abs(a) < epsilon: - # don't just test for zero; for very small values of 'a' solveCubic() - # returns unreliable results, so we fall back to quad. - return solveQuadratic(b, c, d) - a = float(a) - a1 = b / a - a2 = c / a - a3 = d / a - - Q = (a1 * a1 - 3.0 * a2) / 9.0 - R = (2.0 * a1 * a1 * a1 - 9.0 * a1 * a2 + 27.0 * a3) / 54.0 - - R2 = R * R - Q3 = Q * Q * Q - R2 = 0 if R2 < epsilon else R2 - Q3 = 0 if abs(Q3) < epsilon else Q3 - - R2_Q3 = R2 - Q3 - - if R2 == 0.0 and Q3 == 0.0: - x = round(-a1 / 3.0, epsilonDigits) - return [x, x, x] - elif R2_Q3 <= epsilon * 0.5: - # The epsilon * .5 above ensures that Q3 is not zero. 
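        # (Editorial note, not in the original file) This branch is Viete's
        # trigonometric solution for the case of three real roots; roots that
        # lie within epsilon of each other are merged and the results are
        # rounded to epsilonDigits before being returned in sorted order.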
- theta = acos(max(min(R / sqrt(Q3), 1.0), -1.0)) - rQ2 = -2.0 * sqrt(Q) - a1_3 = a1 / 3.0 - x0 = rQ2 * cos(theta / 3.0) - a1_3 - x1 = rQ2 * cos((theta + 2.0 * pi) / 3.0) - a1_3 - x2 = rQ2 * cos((theta + 4.0 * pi) / 3.0) - a1_3 - x0, x1, x2 = sorted([x0, x1, x2]) - # Merge roots that are close-enough - if x1 - x0 < epsilon and x2 - x1 < epsilon: - x0 = x1 = x2 = round((x0 + x1 + x2) / 3.0, epsilonDigits) - elif x1 - x0 < epsilon: - x0 = x1 = round((x0 + x1) / 2.0, epsilonDigits) - x2 = round(x2, epsilonDigits) - elif x2 - x1 < epsilon: - x0 = round(x0, epsilonDigits) - x1 = x2 = round((x1 + x2) / 2.0, epsilonDigits) - else: - x0 = round(x0, epsilonDigits) - x1 = round(x1, epsilonDigits) - x2 = round(x2, epsilonDigits) - return [x0, x1, x2] - else: - x = pow(sqrt(R2_Q3) + abs(R), 1 / 3.0) - x = x + Q / x - if R >= 0.0: - x = -x - x = round(x - a1 / 3.0, epsilonDigits) - return [x] - - -# -# Conversion routines for points to parameters and vice versa -# - - -def calcQuadraticParameters(pt1, pt2, pt3): - x2, y2 = pt2 - x3, y3 = pt3 - cx, cy = pt1 - bx = (x2 - cx) * 2.0 - by = (y2 - cy) * 2.0 - ax = x3 - cx - bx - ay = y3 - cy - by - return (ax, ay), (bx, by), (cx, cy) - - -def calcCubicParameters(pt1, pt2, pt3, pt4): - x2, y2 = pt2 - x3, y3 = pt3 - x4, y4 = pt4 - dx, dy = pt1 - cx = (x2 - dx) * 3.0 - cy = (y2 - dy) * 3.0 - bx = (x3 - x2) * 3.0 - cx - by = (y3 - y2) * 3.0 - cy - ax = x4 - dx - cx - bx - ay = y4 - dy - cy - by - return (ax, ay), (bx, by), (cx, cy), (dx, dy) - - -@cython.cfunc -@cython.inline -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, - a=cython.complex, - b=cython.complex, - c=cython.complex, -) -def calcCubicParametersC(pt1, pt2, pt3, pt4): - c = (pt2 - pt1) * 3.0 - b = (pt3 - pt2) * 3.0 - c - a = pt4 - pt1 - c - b - return (a, b, c, pt1) - - -def calcQuadraticPoints(a, b, c): - ax, ay = a - bx, by = b - cx, cy = c - x1 = cx - y1 = cy - x2 = (bx * 0.5) + cx - y2 = (by * 0.5) + cy - x3 = ax + bx + cx - y3 = ay + by + cy - return (x1, y1), (x2, y2), (x3, y3) - - -def calcCubicPoints(a, b, c, d): - ax, ay = a - bx, by = b - cx, cy = c - dx, dy = d - x1 = dx - y1 = dy - x2 = (cx / 3.0) + dx - y2 = (cy / 3.0) + dy - x3 = (bx + cx) / 3.0 + x2 - y3 = (by + cy) / 3.0 + y2 - x4 = ax + dx + cx + bx - y4 = ay + dy + cy + by - return (x1, y1), (x2, y2), (x3, y3), (x4, y4) - - -@cython.cfunc -@cython.inline -@cython.locals( - a=cython.complex, - b=cython.complex, - c=cython.complex, - d=cython.complex, - p2=cython.complex, - p3=cython.complex, - p4=cython.complex, -) -def calcCubicPointsC(a, b, c, d): - p2 = c * (1 / 3) + d - p3 = (b + c) * (1 / 3) + p2 - p4 = a + b + c + d - return (d, p2, p3, p4) - - -# -# Point at time -# - - -def linePointAtT(pt1, pt2, t): - """Finds the point at time `t` on a line. - - Args: - pt1, pt2: Coordinates of the line as 2D tuples. - t: The time along the line. - - Returns: - A 2D tuple with the coordinates of the point. - """ - return ((pt1[0] * (1 - t) + pt2[0] * t), (pt1[1] * (1 - t) + pt2[1] * t)) - - -def quadraticPointAtT(pt1, pt2, pt3, t): - """Finds the point at time `t` on a quadratic curve. - - Args: - pt1, pt2, pt3: Coordinates of the curve as 2D tuples. - t: The time along the curve. - - Returns: - A 2D tuple with the coordinates of the point. 
- """ - x = (1 - t) * (1 - t) * pt1[0] + 2 * (1 - t) * t * pt2[0] + t * t * pt3[0] - y = (1 - t) * (1 - t) * pt1[1] + 2 * (1 - t) * t * pt2[1] + t * t * pt3[1] - return (x, y) - - -def cubicPointAtT(pt1, pt2, pt3, pt4, t): - """Finds the point at time `t` on a cubic curve. - - Args: - pt1, pt2, pt3, pt4: Coordinates of the curve as 2D tuples. - t: The time along the curve. - - Returns: - A 2D tuple with the coordinates of the point. - """ - t2 = t * t - _1_t = 1 - t - _1_t_2 = _1_t * _1_t - x = ( - _1_t_2 * _1_t * pt1[0] - + 3 * (_1_t_2 * t * pt2[0] + _1_t * t2 * pt3[0]) - + t2 * t * pt4[0] - ) - y = ( - _1_t_2 * _1_t * pt1[1] - + 3 * (_1_t_2 * t * pt2[1] + _1_t * t2 * pt3[1]) - + t2 * t * pt4[1] - ) - return (x, y) - - -@cython.returns(cython.complex) -@cython.locals( - t=cython.double, - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, -) -@cython.locals(t2=cython.double, _1_t=cython.double, _1_t_2=cython.double) -def cubicPointAtTC(pt1, pt2, pt3, pt4, t): - """Finds the point at time `t` on a cubic curve. - - Args: - pt1, pt2, pt3, pt4: Coordinates of the curve as complex numbers. - t: The time along the curve. - - Returns: - A complex number with the coordinates of the point. - """ - t2 = t * t - _1_t = 1 - t - _1_t_2 = _1_t * _1_t - return _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4 - - -def segmentPointAtT(seg, t): - if len(seg) == 2: - return linePointAtT(*seg, t) - elif len(seg) == 3: - return quadraticPointAtT(*seg, t) - elif len(seg) == 4: - return cubicPointAtT(*seg, t) - raise ValueError("Unknown curve degree") - - -# -# Intersection finders -# - - -def _line_t_of_pt(s, e, pt): - sx, sy = s - ex, ey = e - px, py = pt - if abs(sx - ex) < epsilon and abs(sy - ey) < epsilon: - # Line is a point! - return -1 - # Use the largest - if abs(sx - ex) > abs(sy - ey): - return (px - sx) / (ex - sx) - else: - return (py - sy) / (ey - sy) - - -def _both_points_are_on_same_side_of_origin(a, b, origin): - xDiff = (a[0] - origin[0]) * (b[0] - origin[0]) - yDiff = (a[1] - origin[1]) * (b[1] - origin[1]) - return not (xDiff <= 0.0 and yDiff <= 0.0) - - -def lineLineIntersections(s1, e1, s2, e2): - """Finds intersections between two line segments. - - Args: - s1, e1: Coordinates of the first line as 2D tuples. - s2, e2: Coordinates of the second line as 2D tuples. - - Returns: - A list of ``Intersection`` objects, each object having ``pt``, ``t1`` - and ``t2`` attributes containing the intersection point, time on first - segment and time on second segment respectively. 
- - Examples:: - - >>> a = lineLineIntersections( (310,389), (453, 222), (289, 251), (447, 367)) - >>> len(a) - 1 - >>> intersection = a[0] - >>> intersection.pt - (374.44882952482897, 313.73458370177315) - >>> (intersection.t1, intersection.t2) - (0.45069111555824465, 0.5408153767394238) - """ - s1x, s1y = s1 - e1x, e1y = e1 - s2x, s2y = s2 - e2x, e2y = e2 - if ( - math.isclose(s2x, e2x) and math.isclose(s1x, e1x) and not math.isclose(s1x, s2x) - ): # Parallel vertical - return [] - if ( - math.isclose(s2y, e2y) and math.isclose(s1y, e1y) and not math.isclose(s1y, s2y) - ): # Parallel horizontal - return [] - if math.isclose(s2x, e2x) and math.isclose(s2y, e2y): # Line segment is tiny - return [] - if math.isclose(s1x, e1x) and math.isclose(s1y, e1y): # Line segment is tiny - return [] - if math.isclose(e1x, s1x): - x = s1x - slope34 = (e2y - s2y) / (e2x - s2x) - y = slope34 * (x - s2x) + s2y - pt = (x, y) - return [ - Intersection( - pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - ) - ] - if math.isclose(s2x, e2x): - x = s2x - slope12 = (e1y - s1y) / (e1x - s1x) - y = slope12 * (x - s1x) + s1y - pt = (x, y) - return [ - Intersection( - pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - ) - ] - - slope12 = (e1y - s1y) / (e1x - s1x) - slope34 = (e2y - s2y) / (e2x - s2x) - if math.isclose(slope12, slope34): - return [] - x = (slope12 * s1x - s1y - slope34 * s2x + s2y) / (slope12 - slope34) - y = slope12 * (x - s1x) + s1y - pt = (x, y) - if _both_points_are_on_same_side_of_origin( - pt, e1, s1 - ) and _both_points_are_on_same_side_of_origin(pt, s2, e2): - return [ - Intersection( - pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - ) - ] - return [] - - -def _alignment_transformation(segment): - # Returns a transformation which aligns a segment horizontally at the - # origin. Apply this transformation to curves and root-find to find - # intersections with the segment. - start = segment[0] - end = segment[-1] - angle = math.atan2(end[1] - start[1], end[0] - start[0]) - return Identity.rotate(-angle).translate(-start[0], -start[1]) - - -def _curve_line_intersections_t(curve, line): - aligned_curve = _alignment_transformation(line).transformPoints(curve) - if len(curve) == 3: - a, b, c = calcQuadraticParameters(*aligned_curve) - intersections = solveQuadratic(a[1], b[1], c[1]) - elif len(curve) == 4: - a, b, c, d = calcCubicParameters(*aligned_curve) - intersections = solveCubic(a[1], b[1], c[1], d[1]) - else: - raise ValueError("Unknown curve degree") - return sorted(i for i in intersections if 0.0 <= i <= 1) - - -def curveLineIntersections(curve, line): - """Finds intersections between a curve and a line. - - Args: - curve: List of coordinates of the curve segment as 2D tuples. - line: List of coordinates of the line segment as 2D tuples. - - Returns: - A list of ``Intersection`` objects, each object having ``pt``, ``t1`` - and ``t2`` attributes containing the intersection point, time on first - segment and time on second segment respectively. 
- - Examples:: - >>> curve = [ (100, 240), (30, 60), (210, 230), (160, 30) ] - >>> line = [ (25, 260), (230, 20) ] - >>> intersections = curveLineIntersections(curve, line) - >>> len(intersections) - 3 - >>> intersections[0].pt - (84.9000930760723, 189.87306176459828) - """ - if len(curve) == 3: - pointFinder = quadraticPointAtT - elif len(curve) == 4: - pointFinder = cubicPointAtT - else: - raise ValueError("Unknown curve degree") - intersections = [] - for t in _curve_line_intersections_t(curve, line): - pt = pointFinder(*curve, t) - # Back-project the point onto the line, to avoid problems with - # numerical accuracy in the case of vertical and horizontal lines - line_t = _line_t_of_pt(*line, pt) - pt = linePointAtT(*line, line_t) - intersections.append(Intersection(pt=pt, t1=t, t2=line_t)) - return intersections - - -def _curve_bounds(c): - if len(c) == 3: - return calcQuadraticBounds(*c) - elif len(c) == 4: - return calcCubicBounds(*c) - raise ValueError("Unknown curve degree") - - -def _split_segment_at_t(c, t): - if len(c) == 2: - s, e = c - midpoint = linePointAtT(s, e, t) - return [(s, midpoint), (midpoint, e)] - if len(c) == 3: - return splitQuadraticAtT(*c, t) - elif len(c) == 4: - return splitCubicAtT(*c, t) - raise ValueError("Unknown curve degree") - - -def _curve_curve_intersections_t( - curve1, curve2, precision=1e-3, range1=None, range2=None -): - bounds1 = _curve_bounds(curve1) - bounds2 = _curve_bounds(curve2) - - if not range1: - range1 = (0.0, 1.0) - if not range2: - range2 = (0.0, 1.0) - - # If bounds don't intersect, go home - intersects, _ = sectRect(bounds1, bounds2) - if not intersects: - return [] - - def midpoint(r): - return 0.5 * (r[0] + r[1]) - - # If they do overlap but they're tiny, approximate - if rectArea(bounds1) < precision and rectArea(bounds2) < precision: - return [(midpoint(range1), midpoint(range2))] - - c11, c12 = _split_segment_at_t(curve1, 0.5) - c11_range = (range1[0], midpoint(range1)) - c12_range = (midpoint(range1), range1[1]) - - c21, c22 = _split_segment_at_t(curve2, 0.5) - c21_range = (range2[0], midpoint(range2)) - c22_range = (midpoint(range2), range2[1]) - - found = [] - found.extend( - _curve_curve_intersections_t( - c11, c21, precision, range1=c11_range, range2=c21_range - ) - ) - found.extend( - _curve_curve_intersections_t( - c12, c21, precision, range1=c12_range, range2=c21_range - ) - ) - found.extend( - _curve_curve_intersections_t( - c11, c22, precision, range1=c11_range, range2=c22_range - ) - ) - found.extend( - _curve_curve_intersections_t( - c12, c22, precision, range1=c12_range, range2=c22_range - ) - ) - - unique_key = lambda ts: (int(ts[0] / precision), int(ts[1] / precision)) - seen = set() - unique_values = [] - - for ts in found: - key = unique_key(ts) - if key in seen: - continue - seen.add(key) - unique_values.append(ts) - - return unique_values - - -def curveCurveIntersections(curve1, curve2): - """Finds intersections between a curve and a curve. - - Args: - curve1: List of coordinates of the first curve segment as 2D tuples. - curve2: List of coordinates of the second curve segment as 2D tuples. - - Returns: - A list of ``Intersection`` objects, each object having ``pt``, ``t1`` - and ``t2`` attributes containing the intersection point, time on first - segment and time on second segment respectively. 
- - Examples:: - >>> curve1 = [ (10,100), (90,30), (40,140), (220,220) ] - >>> curve2 = [ (5,150), (180,20), (80,250), (210,190) ] - >>> intersections = curveCurveIntersections(curve1, curve2) - >>> len(intersections) - 3 - >>> intersections[0].pt - (81.7831487395506, 109.88904552375288) - """ - intersection_ts = _curve_curve_intersections_t(curve1, curve2) - return [ - Intersection(pt=segmentPointAtT(curve1, ts[0]), t1=ts[0], t2=ts[1]) - for ts in intersection_ts - ] - - -def segmentSegmentIntersections(seg1, seg2): - """Finds intersections between two segments. - - Args: - seg1: List of coordinates of the first segment as 2D tuples. - seg2: List of coordinates of the second segment as 2D tuples. - - Returns: - A list of ``Intersection`` objects, each object having ``pt``, ``t1`` - and ``t2`` attributes containing the intersection point, time on first - segment and time on second segment respectively. - - Examples:: - >>> curve1 = [ (10,100), (90,30), (40,140), (220,220) ] - >>> curve2 = [ (5,150), (180,20), (80,250), (210,190) ] - >>> intersections = segmentSegmentIntersections(curve1, curve2) - >>> len(intersections) - 3 - >>> intersections[0].pt - (81.7831487395506, 109.88904552375288) - >>> curve3 = [ (100, 240), (30, 60), (210, 230), (160, 30) ] - >>> line = [ (25, 260), (230, 20) ] - >>> intersections = segmentSegmentIntersections(curve3, line) - >>> len(intersections) - 3 - >>> intersections[0].pt - (84.9000930760723, 189.87306176459828) - - """ - # Arrange by degree - swapped = False - if len(seg2) > len(seg1): - seg2, seg1 = seg1, seg2 - swapped = True - if len(seg1) > 2: - if len(seg2) > 2: - intersections = curveCurveIntersections(seg1, seg2) - else: - intersections = curveLineIntersections(seg1, seg2) - elif len(seg1) == 2 and len(seg2) == 2: - intersections = lineLineIntersections(*seg1, *seg2) - else: - raise ValueError("Couldn't work out which intersection function to use") - if not swapped: - return intersections - return [Intersection(pt=i.pt, t1=i.t2, t2=i.t1) for i in intersections] - - -def _segmentrepr(obj): - """ - >>> _segmentrepr([1, [2, 3], [], [[2, [3, 4], [0.1, 2.2]]]]) - '(1, (2, 3), (), ((2, (3, 4), (0.1, 2.2))))' - """ - try: - it = iter(obj) - except TypeError: - return "%g" % obj - else: - return "(%s)" % ", ".join(_segmentrepr(x) for x in it) - - -def printSegments(segments): - """Helper for the doctests, displaying each segment in a list of - segments on a single line as a tuple. 
- """ - for segment in segments: - print(_segmentrepr(segment)) - - -if __name__ == "__main__": - import sys - import doctest - - sys.exit(doctest.testmod().failed) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_m_o_r_x.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_m_o_r_x.py deleted file mode 100644 index da299c6d85893e4113c459d503d77c6a120128ae..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_m_o_r_x.py +++ /dev/null @@ -1,6 +0,0 @@ -from .otBase import BaseTTXConverter - - -# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6morx.html -class table__m_o_r_x(BaseTTXConverter): - pass diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/exceptions.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/exceptions.py deleted file mode 100644 index 678ca7d5926798d1bd27363a019851717eee6e35..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/exceptions.py +++ /dev/null @@ -1,52 +0,0 @@ -from gradio_client.documentation import document, set_documentation_group - -set_documentation_group("helpers") - - -class DuplicateBlockError(ValueError): - """Raised when a Blocks contains more than one Block with the same id""" - - pass - - -class TooManyRequestsError(Exception): - """Raised when the Hugging Face API returns a 429 status code.""" - - pass - - -class InvalidApiNameError(ValueError): - pass - - -class ServerFailedToStartError(Exception): - pass - - -class InvalidBlockError(ValueError): - """Raised when an event in a Blocks contains a reference to a Block that is not in the original Blocks""" - - pass - - -InvalidApiName = InvalidApiNameError # backwards compatibility - - -@document() -class Error(Exception): - """ - This class allows you to pass custom error messages to the user. You can do so by raising a gr.Error("custom message") anywhere in the code, and when that line is executed the custom message will appear in a modal on the demo. - - Demos: calculator - """ - - def __init__(self, message: str = "Error raised."): - """ - Parameters: - message: The error message to be displayed to the user. - """ - self.message = message - super().__init__(self.message) - - def __str__(self): - return repr(self.message) diff --git a/spaces/declare-lab/tango/diffusers/examples/dreambooth/train_dreambooth.py b/spaces/declare-lab/tango/diffusers/examples/dreambooth/train_dreambooth.py deleted file mode 100644 index 7c02d154a0682de4855c1f9e99c47d0c5d1cb73a..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/dreambooth/train_dreambooth.py +++ /dev/null @@ -1,1039 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and - -import argparse -import hashlib -import itertools -import logging -import math -import os -import warnings -from pathlib import Path - -import accelerate -import numpy as np -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from huggingface_hub import create_repo, upload_folder -from packaging import version -from PIL import Image -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import AutoTokenizer, PretrainedConfig - -import diffusers -from diffusers import ( - AutoencoderKL, - DDPMScheduler, - DiffusionPipeline, - DPMSolverMultistepScheduler, - UNet2DConditionModel, -) -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available - - -if is_wandb_available(): - import wandb - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.15.0.dev0") - -logger = get_logger(__name__) - - -def log_validation(text_encoder, tokenizer, unet, vae, args, accelerator, weight_dtype, epoch): - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." - ) - # create pipeline (note: unet and vae are loaded again in float32) - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=accelerator.unwrap_model(text_encoder), - tokenizer=tokenizer, - unet=accelerator.unwrap_model(unet), - vae=vae, - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - generator = None if args.seed is None else torch.Generator(device=accelerator.device).manual_seed(args.seed) - images = [] - for _ in range(args.num_validation_images): - with torch.autocast("cuda"): - image = pipeline(args.validation_prompt, num_inference_steps=25, generator=generator).images[0] - images.append(image) - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") for i, image in enumerate(images) - ] - } - ) - - del pipeline - torch.cuda.empty_cache() - - -def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str, revision: str): - text_encoder_config = PretrainedConfig.from_pretrained( - pretrained_model_name_or_path, - subfolder="text_encoder", - revision=revision, - ) - model_class = text_encoder_config.architectures[0] - - if model_class == "CLIPTextModel": - from transformers import CLIPTextModel - - return CLIPTextModel - elif model_class == "RobertaSeriesModelWithTransformation": - from diffusers.pipelines.alt_diffusion.modeling_roberta_series import RobertaSeriesModelWithTransformation - - return RobertaSeriesModelWithTransformation - else: - raise ValueError(f"{model_class} is not supported.") - - -def 
parse_args(input_args=None): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help=( - "Revision of pretrained model identifier from huggingface.co/models. Trainable model components should be" - " float32 precision." - ), - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - required=True, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default=None, - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If there are not enough images already present in" - " class_data_dir, additional images will be sampled with class_prompt." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument( - "--train_text_encoder", - action="store_true", - help="Whether to train the text encoder. If set, the text encoder should be float32 precision.", - ) - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. Checkpoints can be used for resuming training via `--resume_from_checkpoint`. 
" - "In the case that the checkpoint is better than the final trained model, the checkpoint can also be used for inference." - "Using a checkpoint for inference requires separate loading of the original pipeline and the individual checkpointed model components." - "See https://huggingface.co/docs/diffusers/main/en/training/dreambooth#performing-inference-using-a-saved-checkpoint for step by step" - "instructions." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=( - "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`." - " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state" - " for more details" - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-6, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--lr_num_cycles", - type=int, - default=1, - help="Number of hard resets of the lr in cosine_with_restarts scheduler.", - ) - parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.") - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." 
- ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument( - "--validation_prompt", - type=str, - default=None, - help="A prompt that is used during validation to verify that the model is learning.", - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_steps", - type=int, - default=100, - help=( - "Run validation every X steps. Validation consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`" - " and logging the images." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--prior_generation_precision", - type=str, - default=None, - choices=["no", "fp32", "fp16", "bf16"], - help=( - "Choose prior generation precision between fp32, fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to fp16 if a GPU is available else fp32." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." - ) - parser.add_argument( - "--set_grads_to_none", - action="store_true", - help=( - "Save more memory by using setting grads to None instead of zero. 
Be aware, that this changes certain" - " behaviors, so disable this argument if it causes any problems. More info:" - " https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html" - ), - ) - - parser.add_argument( - "--offset_noise", - action="store_true", - default=False, - help=( - "Fine-tuning against a modified noise" - " See: https://www.crosslabs.org//blog/diffusion-with-offset-noise for more information." - ), - ) - - if input_args is not None: - args = parser.parse_args(input_args) - else: - args = parser.parse_args() - - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.with_prior_preservation: - if args.class_data_dir is None: - raise ValueError("You must specify a data directory for class images.") - if args.class_prompt is None: - raise ValueError("You must specify prompt for class images.") - else: - # logger is not available yet - if args.class_data_dir is not None: - warnings.warn("You need not use --class_data_dir without --with_prior_preservation.") - if args.class_prompt is not None: - warnings.warn("You need not use --class_prompt without --with_prior_preservation.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. - """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - class_data_root=None, - class_prompt=None, - class_num=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError(f"Instance {self.instance_data_root} images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - if class_num is not None: - self.num_class_images = min(len(self.class_images_path), class_num) - else: - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - instance_image = Image.open(self.instance_images_path[index % self.num_instance_images]) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - self.instance_prompt, - truncation=True, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % 
self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - truncation=True, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - return example - - -def collate_fn(examples, with_prior_preservation=False): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = torch.cat(input_ids, dim=0) - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." - - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - - -def main(args): - logging_dir = Path(args.output_dir, args.logging_dir) - - accelerator_project_config = ProjectConfiguration(total_limit=args.checkpoints_total_limit) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - logging_dir=logging_dir, - project_config=accelerator_project_config, - ) - - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1: - raise ValueError( - "Gradient accumulation is not supported when training the text encoder in distributed training. " - "Please set gradient_accumulation_steps to 1. This feature will be supported in the future." - ) - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Generate class images if prior preservation is enabled. 
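    # Prior preservation mixes the user's instance images with generic "class" images generated
    # from --class_prompt, so the fine-tuned model keeps its general notion of the class while
    # learning the new subject. As a rough illustration of how this path is exercised (the model
    # id, folders and prompts below are placeholders; only the flags come from parse_args above),
    # a launch command might look like:
    #
    #   accelerate launch train_dreambooth.py \
    #     --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
    #     --instance_data_dir="./instance_images" \
    #     --instance_prompt="a photo of sks dog" \
    #     --with_prior_preservation --prior_loss_weight=1.0 \
    #     --class_data_dir="./class_images" \
    #     --class_prompt="a photo of a dog" \
    #     --num_class_images=100 \
    #     --output_dir="./dreambooth-prior-preservation"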
- if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - if args.prior_generation_precision == "fp32": - torch_dtype = torch.float32 - elif args.prior_generation_precision == "fp16": - torch_dtype = torch.float16 - elif args.prior_generation_precision == "bf16": - torch_dtype = torch.bfloat16 - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - torch_dtype=torch_dtype, - safety_checker=None, - revision=args.revision, - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - hash_image = hashlib.sha1(image.tobytes()).hexdigest() - image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" - image.save(image_filename) - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) - elif args.pretrained_model_name_or_path: - tokenizer = AutoTokenizer.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="tokenizer", - revision=args.revision, - use_fast=False, - ) - - # import correct text encoder class - text_encoder_cls = import_model_class_from_model_name_or_path(args.pretrained_model_name_or_path, args.revision) - - # Load scheduler and models - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - text_encoder = text_encoder_cls.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - - # `accelerate` 0.16.0 will have better support for customized saving - if version.parse(accelerate.__version__) >= version.parse("0.16.0"): - # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format - def save_model_hook(models, weights, output_dir): - for model in models: - sub_dir = "unet" if type(model) == type(unet) else "text_encoder" - model.save_pretrained(os.path.join(output_dir, sub_dir)) - - # make sure to pop weight so that corresponding model is not saved 
again - weights.pop() - - def load_model_hook(models, input_dir): - while len(models) > 0: - # pop models so that they are not loaded again - model = models.pop() - - if type(model) == type(text_encoder): - # load transformers style into model - load_model = text_encoder_cls.from_pretrained(input_dir, subfolder="text_encoder") - model.config = load_model.config - else: - # load diffusers style into model - load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") - model.register_to_config(**load_model.config) - - model.load_state_dict(load_model.state_dict()) - del load_model - - accelerator.register_save_state_pre_hook(save_model_hook) - accelerator.register_load_state_pre_hook(load_model_hook) - - vae.requires_grad_(False) - if not args.train_text_encoder: - text_encoder.requires_grad_(False) - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - if args.train_text_encoder: - text_encoder.gradient_checkpointing_enable() - - # Check that all trainable models are in full precision - low_precision_error_string = ( - "Please make sure to always have all model weights in full float32 precision when starting training - even if" - " doing mixed precision training. copy of the weights should still be float32." - ) - - if accelerator.unwrap_model(unet).dtype != torch.float32: - raise ValueError( - f"Unet loaded as datatype {accelerator.unwrap_model(unet).dtype}. {low_precision_error_string}" - ) - - if args.train_text_encoder and accelerator.unwrap_model(text_encoder).dtype != torch.float32: - raise ValueError( - f"Text encoder loaded as datatype {accelerator.unwrap_model(text_encoder).dtype}." - f" {low_precision_error_string}" - ) - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." 
- ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - # Optimizer creation - params_to_optimize = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters() - ) - optimizer = optimizer_class( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Dataset and DataLoaders creation: - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - class_num=args.num_class_images, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - ) - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - batch_size=args.train_batch_size, - shuffle=True, - collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), - num_workers=args.dataloader_num_workers, - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - num_cycles=args.lr_num_cycles, - power=args.lr_power, - ) - - # Prepare everything with our `accelerator`. - if args.train_text_encoder: - unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, text_encoder, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move vae and text_encoder to device and cast to weight_dtype - vae.to(accelerator.device, dtype=weight_dtype) - if not args.train_text_encoder: - text_encoder.to(accelerator.device, dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth", config=vars(args)) - - # Train! 
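    # As a rough worked example of the effective batch size computed and logged below (numbers
    # are hypothetical): --train_batch_size=4 on 2 processes with --gradient_accumulation_steps=2
    # gives 4 * 2 * 2 = 16 images per optimizer update.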
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the mos recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. 
- progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - if args.train_text_encoder: - text_encoder.train() - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample() - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - if args.offset_noise: - noise = torch.randn_like(latents) + 0.1 * torch.randn( - latents.shape[0], latents.shape[1], 1, 1, device=latents.device - ) - else: - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. - model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. 
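                    # Because collate_fn stacks instance examples first and class examples second,
                    # the chunked objective on the next line works out to:
                    #   loss = MSE(model_pred_instance, target_instance)
                    #          + prior_loss_weight * MSE(model_pred_class, target_class)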
- loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad(set_to_none=args.set_grads_to_none) - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - if accelerator.is_main_process: - if global_step % args.checkpointing_steps == 0: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - if args.validation_prompt is not None and global_step % args.validation_steps == 0: - log_validation(text_encoder, tokenizer, unet, vae, args, accelerator, weight_dtype, epoch) - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - # Create the pipeline using using the trained modules and save it. - accelerator.wait_for_everyone() - if accelerator.is_main_process: - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - revision=args.revision, - ) - pipeline.save_pretrained(args.output_dir) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - accelerator.end_training() - - -if __name__ == "__main__": - args = parse_args() - main(args) diff --git a/spaces/descript/vampnet/scripts/utils/gtzan_embeddings.py b/spaces/descript/vampnet/scripts/utils/gtzan_embeddings.py deleted file mode 100644 index 78a6e318fbba98355fb48aa6ea1c74b0b83ff287..0000000000000000000000000000000000000000 --- a/spaces/descript/vampnet/scripts/utils/gtzan_embeddings.py +++ /dev/null @@ -1,263 +0,0 @@ -""" -TODO: train a linear probe -usage: - python gtzan_embeddings.py --args.load conf/interface.yml --Interface.device cuda --path_to_gtzan /path/to/gtzan/genres_original --output_dir /path/to/output -""" -from pathlib import Path -from typing import List - -import audiotools as at -from audiotools import AudioSignal -import argbind -import torch -import numpy as np -import zipfile -import json - -from vampnet.interface import Interface -import tqdm - -# bind the Interface to argbind -Interface = argbind.bind(Interface) - -DEBUG = False - -def smart_plotly_export(fig, save_path): - img_format = save_path.split('.')[-1] - if img_format == 'html': - fig.write_html(save_path) - elif img_format == 'bytes': - return fig.to_image(format='png') - #TODO: come back and make this prettier - elif img_format == 'numpy': - import io - from PIL import Image - - def plotly_fig2array(fig): - #convert Plotly fig to an array - fig_bytes = fig.to_image(format="png", width=1200, height=700) - buf = io.BytesIO(fig_bytes) - img = Image.open(buf) - return np.asarray(img) - - return plotly_fig2array(fig) - elif img_format == 'jpeg' or 'png' or 'webp': - fig.write_image(save_path) - else: - raise ValueError("invalid image format") - -def dim_reduce(emb, 
labels, save_path, n_components=3, method='tsne', title=''): - """ - dimensionality reduction for visualization! - saves an html plotly figure to save_path - parameters: - emb (np.ndarray): the samples to be reduces with shape (samples, features) - labels (list): list of labels for embedding - save_path (str): path where u wanna save ur figure - method (str): umap, tsne, or pca - title (str): title for ur figure - returns: - proj (np.ndarray): projection vector with shape (samples, dimensions) - """ - import pandas as pd - import plotly.express as px - if method == 'umap': - reducer = umap.UMAP(n_components=n_components) - elif method == 'tsne': - from sklearn.manifold import TSNE - reducer = TSNE(n_components=n_components) - elif method == 'pca': - from sklearn.decomposition import PCA - reducer = PCA(n_components=n_components) - else: - raise ValueError - - proj = reducer.fit_transform(emb) - - if n_components == 2: - df = pd.DataFrame(dict( - x=proj[:, 0], - y=proj[:, 1], - instrument=labels - )) - fig = px.scatter(df, x='x', y='y', color='instrument', - title=title+f"_{method}") - - elif n_components == 3: - df = pd.DataFrame(dict( - x=proj[:, 0], - y=proj[:, 1], - z=proj[:, 2], - instrument=labels - )) - fig = px.scatter_3d(df, x='x', y='y', z='z', - color='instrument', - title=title) - else: - raise ValueError("cant plot more than 3 components") - - fig.update_traces(marker=dict(size=6, - line=dict(width=1, - color='DarkSlateGrey')), - selector=dict(mode='markers')) - - return smart_plotly_export(fig, save_path) - - - -# per JukeMIR, we want the emebddings from the middle layer? -def vampnet_embed(sig: AudioSignal, interface: Interface, layer=10): - with torch.inference_mode(): - # preprocess the signal - sig = interface.preprocess(sig) - - # get the coarse vampnet model - vampnet = interface.coarse - - # get the tokens - z = interface.encode(sig)[:, :vampnet.n_codebooks, :] - z_latents = vampnet.embedding.from_codes(z, interface.codec) - - # do a forward pass through the model, get the embeddings - _z, embeddings = vampnet(z_latents, return_activations=True) - # print(f"got embeddings with shape {embeddings.shape}") - # [layer, batch, time, n_dims] - # [20, 1, 600ish, 768] - - - # squeeze batch dim (1 bc layer should be dim 0) - assert embeddings.shape[1] == 1, f"expected batch dim to be 1, got {embeddings.shape[0]}" - embeddings = embeddings.squeeze(1) - - num_layers = embeddings.shape[0] - assert layer < num_layers, f"layer {layer} is out of bounds for model with {num_layers} layers" - - # do meanpooling over the time dimension - embeddings = embeddings.mean(dim=-2) - # [20, 768] - - # return the embeddings - return embeddings - -from dataclasses import dataclass, fields -@dataclass -class Embedding: - genre: str - filename: str - embedding: np.ndarray - - def save(self, path): - """Save the Embedding object to a given path as a zip file.""" - with zipfile.ZipFile(path, 'w') as archive: - - # Save numpy array - with archive.open('embedding.npy', 'w') as f: - np.save(f, self.embedding) - - # Save non-numpy data as json - non_numpy_data = {f.name: getattr(self, f.name) for f in fields(self) if f.name != 'embedding'} - with archive.open('data.json', 'w') as f: - f.write(json.dumps(non_numpy_data).encode('utf-8')) - - @classmethod - def load(cls, path): - """Load the Embedding object from a given zip path.""" - with zipfile.ZipFile(path, 'r') as archive: - - # Load numpy array - with archive.open('embedding.npy') as f: - embedding = np.load(f) - - # Load non-numpy data from json - 
with archive.open('data.json') as f: - data = json.loads(f.read().decode('utf-8')) - - return cls(embedding=embedding, **data) - - -@argbind.bind(without_prefix=True) -def main( - path_to_gtzan: str = None, - cache_dir: str = "./.gtzan_emb_cache", - output_dir: str = "./gtzan_vampnet_embeddings", - layers: List[int] = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19] -): - path_to_gtzan = Path(path_to_gtzan) - assert path_to_gtzan.exists(), f"{path_to_gtzan} does not exist" - - cache_dir = Path(cache_dir) - output_dir = Path(output_dir) - output_dir.mkdir(exist_ok=True, parents=True) - - # load our interface - # argbind will automatically load the default config, - interface = Interface() - - # gtzan should have a folder for each genre, so let's get the list of genres - genres = [Path(x).name for x in path_to_gtzan.iterdir() if x.is_dir()] - print(f"Found {len(genres)} genres") - print(f"genres: {genres}") - - # collect audio files, genres, and embeddings - data = [] - for genre in genres: - audio_files = list(at.util.find_audio(path_to_gtzan / genre)) - print(f"Found {len(audio_files)} audio files for genre {genre}") - - for audio_file in tqdm.tqdm(audio_files, desc=f"embedding genre {genre}"): - # check if we have a cached embedding for this file - cached_path = (cache_dir / f"{genre}_{audio_file.stem}.emb") - if cached_path.exists(): - # if so, load it - if DEBUG: - print(f"loading cached embedding for {cached_path.stem}") - embedding = Embedding.load(cached_path) - data.append(embedding) - else: - try: - sig = AudioSignal(audio_file) - except Exception as e: - print(f"failed to load {audio_file.name} with error {e}") - print(f"skipping {audio_file.name}") - continue - - # gets the embedding - emb = vampnet_embed(sig, interface).cpu().numpy() - - # create an embedding we can save/load - embedding = Embedding( - genre=genre, - filename=audio_file.name, - embedding=emb - ) - - # cache the embeddings - cached_path.parent.mkdir(exist_ok=True, parents=True) - embedding.save(cached_path) - - # now, let's do a dim reduction on the embeddings - # and visualize them. - - # collect a list of embeddings and labels - embeddings = [d.embedding for d in data] - labels = [d.genre for d in data] - - # convert the embeddings to a numpy array - embeddings = np.stack(embeddings) - - # do dimensionality reduction for each layer we're given - for layer in tqdm.tqdm(layers, desc="dim reduction"): - dim_reduce( - embeddings[:, layer, :], labels, - save_path=str(output_dir / f'vampnet-gtzan-layer={layer}.html'), - n_components=2, method='tsne', - title=f'vampnet-gtzan-layer={layer}' - ) - - - - -if __name__ == "__main__": - args = argbind.parse_args() - with argbind.scope(args): - main() \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Amagami Ss Pc Game English Download 113 PORTABLE.md b/spaces/diacanFperku/AutoGPT/Amagami Ss Pc Game English Download 113 PORTABLE.md deleted file mode 100644 index f948efe4555d2e857f8956b8bf653a8b06e71474..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Amagami Ss Pc Game English Download 113 PORTABLE.md +++ /dev/null @@ -1,6 +0,0 @@ -

    amagami ss pc game english download 113


    Download Zip: https://gohhs.com/2uFUDd



    - -Hi, I watched Amagami SS and really enjoyed it. ... its visual novel with my PC. Is there any English version of this game that I can play with my ...
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Bygg.Biler.Med.Mulle.Mekk.NORWEGiAN-NORBiTS PORTABLE.md b/spaces/diacanFperku/AutoGPT/Bygg.Biler.Med.Mulle.Mekk.NORWEGiAN-NORBiTS PORTABLE.md deleted file mode 100644 index e7e4e16b564098e95e2d60146e0311f5d5e0370b..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Bygg.Biler.Med.Mulle.Mekk.NORWEGiAN-NORBiTS PORTABLE.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Bygg.Biler.Med.Mulle.Mekk.NORWEGiAN-NORBiTS


    Download Zip ->>> https://gohhs.com/2uFVuH



    -
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Download KiGO Primo V2.4 Middle East Maps _BEST_.md b/spaces/diacanFperku/AutoGPT/Download KiGO Primo V2.4 Middle East Maps _BEST_.md deleted file mode 100644 index d50e650363e751caa5b88534c5be6e4ab05b9b15..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Download KiGO Primo V2.4 Middle East Maps _BEST_.md +++ /dev/null @@ -1,30 +0,0 @@ -
    -

    How to Download k:iGO Primo v2.4 Middle East Maps for Your GPS Device

    -

If you are looking for reliable, up-to-date navigation software for your GPS device, you might want to consider downloading k:iGO Primo v2.4 Middle East Maps. The software is compatible with most GPS devices running Windows CE or Android, and it covers the latest maps of the Middle East region, including Kuwait, Saudi Arabia, the UAE, Qatar, Bahrain, Oman, Yemen, Jordan, Lebanon, Syria, Iraq, Iran, Israel, Palestine, Egypt and more.

    -

    download k:iGO Primo v2.4 Middle East Maps


    Download Zip ››› https://gohhs.com/2uFSPk



    -

k:iGO Primo v2.4 Middle East Maps is premium software that offers high-quality 3D graphics, realistic landmarks and buildings, accurate routing and guidance, speed camera alerts, points of interest, voice recognition and more. It also supports online services such as weather information, traffic updates, fuel prices and parking availability. You can customize the software to your preferences and needs, for example by changing the language, units, map colors, vehicle icons and more.

    -

    To download k:iGO Primo v2.4 Middle East Maps for your GPS device, you will need to follow these steps:

    -
    1. Make sure your GPS device has enough free memory space to store the software and the maps. You will need at least 4 GB of free space.
    2. Connect your GPS device to your computer using a USB cable or a memory card reader.
    3. Download the k:iGO Primo v2.4 Middle East Maps software from a trusted source. You can find some links below:
    4. Extract the downloaded files using a program like WinRAR or 7-Zip.
    5. Copy the extracted files to the root directory of your GPS device or memory card (see the sketch after this list).
    6. Disconnect your GPS device from your computer and restart it.
    7. Select k:iGO Primo as your navigation software from the menu of your GPS device.
    8. Enjoy your new maps and features!
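    If you prefer to script steps 4 and 5, the short Python sketch below shows one way to do it. Treat it as an illustration only: the archive name, the extraction folder and the drive letter of your GPS device or memory card are placeholder values that will differ on your system.

    import shutil
    import zipfile
    from pathlib import Path

    archive = Path("igo_primo_v2.4_middle_east.zip")  # placeholder name of the downloaded archive
    extract_dir = Path("primo_extracted")             # temporary folder for the extracted files
    device_root = Path("E:/")                         # placeholder root of the GPS device or memory card

    # Step 4: extract the downloaded archive.
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(extract_dir)

    # Step 5: copy everything to the root directory of the device or memory card.
    for item in extract_dir.iterdir():
        target = device_root / item.name
        if item.is_dir():
            shutil.copytree(item, target, dirs_exist_ok=True)
        else:
            shutil.copy2(item, target)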

    We hope this article helped you download k:iGO Primo v2.4 Middle East Maps for your GPS device. If you have any questions or feedback, please let us know in the comments below.

    -

    - -

k:iGO Primo is one of the most popular and advanced navigation software packages in the world. It was developed by NNG, a Hungarian company that specializes in GPS and navigation solutions, and it is based on the iGO engine, which has been used by many other brands and devices, such as Becker, Clarion, Pioneer, LG and more.

    -

    k:iGO Primo offers a user-friendly and intuitive interface that allows you to easily access all the functions and settings of the software. You can choose from different modes of navigation, such as car, truck, pedestrian, bicycle or public transport. You can also plan your route according to various criteria, such as fastest, shortest, economical or green. You can also avoid toll roads, ferries, highways or unpaved roads if you wish.

    -

    k:iGO Primo also provides you with detailed and accurate maps of the Middle East region, which are updated regularly to reflect the latest changes and developments. You can view the maps in 2D or 3D mode, and zoom in or out as you like. You can also see realistic 3D representations of landmarks and buildings, which help you to orient yourself and recognize your surroundings. You can also switch to night mode or day mode depending on the time of day.

    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/RT3 Upgrade 6.51 Na 6.63 Build 890 CAN.42 __TOP__.md b/spaces/diacanFperku/AutoGPT/RT3 Upgrade 6.51 Na 6.63 Build 890 CAN.42 __TOP__.md deleted file mode 100644 index b1a555ca0558cf25ae4696a70a919f162cf68544..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/RT3 Upgrade 6.51 Na 6.63 Build 890 CAN.42 __TOP__.md +++ /dev/null @@ -1,6 +0,0 @@ -

    RT3 Upgrade 6.51 na 6.63 build 890 CAN.42


    DOWNLOAD ››››› https://gohhs.com/2uFTdy



    -
    -
    -

    diff --git a/spaces/dineshreddy/WALT/mmdet/models/detectors/fovea.py b/spaces/dineshreddy/WALT/mmdet/models/detectors/fovea.py deleted file mode 100644 index 22a578efffbd108db644d907bae95c7c8df31f2e..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/detectors/fovea.py +++ /dev/null @@ -1,17 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class FOVEA(SingleStageDetector): - """Implementation of `FoveaBox `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(FOVEA, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/dirge/voicevox/build_util/codesign.bash b/spaces/dirge/voicevox/build_util/codesign.bash deleted file mode 100644 index f8f79f99c6700edff198b60b44aac11960c7f62d..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/build_util/codesign.bash +++ /dev/null @@ -1,49 +0,0 @@ -# !!! コードサイニング証明書を取り扱うので取り扱い注意 !!! - -set -eu - -if [ ! -v CERT_BASE64 ]; then - echo "CERT_BASE64が未定義です" - exit 1 -fi -if [ ! -v CERT_PASSWORD ]; then - echo "CERT_PASSWORDが未定義です" - exit 1 -fi - -if [ $# -ne 1 ]; then - echo "引数の数が一致しません" - exit 1 -fi -target_file_glob="$1" - -# 証明書 -CERT_PATH=cert.pfx -echo -n "$CERT_BASE64" | base64 -d - > $CERT_PATH - -# 指定ファイルに署名する -function codesign() { - TARGET="$1" - SIGNTOOL=$(find "C:/Program Files (x86)/Windows Kits/10/App Certification Kit" -name "signtool.exe" | sort -V | tail -n 1) - powershell "& '$SIGNTOOL' sign /fd SHA256 /td SHA256 /tr http://timestamp.digicert.com /f $CERT_PATH /p $CERT_PASSWORD '$TARGET'" -} - -# 指定ファイルが署名されているか -function is_signed() { - TARGET="$1" - SIGNTOOL=$(find "C:/Program Files (x86)/Windows Kits/10/App Certification Kit" -name "signtool.exe" | sort -V | tail -n 1) - powershell "& '$SIGNTOOL' verify /pa '$TARGET'" || return 1 -} - -# 署名されていなければ署名 -ls $target_file_glob | while read target_file; do - if is_signed "$target_file"; then - echo "署名済み: $target_file" - else - echo "署名: $target_file" - codesign "$target_file" - fi -done - -# 証明書を消去 -rm $CERT_PATH diff --git a/spaces/dragao-elastico/RVC_V2/vc_infer_pipeline.py b/spaces/dragao-elastico/RVC_V2/vc_infer_pipeline.py deleted file mode 100644 index a0b50d4c703b7638d7c951c9d820a1e59c275fc3..0000000000000000000000000000000000000000 --- a/spaces/dragao-elastico/RVC_V2/vc_infer_pipeline.py +++ /dev/null @@ -1,646 +0,0 @@ -import numpy as np, parselmouth, torch, pdb, sys, os -from time import time as ttime -import torch.nn.functional as F -import torchcrepe # Fork feature. Use the crepe f0 algorithm. 
New dependency (pip install torchcrepe) -from torch import Tensor -import scipy.signal as signal -import pyworld, os, traceback, faiss, librosa, torchcrepe -from scipy import signal -from functools import lru_cache - -now_dir = os.getcwd() -sys.path.append(now_dir) - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - # Fork Feature: Get the best torch device to use for f0 algorithms that require a torch device. Will return the type (torch.device) - def get_optimal_torch_device(self, index: int = 0) -> torch.device: - # Get cuda device - if torch.cuda.is_available(): - return torch.device( - f"cuda:{index % torch.cuda.device_count()}" - ) # Very fast - elif torch.backends.mps.is_available(): - return torch.device("mps") - # Insert an else here to grab "xla" devices if available. TO DO later. Requires the torch_xla.core.xla_model library - # Else wise return the "cpu" as a torch device, - return torch.device("cpu") - - # Fork Feature: Compute f0 with the crepe method - def get_f0_crepe_computation( - self, - x, - f0_min, - f0_max, - p_len, - hop_length=160, # 512 before. Hop length changes the speed that the voice jumps to a different dramatic pitch. Lower hop lengths means more pitch accuracy but longer inference time. - model="full", # Either use crepe-tiny "tiny" or crepe "full". Default is full - ): - x = x.astype( - np.float32 - ) # fixes the F.conv2D exception. We needed to convert double to float. 
- x /= np.quantile(np.abs(x), 0.999) - torch_device = self.get_optimal_torch_device() - audio = torch.from_numpy(x).to(torch_device, copy=True) - audio = torch.unsqueeze(audio, dim=0) - if audio.ndim == 2 and audio.shape[0] > 1: - audio = torch.mean(audio, dim=0, keepdim=True).detach() - audio = audio.detach() - print("Initiating prediction with a crepe_hop_length of: " + str(hop_length)) - pitch: Tensor = torchcrepe.predict( - audio, - self.sr, - hop_length, - f0_min, - f0_max, - model, - batch_size=hop_length * 2, - device=torch_device, - pad=True, - ) - p_len = p_len or x.shape[0] // hop_length - # Resize the pitch for final f0 - source = np.array(pitch.squeeze(0).cpu().float().numpy()) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * p_len, len(source)) / p_len, - np.arange(0, len(source)), - source, - ) - f0 = np.nan_to_num(target) - return f0 # Resized f0 - - def get_f0_official_crepe_computation( - self, - x, - f0_min, - f0_max, - model="full", - ): - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - return f0 - - # Fork Feature: Compute pYIN f0 method - def get_f0_pyin_computation(self, x, f0_min, f0_max): - y, sr = librosa.load("saudio/Sidney.wav", self.sr, mono=True) - f0, _, _ = librosa.pyin(y, sr=self.sr, fmin=f0_min, fmax=f0_max) - f0 = f0[1:] # Get rid of extra first frame - return f0 - - # Fork Feature: Acquire median hybrid f0 estimation calculation - def get_f0_hybrid_computation( - self, - methods_str, - input_audio_path, - x, - f0_min, - f0_max, - p_len, - filter_radius, - crepe_hop_length, - time_step, - ): - # Get various f0 methods from input to use in the computation stack - s = methods_str - s = s.split("hybrid")[1] - s = s.replace("[", "").replace("]", "") - methods = s.split("+") - f0_computation_stack = [] - - print("Calculating f0 pitch estimations for methods: %s" % str(methods)) - x = x.astype(np.float32) - x /= np.quantile(np.abs(x), 0.999) - # Get f0 calculations for all methods specified - for method in methods: - f0 = None - if method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif method == "crepe": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max) - f0 = f0[1:] # Get rid of extra first frame - elif method == "crepe-tiny": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny") - f0 = f0[1:] # Get rid of extra first frame - elif method == "mangio-crepe": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length - ) - elif method == "mangio-crepe-tiny": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length, "tiny" - ) - elif method == "harvest": - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - f0 = f0[1:] # Get rid of 
first frame. - elif method == "dio": # Potentially buggy? - f0, t = pyworld.dio( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 = f0[1:] - # elif method == "pyin": Not Working just yet - # f0 = self.get_f0_pyin_computation(x, f0_min, f0_max) - # Push method to the stack - f0_computation_stack.append(f0) - - for fc in f0_computation_stack: - print(len(fc)) - - print("Calculating hybrid median f0 from the stack of: %s" % str(methods)) - f0_median_hybrid = None - if len(f0_computation_stack) == 1: - f0_median_hybrid = f0_computation_stack[0] - else: - f0_median_hybrid = np.nanmedian(f0_computation_stack, axis=0) - return f0_median_hybrid - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - crepe_hop_length, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "dio": # Potentially Buggy? 
- f0, t = pyworld.dio( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max) - elif f0_method == "crepe-tiny": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny") - elif f0_method == "mangio-crepe": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length - ) - elif f0_method == "mangio-crepe-tiny": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length, "tiny" - ) - elif f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from rmvpe import RMVPE - - print("loading rmvpe model") - self.model_rmvpe = RMVPE( - "rmvpe.pt", is_half=self.is_half, device=self.device - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - - elif "hybrid" in f0_method: - # Perform hybrid median pitch estimation - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = self.get_f0_hybrid_computation( - f0_method, - input_audio_path, - x, - f0_min, - f0_max, - p_len, - filter_radius, - crepe_hop_length, - time_step, - ) - - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - 
torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - crepe_hop_length, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - crepe_hop_length, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in 
opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/eaedk/Tuto_Sentiment_Analysis_App/app.py b/spaces/eaedk/Tuto_Sentiment_Analysis_App/app.py deleted file mode 100644 index ae51d0304156caa761d4ecfc379870b830026398..0000000000000000000000000000000000000000 --- a/spaces/eaedk/Tuto_Sentiment_Analysis_App/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import gradio as gr -from transformers import AutoModelForSequenceClassification -from transformers import AutoTokenizer, AutoConfig -import numpy as np -from scipy.special import softmax - -# Setup -model_path = f"GhylB/Sentiment_Analysis_DistilBERT" - -tokenizer = AutoTokenizer.from_pretrained(model_path) -config = AutoConfig.from_pretrained(model_path) -model = AutoModelForSequenceClassification.from_pretrained(model_path) - -# Functions - -# Preprocess text (username and link placeholders) - - -def preprocess(text): - new_text = [] - for t in text.split(" "): - t = '@user' if t.startswith('@') and len(t) > 1 else t - t = 'http' if t.startswith('http') else t - new_text.append(t) - return " ".join(new_text) - - -def sentiment_analysis(text): - text = preprocess(text) - - # PyTorch-based models - encoded_input = tokenizer(text, return_tensors='pt') - output = model(**encoded_input) - scores_ = output[0][0].detach().numpy() - scores_ = softmax(scores_) - - # Format output dict of scores - labels = ['Negative', 'Neutral', 'Positive'] - scores = {l: float(s) for (l, s) in zip(labels, scores_)} - - return scores - - -demo = gr.Interface( - fn=sentiment_analysis, - inputs=gr.Textbox(placeholder="Copy and paste/Write a tweet here..."), - outputs="text", - interpretation="default", - examples=[["What's up with the vaccine"], - ["Covid cases are increasing fast!"], - ["Covid has been invented by Mavis"], - ["I'm going to party this weekend"], - ["Covid is hoax"]], - title="Tutorial : Sentiment Analysis App", - 
description="This Application assesses if a twitter post relating to vaccinations is positive, neutral, or negative.", ) - -if __name__ == "__main__": - demo.launch(server_name="0.0.0.0", server_port=7860) # 8080 __ diff --git a/spaces/edugp/perplexity-lenses/perplexity_lenses/data.py b/spaces/edugp/perplexity-lenses/perplexity_lenses/data.py deleted file mode 100644 index 778749de20cde07e61569deecaf6d3519b718ad4..0000000000000000000000000000000000000000 --- a/spaces/edugp/perplexity-lenses/perplexity_lenses/data.py +++ /dev/null @@ -1,81 +0,0 @@ -from functools import partial - -import numpy as np -import pandas as pd -from datasets import load_dataset -from tqdm import tqdm - -from perplexity_lenses import REGISTRY_DATASET -from perplexity_lenses.perplexity import KenlmModel - - -def hub_dataset_to_dataframe( - path: str, - name: str, - split: str, - sample: int, - text_column: str, - model: KenlmModel, - seed: int = 0, - doc_type: str = "Whole document", -) -> pd.DataFrame: - load_dataset_fn = partial(load_dataset, path=path) - if name: - load_dataset_fn = partial(load_dataset_fn, name=name) - # Special case for the registry dataset - if path == REGISTRY_DATASET: - load_dataset_fn = partial(load_dataset_fn, data_files=f"{name}/*") - if split: - load_dataset_fn = partial(load_dataset_fn, split=split) - dataset = load_dataset_fn(streaming=True).shuffle(buffer_size=10000, seed=seed) - if doc_type.lower() == "sentence": - dataset = dataset.map( - lambda x: [ - { - text_column: sentence, - "perplexity": model.get_perplexity(sentence), - "label": x.get("labels", [])[0] - if len(x.get("labels", [])) > 0 - else "NONE", # Special case for registry dataset - } - for sentence in x[text_column].split("\n") - ] - ) - else: - dataset = dataset.map( - lambda x: { - text_column: x[text_column], - "perplexity": model.get_perplexity(x[text_column]), - "label": x.get("labels", [])[0] - if len(x.get("labels", [])) > 0 - else "NONE", # Special case for registry dataset - } - ) - instances = [] - count = 0 - for instance in tqdm(dataset, total=sample): - if isinstance(instance, list): - for sentence in instance: - instances.append(sentence) - count += 1 - if count == sample: - break - else: - instances.append(instance) - count += 1 - if count == sample: - break - return pd.DataFrame(instances) - - -def documents_df_to_sentences_df( - df: pd.DataFrame, text_column: str, sample: int, seed: int = 0 -): - df_sentences = pd.DataFrame( - { - text_column: np.array( - df[text_column].map(lambda x: x.split("\n")).values.tolist() - ).flatten() - } - ) - return df_sentences.sample(min(sample, df_sentences.shape[0]), random_state=seed) diff --git a/spaces/epexVfeibi/Imagedeblurr/3d Girlz 2 Free Download.md b/spaces/epexVfeibi/Imagedeblurr/3d Girlz 2 Free Download.md deleted file mode 100644 index 281bb63389cd4ae8e871898c6c1d14c8643abafa..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/3d Girlz 2 Free Download.md +++ /dev/null @@ -1,9 +0,0 @@ - -

    they are all here in the best virtual 3d girlz forever game. and if you're looking for the best 3d girlz forever out there, this is it. so, why are you waiting? get this game now and experience all the virtual 3d sex that you can handle.

    -

    3d girlz 2 free download


    Download File ✔✔✔ https://jinyurl.com/2uErq4



    -

    3d girlz forever has a good storyline, a good set of characters, a good 3d girlz forever engine, graphics, animation, and sex positions. the only bad thing is you can only bang your 3d slave girl once, but if you can't wait, 3d girlz forever is available for your computer to download now.

    -

    3d girlz forever is one of the most popular virtual 3d sex games around. it's the best virtual 3d girlz forever game out there and will give you the best virtual 3d girlz forever experience. if you're ready to experience virtual 3d sex like never before, then 3d girlz forever is your game. so, why are you still reading? get this game now and experience virtual 3d sex like never before.

    -

    its a virtual reality porn game where you can enter in the world of 3d with your own avatar. there are plenty of anime porn pics to browse through and you can do it in the hot anime sex with your favorite virtual girl. the 3d girlz forever fucking simulation offers a lot of options that allow you to select a girl to play with and choose your own sex scene. the game is packed with sexy content and action but it does not have any story line so you have to be patient. you can watch 3d girlz 2 video trailer to know more about the game and its features. your virtual porn is waiting for you!

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/f2api/gpt-academic/docs/README.md.Korean.md b/spaces/f2api/gpt-academic/docs/README.md.Korean.md deleted file mode 100644 index d94aaf1ac9ef5bc4699d3edf9b4b04733ef0eb92..0000000000000000000000000000000000000000 --- a/spaces/f2api/gpt-academic/docs/README.md.Korean.md +++ /dev/null @@ -1,268 +0,0 @@ -> **노트** -> -> 의존성을 설치할 때는 반드시 requirements.txt에서 **지정된 버전**을 엄격하게 선택하십시오. -> -> `pip install -r requirements.txt` - -# GPT 학술 최적화 (GPT Academic) - -**이 프로젝트가 마음에 드신다면 Star를 주세요. 추가로 유용한 학술 단축키나 기능 플러그인이 있다면 이슈나 pull request를 남기세요. 이 프로젝트에 대한 [영어 |](docs/README_EN.md)[일본어 |](docs/README_JP.md)[한국어 |](https://github.com/mldljyh/ko_gpt_academic)[러시아어 |](docs/README_RS.md)[프랑스어](docs/README_FR.md)로 된 README도 있습니다. -GPT를 이용하여 프로젝트를 임의의 언어로 번역하려면 [`multi_language.py`](multi_language.py)를 읽고 실행하십시오. (실험적) - -> **노트** -> -> 1. 파일을 읽기 위해 **빨간색**으로 표시된 기능 플러그인 (버튼) 만 지원됩니다. 일부 플러그인은 플러그인 영역의 **드롭다운 메뉴**에 있습니다. 또한 새로운 플러그인은 **가장 높은 우선순위**로 환영하며 처리합니다! -> -> 2. 이 프로젝트의 각 파일의 기능을 [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)에서 자세히 설명합니다. 버전이 업데이트 됨에 따라 관련된 기능 플러그인을 클릭하고 GPT를 호출하여 프로젝트의 자체 분석 보고서를 다시 생성할 수도 있습니다. 자주 묻는 질문은 [`위키`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)에서 볼 수 있습니다. [설치 방법](#installation). -> -> 3. 이 프로젝트는 국내 언어 모델 chatglm과 RWKV, 판고 등의 시도와 호환 가능합니다. 여러 개의 api-key를 지원하며 설정 파일에 "API_KEY="openai-key1,openai-key2,api2d-key3""와 같이 작성할 수 있습니다. `API_KEY`를 임시로 변경해야하는 경우 입력 영역에 임시 `API_KEY`를 입력 한 후 엔터 키를 누르면 즉시 적용됩니다. - -
    기능 | 설명 ---- | --- -원 키워드 | 원 키워드 및 논문 문법 오류를 찾는 기능 지원 -한-영 키워드 | 한-영 키워드 지원 -코드 설명 | 코드 표시, 코드 설명, 코드 생성, 코드에 주석 추가 -[사용자 정의 바로 가기 키](https://www.bilibili.com/video/BV14s4y1E7jN) | 사용자 정의 바로 가기 키 지원 -모듈식 설계 | 강력한[함수 플러그인](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions) 지원, 플러그인이 [램 업데이트](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)를 지원합니다. -[자체 프로그램 분석](https://www.bilibili.com/video/BV1cj411A7VW) | [함수 플러그인] [원 키 우드] 프로젝트 소스 코드의 내용을 이해하는 기능을 제공 -[프로그램 분석](https://www.bilibili.com/video/BV1cj411A7VW) | [함수 플러그인] 프로젝트 트리를 분석할 수 있습니다 (Python/C/C++/Java/Lua/...) -논문 읽기, 번역 | [함수 플러그인] LaTex/PDF 논문의 전문을 읽고 요약을 생성합니다. -LaTeX 텍스트[번역](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [원 키워드](https://www.bilibili.com/video/BV1FT411H7c5/) | [함수 플러그인] LaTeX 논문의 번역 또는 개량을 위해 일련의 모드를 번역할 수 있습니다. -대량의 주석 생성 | [함수 플러그인] 함수 코멘트를 대량으로 생성할 수 있습니다. -Markdown 한-영 번역 | [함수 플러그인] 위의 5 종 언어의 [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)를 볼 수 있습니다. -chat 분석 보고서 생성 | [함수 플러그인] 수행 후 요약 보고서를 자동으로 생성합니다. -[PDF 논문 번역](https://www.bilibili.com/video/BV1KT411x7Wn) | [함수 플러그인] PDF 논문이 제목 및 요약을 추출한 후 번역됩니다. (멀티 스레드) -[Arxiv 도우미](https://www.bilibili.com/video/BV1LM4y1279X) | [함수 플러그인] Arxiv 논문 URL을 입력하면 요약을 번역하고 PDF를 다운로드 할 수 있습니다. -[Google Scholar 통합 도우미](https://www.bilibili.com/video/BV19L411U7ia) | [함수 플러그인] Google Scholar 검색 페이지 URL을 제공하면 gpt가 [Related Works 작성](https://www.bilibili.com/video/BV1GP411U7Az/)을 도와줍니다. -인터넷 정보 집계+GPT | [함수 플러그인] 먼저 GPT가 인터넷에서 정보를 수집하고 질문에 대답 할 수 있도록합니다. 정보가 절대적으로 구식이 아닙니다. -수식/이미지/표 표시 | 급여, 코드 강조 기능 지원 -멀티 스레드 함수 플러그인 지원 | Chatgpt를 여러 요청에서 실행하여 [대량의 텍스트](https://www.bilibili.com/video/BV1FT411H7c5/) 또는 프로그램을 처리 할 수 있습니다. -다크 그라디오 테마 시작 | 어둡게 주제를 변경하려면 브라우저 URL 끝에 ```/?__theme=dark```을 추가하면됩니다. -[다중 LLM 모델](https://www.bilibili.com/video/BV1wT411p7yf) 지원, [API2D](https://api2d.com/) 인터페이스 지원됨 | GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS)가 모두 동시에 작동하는 것처럼 느낄 수 있습니다! -LLM 모델 추가 및[huggingface 배치](https://huggingface.co/spaces/qingxu98/gpt-academic) 지원 | 새 Bing 인터페이스 (새 Bing) 추가, Clearing House [Jittorllms](https://github.com/Jittor/JittorLLMs) 지원 [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) 및 [盘古α](https://openi.org.cn/pangu/) -기타 새로운 기능 (이미지 생성 등) ... | 이 문서의 끝부분을 참조하세요. ...- 모든 버튼은 functional.py를 동적으로 읽어와서 사용자 정의 기능을 자유롭게 추가할 수 있으며, 클립 보드를 해제합니다. -
    - -
    - -- 검수/오타 교정 -
    - -
    - -- 출력에 수식이 포함되어 있으면 텍스와 렌더링의 형태로 동시에 표시되어 복사 및 읽기가 용이합니다. -
    - -
    - -- 프로젝트 코드를 볼 시간이 없습니까? 전체 프로젝트를 chatgpt에 직접 표시하십시오 -
    - -
    - -- 다양한 대형 언어 모델 범용 요청 (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4) -
    - -
    - ---- -# 설치 -## Installation-Method 1: Run directly (Windows, Linux or MacOS) - -1. 프로젝트 다운로드 -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. API_KEY 구성 - -`config.py`에서 API KEY 등 설정을 구성합니다. [특별한 네트워크 환경 설정](https://github.com/binary-husky/gpt_academic/issues/1) . - -(P.S. 프로그램이 실행될 때, 이름이 `config_private.py`인 기밀 설정 파일이 있는지 우선적으로 확인하고 해당 설정으로 `config.py`의 동일한 이름의 설정을 덮어씁니다. 따라서 구성 읽기 논리를 이해할 수 있다면, `config.py` 옆에 `config_private.py`라는 새 구성 파일을 만들고 `config.py`의 구성을 `config_private.py`로 이동(복사)하는 것이 좋습니다. `config_private.py`는 git으로 관리되지 않으며 개인 정보를 더 안전하게 보호할 수 있습니다. P.S. 프로젝트는 또한 대부분의 옵션을 `환경 변수`를 통해 설정할 수 있으며, `docker-compose` 파일을 참조하여 환경 변수 작성 형식을 확인할 수 있습니다. 우선순위: `환경 변수` > `config_private.py` > `config.py`) - - -3. 의존성 설치 -```sh -# (I 선택: 기존 python 경험이 있다면) (python 버전 3.9 이상, 최신 버전이 좋습니다), 참고: 공식 pip 소스 또는 알리 pip 소스 사용, 일시적인 교체 방법: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -python -m pip install -r requirements.txt - -# (II 선택: Python에 익숙하지 않은 경우) anaconda 사용 방법은 비슷함(https://www.bilibili.com/video/BV1rc411W7Dr): -conda create -n gptac_venv python=3.11 # anaconda 환경 만들기 -conda activate gptac_venv # anaconda 환경 활성화 -python -m pip install -r requirements.txt # 이 단계도 pip install의 단계와 동일합니다. -``` - -
    추가지원을 위해 Tsinghua ChatGLM / Fudan MOSS를 사용해야하는 경우 지원을 클릭하여 이 부분을 확장하세요. -

    - -[Tsinghua ChatGLM] / [Fudan MOSS]를 백엔드로 사용하려면 추가적인 종속성을 설치해야합니다 (전제 조건 : Python을 이해하고 Pytorch를 사용한 적이 있으며, 컴퓨터가 충분히 강력한 경우) : -```sh -# [선택 사항 I] Tsinghua ChatGLM을 지원합니다. Tsinghua ChatGLM에 대한 참고사항 : "Call ChatGLM fail cannot load ChatGLM parameters normally" 오류 발생시 다음 참조: -# 1 : 기본 설치된 것들은 torch + cpu 버전입니다. cuda를 사용하려면 torch를 제거한 다음 torch + cuda를 다시 설치해야합니다. -# 2 : 모델을 로드할 수 없는 기계 구성 때문에, AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)를 -# AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)로 변경합니다. -python -m pip install -r request_llm/requirements_chatglm.txt - -# [선택 사항 II] Fudan MOSS 지원 -python -m pip install -r request_llm/requirements_moss.txt -git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # 다음 코드 줄을 실행할 때 프로젝트 루트 경로에 있어야합니다. - -# [선택 사항III] AVAIL_LLM_MODELS config.py 구성 파일에 기대하는 모델이 포함되어 있는지 확인하십시오. -# 현재 지원되는 전체 모델 : -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"] -``` - -

    -
    - - - -4. 실행 -```sh -python main.py -```5. 테스트 함수 플러그인 -``` -- 테스트 함수 플러그인 템플릿 함수 (GPT에게 오늘의 역사에서 무슨 일이 일어났는지 대답하도록 요청)를 구현하는 데 사용할 수 있습니다. 이 함수를 기반으로 더 복잡한 기능을 구현할 수 있습니다. - "[함수 플러그인 템플릿 데모] 오늘의 역사"를 클릭하세요. -``` - -## 설치 - 방법 2 : 도커 사용 - -1. ChatGPT 만 (대부분의 사람들이 선택하는 것을 권장합니다.) - -``` sh -git clone https://github.com/binary-husky/chatgpt_academic.git # 다운로드 -cd chatgpt_academic # 경로 이동 -nano config.py # 아무 텍스트 에디터로 config.py를 열고 "Proxy","API_KEY","WEB_PORT" (예 : 50923) 등을 구성합니다. -docker build -t gpt-academic . # 설치 - -#(마지막 단계-1 선택) Linux 환경에서는 --net=host를 사용하면 더 편리합니다. -docker run --rm -it --net=host gpt-academic -#(마지막 단계-2 선택) macOS / windows 환경에서는 -p 옵션을 사용하여 컨테이너의 포트 (예 : 50923)를 호스트의 포트로 노출해야합니다. -docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic -``` - -2. ChatGPT + ChatGLM + MOSS (Docker에 익숙해야합니다.) - -``` sh -#docker-compose.yml을 수정하여 계획 1 및 계획 3을 삭제하고 계획 2를 유지합니다. docker-compose.yml에서 계획 2의 구성을 수정하면 됩니다. 주석을 참조하십시오. -docker-compose up -``` - -3. ChatGPT + LLAMA + Pangu + RWKV (Docker에 익숙해야합니다.) -``` sh -#docker-compose.yml을 수정하여 계획 1 및 계획 2을 삭제하고 계획 3을 유지합니다. docker-compose.yml에서 계획 3의 구성을 수정하면 됩니다. 주석을 참조하십시오. -docker-compose up -``` - - -## 설치 - 방법 3 : 다른 배치 방법 - -1. 리버스 프록시 URL / Microsoft Azure API 사용 방법 -API_URL_REDIRECT를 `config.py`에 따라 구성하면됩니다. - -2. 원격 클라우드 서버 배치 (클라우드 서버 지식과 경험이 필요합니다.) -[배치위키-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)에 방문하십시오. - -3. WSL2 사용 (Windows Subsystem for Linux 하위 시스템) -[배치 위키-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)에 방문하십시오. - -4. 2 차 URL (예 : `http : //localhost/subpath`)에서 실행하는 방법 -[FastAPI 실행 설명서] (docs / WithFastapi.md)를 참조하십시오. - -5. docker-compose 실행 -docker-compose.yml을 읽은 후 지시 사항에 따라 작업하십시오. ---- -# 고급 사용법 -## 사용자 정의 바로 가기 버튼 / 사용자 정의 함수 플러그인 - -1. 사용자 정의 바로 가기 버튼 (학술 바로 가기) -임의의 텍스트 편집기로 'core_functional.py'를 엽니다. 엔트리 추가, 그런 다음 프로그램을 다시 시작하면됩니다. (버튼이 이미 추가되어 보이고 접두사, 접미사가 모두 변수가 효과적으로 수정되면 프로그램을 다시 시작하지 않아도됩니다.) -예 : -``` -"超级英译中": { - # 접두사. 당신이 요구하는 것을 설명하는 데 사용됩니다. 예를 들어 번역, 코드를 설명, 다듬기 등 - "Prefix": "下面翻译成中文,然后用一个 markdown 表格逐一解释文中出现的专有名词:\n\n", - - # 접미사는 입력 내용 앞뒤에 추가됩니다. 예를 들어 전위를 사용하여 입력 내용을 따옴표로 묶는데 사용할 수 있습니다. - "Suffix": "", -}, -``` -
    - -
    - -2. 사용자 지정 함수 플러그인 -강력한 함수 플러그인을 작성하여 원하는 작업을 수행하십시오. -이 프로젝트의 플러그인 작성 및 디버깅 난이도는 매우 낮으며, 일부 파이썬 기본 지식만 있으면 제공된 템플릿을 모방하여 플러그인 기능을 구현할 수 있습니다. 자세한 내용은 [함수 플러그인 가이드]를 참조하십시오. (https://github.com/binary -husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E 4%BB%B6%E6%8C%87%E5%8D%97). ---- -# 최신 업데이트 -## 새로운 기능 동향1. 대화 저장 기능. - -1. 함수 플러그인 영역에서 '현재 대화 저장'을 호출하면 현재 대화를 읽을 수 있고 복원 가능한 HTML 파일로 저장할 수 있습니다. 또한 함수 플러그인 영역(드롭다운 메뉴)에서 '대화 기록 불러오기'를 호출하면 이전 대화를 복원할 수 있습니다. 팁: 파일을 지정하지 않고 '대화 기록 불러오기'를 클릭하면 기록된 HTML 캐시를 볼 수 있으며 '모든 로컬 대화 기록 삭제'를 클릭하면 모든 HTML 캐시를 삭제할 수 있습니다. - -2. 보고서 생성. 대부분의 플러그인은 실행이 끝난 후 작업 보고서를 생성합니다. - -3. 모듈화 기능 설계, 간단한 인터페이스로도 강력한 기능을 지원할 수 있습니다. - -4. 자체 번역이 가능한 오픈 소스 프로젝트입니다. - -5. 다른 오픈 소스 프로젝트를 번역하는 것은 어렵지 않습니다. - -6. [live2d](https://github.com/fghrsh/live2d_demo) 장식 기능(기본적으로 비활성화되어 있으며 `config.py`를 수정해야 합니다.) - -7. MOSS 대 언어 모델 지원 추가 - -8. OpenAI 이미지 생성 - -9. OpenAI 음성 분석 및 요약 - -10. LaTeX 전체적인 교정 및 오류 수정 - -## 버전: -- version 3.5 (TODO): 자연어를 사용하여 이 프로젝트의 모든 함수 플러그인을 호출하는 기능(우선순위 높음) -- version 3.4(TODO): 로컬 대 모듈의 다중 스레드 지원 향상 -- version 3.3: 인터넷 정보 종합 기능 추가 -- version 3.2: 함수 플러그인이 더 많은 인수 인터페이스를 지원합니다.(대화 저장 기능, 임의의 언어 코드 해석 및 동시에 임의의 LLM 조합을 확인하는 기능) -- version 3.1: 여러 개의 GPT 모델에 대한 동시 쿼리 지원! api2d 지원, 여러 개의 apikey 로드 밸런싱 지원 -- version 3.0: chatglm 및 기타 소형 llm의 지원 -- version 2.6: 플러그인 구조를 재구성하여 상호 작용성을 향상시켰습니다. 더 많은 플러그인을 추가했습니다. -- version 2.5: 자체 업데이트, 전체 프로젝트를 요약할 때 텍스트가 너무 길어지고 토큰이 오버플로우되는 문제를 해결했습니다. -- version 2.4: (1) PDF 전체 번역 기능 추가; (2) 입력 영역 위치 전환 기능 추가; (3) 수직 레이아웃 옵션 추가; (4) 다중 스레드 함수 플러그인 최적화. -- version 2.3: 다중 스레드 상호 작용성 강화 -- version 2.2: 함수 플러그인 히트 리로드 지원 -- version 2.1: 접는 레이아웃 지원 -- version 2.0: 모듈화 함수 플러그인 도입 -- version 1.0: 기본 기능 - -gpt_academic 개발자 QQ 그룹-2 : 610599535 - -- 알려진 문제 - - 일부 브라우저 번역 플러그인이이 소프트웨어의 프론트 엔드 작동 방식을 방해합니다. - - gradio 버전이 너무 높거나 낮으면 여러 가지 이상이 발생할 수 있습니다. - -## 참고 및 학습 자료 - -``` -많은 우수 프로젝트의 디자인을 참고했습니다. 주요 항목은 다음과 같습니다. 
- -# 프로젝트 1 : Tsinghua ChatGLM-6B : -https://github.com/THUDM/ChatGLM-6B - -# 프로젝트 2 : Tsinghua JittorLLMs: -https://github.com/Jittor/JittorLLMs - -# 프로젝트 3 : Edge-GPT : -https://github.com/acheong08/EdgeGPT - -# 프로젝트 4 : ChuanhuChatGPT: -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# 프로젝트 5 : ChatPaper : -https://github.com/kaixindelele/ChatPaper - -# 더 많은 : -https://github.com/gradio-app/gradio -https://github.com/fghrsh/live2d_demo -``` \ No newline at end of file diff --git a/spaces/f2api/gpt-academic/docs/self_analysis.md b/spaces/f2api/gpt-academic/docs/self_analysis.md deleted file mode 100644 index ebc2337194974bf210794df7d858889010fecf08..0000000000000000000000000000000000000000 --- a/spaces/f2api/gpt-academic/docs/self_analysis.md +++ /dev/null @@ -1,378 +0,0 @@ -# chatgpt-academic项目自译解报告 -(Author补充:以下分析均由本项目调用ChatGPT一键生成,如果有不准确的地方,全怪GPT😄) - - -| 文件名 | 功能描述 | -| ------ | ------ | -| check_proxy.py | 检查代理有效性及地理位置 | -| colorful.py | 控制台打印彩色文字 | -| config.py | 配置和参数设置 | -| config_private.py | 私人配置和参数设置 | -| core_functional.py | 核心函数和参数设置 | -| crazy_functional.py | 高级功能插件集合 | -| main.py | 一个 Chatbot 程序,提供各种学术翻译、文本处理和其他查询服务 | -| multi_language.py | 识别和翻译不同语言 | -| theme.py | 自定义 gradio 应用程序主题 | -| toolbox.py | 工具类库,用于协助实现各种功能 | -| crazy_functions\crazy_functions_test.py | 测试 crazy_functions 中的各种函数 | -| crazy_functions\crazy_utils.py | 工具函数,用于字符串处理、异常检测、Markdown 格式转换等 | -| crazy_functions\Latex全文润色.py | 对整个 Latex 项目进行润色和纠错 | -| crazy_functions\Latex全文翻译.py | 对整个 Latex 项目进行翻译 | -| crazy_functions\\_\_init\_\_.py | 模块初始化文件,标识 `crazy_functions` 是一个包 | -| crazy_functions\下载arxiv论文翻译摘要.py | 下载 `arxiv` 论文的 PDF 文件,并提取摘要和翻译 | -| crazy_functions\代码重写为全英文_多线程.py | 将Python源代码文件中的中文内容转化为英文 | -| crazy_functions\图片生成.py | 根据激励文本使用GPT模型生成相应的图像 | -| crazy_functions\对话历史存档.py | 将每次对话记录写入Markdown格式的文件中 | -| crazy_functions\总结word文档.py | 对输入的word文档进行摘要生成 | -| crazy_functions\总结音视频.py | 对输入的音视频文件进行摘要生成 | -| crazy_functions\批量Markdown翻译.py | 将指定目录下的Markdown文件进行中英文翻译 | -| crazy_functions\批量总结PDF文档.py | 对PDF文件进行切割和摘要生成 | -| crazy_functions\批量总结PDF文档pdfminer.py | 对PDF文件进行文本内容的提取和摘要生成 | -| crazy_functions\批量翻译PDF文档_多线程.py | 将指定目录下的PDF文件进行中英文翻译 | -| crazy_functions\理解PDF文档内容.py | 对PDF文件进行摘要生成和问题解答 | -| crazy_functions\生成函数注释.py | 自动生成Python函数的注释 | -| crazy_functions\联网的ChatGPT.py | 使用网络爬虫和ChatGPT模型进行聊天回答 | -| crazy_functions\解析JupyterNotebook.py | 对Jupyter Notebook进行代码解析 | -| crazy_functions\解析项目源代码.py | 对指定编程语言的源代码进行解析 | -| crazy_functions\询问多个大语言模型.py | 使用多个大语言模型对输入进行处理和回复 | -| crazy_functions\读文章写摘要.py | 对论文进行解析和全文摘要生成 | -| crazy_functions\谷歌检索小助手.py | 提供谷歌学术搜索页面中相关文章的元数据信息。 | -| crazy_functions\高级功能函数模板.py | 使用Unsplash API发送相关图片以回复用户的输入。 | -| request_llm\bridge_all.py | 基于不同LLM模型进行对话。 | -| request_llm\bridge_chatglm.py | 使用ChatGLM模型生成回复,支持单线程和多线程方式。 | -| request_llm\bridge_chatgpt.py | 基于GPT模型完成对话。 | -| request_llm\bridge_jittorllms_llama.py | 使用JittorLLMs模型完成对话,支持单线程和多线程方式。 | -| request_llm\bridge_jittorllms_pangualpha.py | 使用JittorLLMs模型完成对话,基于多进程和多线程方式。 | -| request_llm\bridge_jittorllms_rwkv.py | 使用JittorLLMs模型完成聊天功能,提供包括历史信息、参数调节等在内的多个功能选项。 | -| request_llm\bridge_moss.py | 加载Moss模型完成对话功能。 | -| request_llm\bridge_newbing.py | 使用Newbing聊天机器人进行对话,支持单线程和多线程方式。 | -| request_llm\bridge_newbingfree.py | 基于Bing chatbot API实现聊天机器人的文本生成功能。 | -| request_llm\bridge_stackclaude.py | 基于Slack API实现Claude与用户的交互。 | -| request_llm\bridge_tgui.py | 通过websocket实现聊天机器人与UI界面交互。 | -| request_llm\edge_gpt.py | 调用Bing chatbot API提供聊天机器人服务。 | -| request_llm\edge_gpt_free.py | 实现聊天机器人API,采用aiohttp和httpx工具库。 | -| 
request_llm\test_llms.py | 对llm模型进行单元测试。 | - -## 接下来请你逐文件分析下面的工程[0/48] 请对下面的程序文件做一个概述: check_proxy.py - -这个文件主要包含了五个函数: - -1. `check_proxy`:用于检查代理的有效性及地理位置,输出代理配置和所在地信息。 - -2. `backup_and_download`:用于备份当前版本并下载新版本。 - -3. `patch_and_restart`:用于覆盖更新当前版本并重新启动程序。 - -4. `get_current_version`:用于获取当前程序的版本号。 - -5. `auto_update`:用于自动检查新版本并提示用户更新。如果用户选择更新,则备份并下载新版本,覆盖更新当前版本并重新启动程序。如果更新失败,则输出错误信息,并不会向用户进行任何提示。 - -还有一个没有函数名的语句`os.environ['no_proxy'] = '*'`,用于设置环境变量,避免代理网络产生意外污染。 - -此外,该文件导入了以下三个模块/函数: - -- `requests` -- `shutil` -- `os` - -## [1/48] 请对下面的程序文件做一个概述: colorful.py - -该文件是一个Python脚本,用于在控制台中打印彩色文字。该文件包含了一些函数,用于以不同颜色打印文本。其中,红色、绿色、黄色、蓝色、紫色、靛色分别以函数 print红、print绿、print黄、print蓝、print紫、print靛 的形式定义;亮红色、亮绿色、亮黄色、亮蓝色、亮紫色、亮靛色分别以 print亮红、print亮绿、print亮黄、print亮蓝、print亮紫、print亮靛 的形式定义。它们使用 ANSI Escape Code 将彩色输出从控制台突出显示。如果运行在 Linux 操作系统上,文件所执行的操作被留空;否则,该文件导入了 colorama 库并调用 init() 函数进行初始化。最后,通过一系列条件语句,该文件通过将所有彩色输出函数的名称重新赋值为 print 函数的名称来避免输出文件的颜色问题。 - -## [2/48] 请对下面的程序文件做一个概述: config.py - -这个程序文件是用来配置和参数设置的。它包含了许多设置,如API key,使用代理,线程数,默认模型,超时时间等等。此外,它还包含了一些高级功能,如URL重定向等。这些设置将会影响到程序的行为和性能。 - -## [3/48] 请对下面的程序文件做一个概述: config_private.py - -这个程序文件是一个Python脚本,文件名为config_private.py。其中包含以下变量的赋值: - -1. API_KEY:API密钥。 -2. USE_PROXY:是否应用代理。 -3. proxies:如果使用代理,则设置代理网络的协议(socks5/http)、地址(localhost)和端口(11284)。 -4. DEFAULT_WORKER_NUM:默认的工作线程数量。 -5. SLACK_CLAUDE_BOT_ID:Slack机器人ID。 -6. SLACK_CLAUDE_USER_TOKEN:Slack用户令牌。 - -## [4/48] 请对下面的程序文件做一个概述: core_functional.py - -这是一个名为core_functional.py的源代码文件,该文件定义了一个名为get_core_functions()的函数,该函数返回一个字典,该字典包含了各种学术翻译润色任务的说明和相关参数,如颜色、前缀、后缀等。这些任务包括英语学术润色、中文学术润色、查找语法错误、中译英、学术中英互译、英译中、找图片和参考文献转Bib。其中,一些任务还定义了预处理函数用于处理任务的输入文本。 - -## [5/48] 请对下面的程序文件做一个概述: crazy_functional.py - -此程序文件(crazy_functional.py)是一个函数插件集合,包含了多个函数插件的定义和调用。这些函数插件旨在提供一些高级功能,如解析项目源代码、批量翻译PDF文档和Latex全文润色等。其中一些插件还支持热更新功能,不需要重启程序即可生效。文件中的函数插件按照功能进行了分类(第一组和第二组),并且有不同的调用方式(作为按钮或下拉菜单)。 - -## [6/48] 请对下面的程序文件做一个概述: main.py - -这是一个Python程序文件,文件名为main.py。该程序包含一个名为main的函数,程序会自动运行该函数。程序要求已经安装了gradio、os等模块,会根据配置文件加载代理、model、API Key等信息。程序提供了Chatbot功能,实现了一个对话界面,用户可以输入问题,然后Chatbot可以回答问题或者提供相关功能。程序还包含了基础功能区、函数插件区、更换模型 & SysPrompt & 交互界面布局、备选输入区,用户可以在这些区域选择功能和插件进行使用。程序中还包含了一些辅助模块,如logging等。 - -## [7/48] 请对下面的程序文件做一个概述: multi_language.py - -该文件multi_language.py是用于将项目翻译成不同语言的程序。它包含了以下函数和变量:lru_file_cache、contains_chinese、split_list、map_to_json、read_map_from_json、advanced_split、trans、trans_json、step_1_core_key_translate、CACHE_FOLDER、blacklist、LANG、TransPrompt、cached_translation等。注释和文档字符串提供了有关程序的说明,例如如何使用该程序,如何修改“LANG”和“TransPrompt”变量等。 - -## [8/48] 请对下面的程序文件做一个概述: theme.py - -这是一个Python源代码文件,文件名为theme.py。此文件中定义了一个函数adjust_theme,其功能是自定义gradio应用程序的主题,包括调整颜色、字体、阴影等。如果允许,则添加一个看板娘。此文件还包括变量advanced_css,其中包含一些CSS样式,用于高亮显示代码和自定义聊天框样式。此文件还导入了get_conf函数和gradio库。 - -## [9/48] 请对下面的程序文件做一个概述: toolbox.py - -toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和小工具函数,用于协助实现聊天机器人所需的各种功能,包括文本处理、功能插件加载、异常检测、Markdown格式转换,文件读写等等。此外,该库还包含一些依赖、参数配置等信息。该库易于理解和维护。 - -## [10/48] 请对下面的程序文件做一个概述: crazy_functions\crazy_functions_test.py - -这个文件是一个Python测试模块,用于测试crazy_functions中的各种函数插件。这些函数包括:解析Python项目源代码、解析Cpp项目源代码、Latex全文润色、Markdown中译英、批量翻译PDF文档、谷歌检索小助手、总结word文档、下载arxiv论文并翻译摘要、联网回答问题、和解析Jupyter Notebooks。对于每个函数插件,都有一个对应的测试函数来进行测试。 - -## [11/48] 请对下面的程序文件做一个概述: crazy_functions\crazy_utils.py - -这个Python文件中包括了两个函数: - -1. `input_clipping`: 该函数用于裁剪输入文本长度,使其不超过一定的限制。 -2. 
`request_gpt_model_in_new_thread_with_ui_alive`: 该函数用于请求 GPT 模型并保持用户界面的响应,支持多线程和实时更新用户界面。 - -这两个函数都依赖于从 `toolbox` 和 `request_llm` 中导入的一些工具函数。函数的输入和输出有详细的描述文档。 - -## [12/48] 请对下面的程序文件做一个概述: crazy_functions\Latex全文润色.py - -这是一个Python程序文件,文件名为crazy_functions\Latex全文润色.py。文件包含了一个PaperFileGroup类和三个函数Latex英文润色,Latex中文润色和Latex英文纠错。程序使用了字符串处理、正则表达式、文件读写、多线程等技术,主要作用是对整个Latex项目进行润色和纠错。其中润色和纠错涉及到了对文本的语法、清晰度和整体可读性等方面的提升。此外,该程序还参考了第三方库,并封装了一些工具函数。 - -## [13/48] 请对下面的程序文件做一个概述: crazy_functions\Latex全文翻译.py - -这个文件包含两个函数 `Latex英译中` 和 `Latex中译英`,它们都会对整个Latex项目进行翻译。这个文件还包含一个类 `PaperFileGroup`,它拥有一个方法 `run_file_split`,用于把长文本文件分成多个短文件。其中使用了工具库 `toolbox` 中的一些函数和从 `request_llm` 中导入了 `model_info`。接下来的函数把文件读取进来,把它们的注释删除,进行分割,并进行翻译。这个文件还包括了一些异常处理和界面更新的操作。 - -## [14/48] 请对下面的程序文件做一个概述: crazy_functions\__init__.py - -这是一个Python模块的初始化文件(__init__.py),命名为"crazy_functions"。该模块包含了一些疯狂的函数,但该文件并没有实现这些函数,而是作为一个包(package)来导入其它的Python模块以实现这些函数。在该文件中,没有定义任何类或函数,它唯一的作用就是标识"crazy_functions"模块是一个包。 - -## [15/48] 请对下面的程序文件做一个概述: crazy_functions\下载arxiv论文翻译摘要.py - -这是一个 Python 程序文件,文件名为 `下载arxiv论文翻译摘要.py`。程序包含多个函数,其中 `下载arxiv论文并翻译摘要` 函数的作用是下载 `arxiv` 论文的 PDF 文件,提取摘要并使用 GPT 对其进行翻译。其他函数包括用于下载 `arxiv` 论文的 `download_arxiv_` 函数和用于获取文章信息的 `get_name` 函数,其中涉及使用第三方库如 requests, BeautifulSoup 等。该文件还包含一些用于调试和存储文件的代码段。 - -## [16/48] 请对下面的程序文件做一个概述: crazy_functions\代码重写为全英文_多线程.py - -该程序文件是一个多线程程序,主要功能是将指定目录下的所有Python代码文件中的中文内容转化为英文,并将转化后的代码存储到一个新的文件中。其中,程序使用了GPT-3等技术进行中文-英文的转化,同时也进行了一些Token限制下的处理,以防止程序发生错误。程序在执行过程中还会输出一些提示信息,并将所有转化过的代码文件存储到指定目录下。在程序执行结束后,还会生成一个任务执行报告,记录程序运行的详细信息。 - -## [17/48] 请对下面的程序文件做一个概述: crazy_functions\图片生成.py - -该程序文件提供了一个用于生成图像的函数`图片生成`。函数实现的过程中,会调用`gen_image`函数来生成图像,并返回图像生成的网址和本地文件地址。函数有多个参数,包括`prompt`(激励文本)、`llm_kwargs`(GPT模型的参数)、`plugin_kwargs`(插件模型的参数)等。函数核心代码使用了`requests`库向OpenAI API请求图像,并做了简单的处理和保存。函数还更新了交互界面,清空聊天历史并显示正在生成图像的消息和最终的图像网址和预览。 - -## [18/48] 请对下面的程序文件做一个概述: crazy_functions\对话历史存档.py - -这个文件是名为crazy_functions\对话历史存档.py的Python程序文件,包含了4个函数: - -1. write_chat_to_file(chatbot, history=None, file_name=None):用来将对话记录以Markdown格式写入文件中,并且生成文件名,如果没指定文件名则用当前时间。写入完成后将文件路径打印出来。 - -2. gen_file_preview(file_name):从传入的文件中读取内容,解析出对话历史记录并返回前100个字符,用于文件预览。 - -3. read_file_to_chat(chatbot, history, file_name):从传入的文件中读取内容,解析出对话历史记录并更新聊天显示框。 - -4. 
对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):一个主要函数,用于保存当前对话记录并提醒用户。如果用户希望加载历史记录,则调用read_file_to_chat()来更新聊天显示框。如果用户希望删除历史记录,调用删除所有本地对话历史记录()函数完成删除操作。 - -## [19/48] 请对下面的程序文件做一个概述: crazy_functions\总结word文档.py - -该程序文件实现了一个总结Word文档的功能,使用Python的docx库读取docx格式的文件,使用pywin32库读取doc格式的文件。程序会先根据传入的txt参数搜索需要处理的文件,并逐个解析其中的内容,将内容拆分为指定长度的文章片段,然后使用另一个程序文件中的request_gpt_model_in_new_thread_with_ui_alive函数进行中文概述。最后将所有的总结结果写入一个文件中,并在界面上进行展示。 - -## [20/48] 请对下面的程序文件做一个概述: crazy_functions\总结音视频.py - -该程序文件包括两个函数:split_audio_file()和AnalyAudio(),并且导入了一些必要的库并定义了一些工具函数。split_audio_file用于将音频文件分割成多个时长相等的片段,返回一个包含所有切割音频片段文件路径的列表,而AnalyAudio用来分析音频文件,通过调用whisper模型进行音频转文字并使用GPT模型对音频内容进行概述,最终将所有总结结果写入结果文件中。 - -## [21/48] 请对下面的程序文件做一个概述: crazy_functions\批量Markdown翻译.py - -该程序文件名为`批量Markdown翻译.py`,包含了以下功能:读取Markdown文件,将长文本分离开来,将Markdown文件进行翻译(英译中和中译英),整理结果并退出。程序使用了多线程以提高效率。程序使用了`tiktoken`依赖库,可能需要额外安装。文件中还有一些其他的函数和类,但与文件名所描述的功能无关。 - -## [22/48] 请对下面的程序文件做一个概述: crazy_functions\批量总结PDF文档.py - -该文件是一个Python脚本,名为crazy_functions\批量总结PDF文档.py。在导入了一系列库和工具函数后,主要定义了5个函数,其中包括一个错误处理装饰器(@CatchException),用于批量总结PDF文档。该函数主要实现对PDF文档的解析,并调用模型生成中英文摘要。 - -## [23/48] 请对下面的程序文件做一个概述: crazy_functions\批量总结PDF文档pdfminer.py - -该程序文件是一个用于批量总结PDF文档的函数插件,使用了pdfminer插件和BeautifulSoup库来提取PDF文档的文本内容,对每个PDF文件分别进行处理并生成中英文摘要。同时,该程序文件还包括一些辅助工具函数和处理异常的装饰器。 - -## [24/48] 请对下面的程序文件做一个概述: crazy_functions\批量翻译PDF文档_多线程.py - -这个程序文件是一个Python脚本,文件名为“批量翻译PDF文档_多线程.py”。它主要使用了“toolbox”、“request_gpt_model_in_new_thread_with_ui_alive”、“request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency”、“colorful”等Python库和自定义的模块“crazy_utils”的一些函数。程序实现了一个批量翻译PDF文档的功能,可以自动解析PDF文件中的基础信息,递归地切割PDF文件,翻译和处理PDF论文中的所有内容,并生成相应的翻译结果文件(包括md文件和html文件)。功能比较复杂,其中需要调用多个函数和依赖库,涉及到多线程操作和UI更新。文件中有详细的注释和变量命名,代码比较清晰易读。 - -## [25/48] 请对下面的程序文件做一个概述: crazy_functions\理解PDF文档内容.py - -该程序文件实现了一个名为“理解PDF文档内容”的函数,该函数可以为输入的PDF文件提取摘要以及正文各部分的主要内容,并在提取过程中根据上下文关系进行学术性问题解答。该函数依赖于多个辅助函数和第三方库,并在执行过程中针对可能出现的异常进行了处理。 - -## [26/48] 请对下面的程序文件做一个概述: crazy_functions\生成函数注释.py - -该程序文件是一个Python模块文件,文件名为“生成函数注释.py”,定义了两个函数:一个是生成函数注释的主函数“生成函数注释”,另一个是通过装饰器实现异常捕捉的函数“批量生成函数注释”。该程序文件依赖于“toolbox”和本地“crazy_utils”模块,并且在运行时使用了多线程技术和GPT模型来生成注释。函数生成的注释结果使用Markdown表格输出并写入历史记录文件。 - -## [27/48] 请对下面的程序文件做一个概述: crazy_functions\联网的ChatGPT.py - -这是一个名为`联网的ChatGPT.py`的Python程序文件,其中定义了一个函数`连接网络回答问题`。该函数通过爬取搜索引擎的结果和访问网页来综合回答给定的问题,并使用ChatGPT模型完成回答。此外,该文件还包括一些工具函数,例如从网页中抓取文本和使用代理访问网页。 - -## [28/48] 请对下面的程序文件做一个概述: crazy_functions\解析JupyterNotebook.py - -这个程序文件包含了两个函数: `parseNotebook()`和`解析ipynb文件()`,并且引入了一些工具函数和类。`parseNotebook()`函数将Jupyter Notebook文件解析为文本代码块,`解析ipynb文件()`函数则用于解析多个Jupyter Notebook文件,使用`parseNotebook()`解析每个文件和一些其他的处理。函数中使用了多线程处理输入和输出,并且将结果写入到文件中。 - -## [29/48] 请对下面的程序文件做一个概述: crazy_functions\解析项目源代码.py - -这是一个源代码分析的Python代码文件,其中定义了多个函数,包括解析一个Python项目、解析一个C项目、解析一个C项目的头文件和解析一个Java项目等。其中解析源代码新函数是实际处理源代码分析并生成报告的函数。该函数首先会逐个读取传入的源代码文件,生成对应的请求内容,通过多线程发送到chatgpt进行分析。然后将结果写入文件,并进行汇总分析。最后通过调用update_ui函数刷新界面,完整实现了源代码的分析。 - -## [30/48] 请对下面的程序文件做一个概述: crazy_functions\询问多个大语言模型.py - -该程序文件包含两个函数:同时问询()和同时问询_指定模型(),它们的作用是使用多个大语言模型同时对用户输入进行处理,返回对应模型的回复结果。同时问询()会默认使用ChatGPT和ChatGLM两个模型,而同时问询_指定模型()则可以指定要使用的模型。该程序文件还引用了其他的模块和函数库。 - -## [31/48] 请对下面的程序文件做一个概述: crazy_functions\读文章写摘要.py - -这个程序文件是一个Python模块,文件名为crazy_functions\读文章写摘要.py。该模块包含了两个函数,其中主要函数是"读文章写摘要"函数,其实现了解析给定文件夹中的tex文件,对其中每个文件的内容进行摘要生成,并根据各论文片段的摘要,最终生成全文摘要。第二个函数是"解析Paper"函数,用于解析单篇论文文件。其中用到了一些工具函数和库,如update_ui、CatchException、report_execption、write_results_to_file等。 - -## [32/48] 请对下面的程序文件做一个概述: crazy_functions\谷歌检索小助手.py - 
-该文件是一个Python模块,文件名为“谷歌检索小助手.py”。该模块包含两个函数,一个是“get_meta_information()”,用于从提供的网址中分析出所有相关的学术文献的元数据信息;另一个是“谷歌检索小助手()”,是主函数,用于分析用户提供的谷歌学术搜索页面中出现的文章,并提取相关信息。其中,“谷歌检索小助手()”函数依赖于“get_meta_information()”函数,并调用了其他一些Python模块,如“arxiv”、“math”、“bs4”等。 - -## [33/48] 请对下面的程序文件做一个概述: crazy_functions\高级功能函数模板.py - -该程序文件定义了一个名为高阶功能模板函数的函数,该函数接受多个参数,包括输入的文本、gpt模型参数、插件模型参数、聊天显示框的句柄、聊天历史等,并利用送出请求,使用 Unsplash API 发送相关图片。其中,为了避免输入溢出,函数会在开始时清空历史。函数也有一些 UI 更新的语句。该程序文件还依赖于其他两个模块:CatchException 和 update_ui,以及一个名为 request_gpt_model_in_new_thread_with_ui_alive 的来自 crazy_utils 模块(应该是自定义的工具包)的函数。 - -## [34/48] 请对下面的程序文件做一个概述: request_llm\bridge_all.py - -该文件包含两个函数:predict和predict_no_ui_long_connection,用于基于不同的LLM模型进行对话。该文件还包含一个lazyloadTiktoken类和一个LLM_CATCH_EXCEPTION修饰器函数。其中lazyloadTiktoken类用于懒加载模型的tokenizer,LLM_CATCH_EXCEPTION用于错误处理。整个文件还定义了一些全局变量和模型信息字典,用于引用和配置LLM模型。 - -## [35/48] 请对下面的程序文件做一个概述: request_llm\bridge_chatglm.py - -这是一个Python程序文件,名为`bridge_chatglm.py`,其中定义了一个名为`GetGLMHandle`的类和三个方法:`predict_no_ui_long_connection`、 `predict`和 `stream_chat`。该文件依赖于多个Python库,如`transformers`和`sentencepiece`。该文件实现了一个聊天机器人,使用ChatGLM模型来生成回复,支持单线程和多线程方式。程序启动时需要加载ChatGLM的模型和tokenizer,需要一段时间。在配置文件`config.py`中设置参数会影响模型的内存和显存使用,因此程序可能会导致低配计算机卡死。 - -## [36/48] 请对下面的程序文件做一个概述: request_llm\bridge_chatgpt.py - -该文件为 Python 代码文件,文件名为 request_llm\bridge_chatgpt.py。该代码文件主要提供三个函数:predict、predict_no_ui和 predict_no_ui_long_connection,用于发送至 chatGPT 并等待回复,获取输出。该代码文件还包含一些辅助函数,用于处理连接异常、生成 HTTP 请求等。该文件的代码架构清晰,使用了多个自定义函数和模块。 - -## [37/48] 请对下面的程序文件做一个概述: request_llm\bridge_jittorllms_llama.py - -该代码文件实现了一个聊天机器人,其中使用了 JittorLLMs 模型。主要包括以下几个部分: -1. GetGLMHandle 类:一个进程类,用于加载 JittorLLMs 模型并接收并处理请求。 -2. predict_no_ui_long_connection 函数:一个多线程方法,用于在后台运行聊天机器人。 -3. predict 函数:一个单线程方法,用于在前端页面上交互式调用聊天机器人,以获取用户输入并返回相应的回复。 - -这个文件中还有一些辅助函数和全局变量,例如 importlib、time、threading 等。 - -## [38/48] 请对下面的程序文件做一个概述: request_llm\bridge_jittorllms_pangualpha.py - -这个文件是为了实现使用jittorllms(一种机器学习模型)来进行聊天功能的代码。其中包括了模型加载、模型的参数加载、消息的收发等相关操作。其中使用了多进程和多线程来提高性能和效率。代码中还包括了处理依赖关系的函数和预处理函数等。 - -## [39/48] 请对下面的程序文件做一个概述: request_llm\bridge_jittorllms_rwkv.py - -这个文件是一个Python程序,文件名为request_llm\bridge_jittorllms_rwkv.py。它依赖transformers、time、threading、importlib、multiprocessing等库。在文件中,通过定义GetGLMHandle类加载jittorllms模型参数和定义stream_chat方法来实现与jittorllms模型的交互。同时,该文件还定义了predict_no_ui_long_connection和predict方法来处理历史信息、调用jittorllms模型、接收回复信息并输出结果。 - -## [40/48] 请对下面的程序文件做一个概述: request_llm\bridge_moss.py - -该文件为一个Python源代码文件,文件名为 request_llm\bridge_moss.py。代码定义了一个 GetGLMHandle 类和两个函数 predict_no_ui_long_connection 和 predict。 - -GetGLMHandle 类继承自Process类(多进程),主要功能是启动一个子进程并加载 MOSS 模型参数,通过 Pipe 进行主子进程的通信。该类还定义了 check_dependency、moss_init、run 和 stream_chat 等方法,其中 check_dependency 和 moss_init 是子进程的初始化方法,run 是子进程运行方法,stream_chat 实现了主进程和子进程的交互过程。 - -函数 predict_no_ui_long_connection 是多线程方法,调用 GetGLMHandle 类加载 MOSS 参数后使用 stream_chat 实现主进程和子进程的交互过程。 - -函数 predict 是单线程方法,通过调用 update_ui 将交互过程中 MOSS 的回复实时更新到UI(User Interface)中,并执行一个 named function(additional_fn)指定的函数对输入进行预处理。 - -## [41/48] 请对下面的程序文件做一个概述: request_llm\bridge_newbing.py - -这是一个名为`bridge_newbing.py`的程序文件,包含三个部分: - -第一部分使用from语句导入了`edge_gpt`模块的`NewbingChatbot`类。 - -第二部分定义了一个名为`NewBingHandle`的继承自进程类的子类,该类会检查依赖性并启动进程。同时,该部分还定义了一个名为`predict_no_ui_long_connection`的多线程方法和一个名为`predict`的单线程方法,用于与NewBing进行通信。 - -第三部分定义了一个名为`newbing_handle`的全局变量,并导出了`predict_no_ui_long_connection`和`predict`这两个方法,以供其他程序可以调用。 - -## [42/48] 请对下面的程序文件做一个概述: request_llm\bridge_newbingfree.py - 
-这个Python文件包含了三部分内容。第一部分是来自edge_gpt_free.py文件的聊天机器人程序。第二部分是子进程Worker,用于调用主体。第三部分提供了两个函数:predict_no_ui_long_connection和predict用于调用NewBing聊天机器人和返回响应。其中predict函数还提供了一些参数用于控制聊天机器人的回复和更新UI界面。 - -## [43/48] 请对下面的程序文件做一个概述: request_llm\bridge_stackclaude.py - -这是一个Python源代码文件,文件名为request_llm\bridge_stackclaude.py。代码分为三个主要部分: - -第一部分定义了Slack API Client类,实现Slack消息的发送、接收、循环监听,用于与Slack API进行交互。 - -第二部分定义了ClaudeHandle类,继承Process类,用于创建子进程Worker,调用主体,实现Claude与用户交互的功能。 - -第三部分定义了predict_no_ui_long_connection和predict两个函数,主要用于通过调用ClaudeHandle对象的stream_chat方法来获取Claude的回复,并更新ui以显示相关信息。其中predict函数采用单线程方法,而predict_no_ui_long_connection函数使用多线程方法。 - -## [44/48] 请对下面的程序文件做一个概述: request_llm\bridge_tgui.py - -该文件是一个Python代码文件,名为request_llm\bridge_tgui.py。它包含了一些函数用于与chatbot UI交互,并通过WebSocket协议与远程LLM模型通信完成文本生成任务,其中最重要的函数是predict()和predict_no_ui_long_connection()。这个程序还有其他的辅助函数,如random_hash()。整个代码文件在协作的基础上完成了一次修改。 - -## [45/48] 请对下面的程序文件做一个概述: request_llm\edge_gpt.py - -该文件是一个用于调用Bing chatbot API的Python程序,它由多个类和辅助函数构成,可以根据给定的对话连接在对话中提出问题,使用websocket与远程服务通信。程序实现了一个聊天机器人,可以为用户提供人工智能聊天。 - -## [46/48] 请对下面的程序文件做一个概述: request_llm\edge_gpt_free.py - -该代码文件为一个会话API,可通过Chathub发送消息以返回响应。其中使用了 aiohttp 和 httpx 库进行网络请求并发送。代码中包含了一些函数和常量,多数用于生成请求数据或是请求头信息等。同时该代码文件还包含了一个 Conversation 类,调用该类可实现对话交互。 - -## [47/48] 请对下面的程序文件做一个概述: request_llm\test_llms.py - -这个文件是用于对llm模型进行单元测试的Python程序。程序导入一个名为"request_llm.bridge_newbingfree"的模块,然后三次使用该模块中的predict_no_ui_long_connection()函数进行预测,并输出结果。此外,还有一些注释掉的代码段,这些代码段也是关于模型预测的。 - -## 用一张Markdown表格简要描述以下文件的功能: -check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, crazy_functional.py, main.py, multi_language.py, theme.py, toolbox.py, crazy_functions\crazy_functions_test.py, crazy_functions\crazy_utils.py, crazy_functions\Latex全文润色.py, crazy_functions\Latex全文翻译.py, crazy_functions\__init__.py, crazy_functions\下载arxiv论文翻译摘要.py。根据以上分析,用一句话概括程序的整体功能。 - -| 文件名 | 功能描述 | -| ------ | ------ | -| check_proxy.py | 检查代理有效性及地理位置 | -| colorful.py | 控制台打印彩色文字 | -| config.py | 配置和参数设置 | -| config_private.py | 私人配置和参数设置 | -| core_functional.py | 核心函数和参数设置 | -| crazy_functional.py | 高级功能插件集合 | -| main.py | 一个 Chatbot 程序,提供各种学术翻译、文本处理和其他查询服务 | -| multi_language.py | 识别和翻译不同语言 | -| theme.py | 自定义 gradio 应用程序主题 | -| toolbox.py | 工具类库,用于协助实现各种功能 | -| crazy_functions\crazy_functions_test.py | 测试 crazy_functions 中的各种函数 | -| crazy_functions\crazy_utils.py | 工具函数,用于字符串处理、异常检测、Markdown 格式转换等 | -| crazy_functions\Latex全文润色.py | 对整个 Latex 项目进行润色和纠错 | -| crazy_functions\Latex全文翻译.py | 对整个 Latex 项目进行翻译 | -| crazy_functions\__init__.py | 模块初始化文件,标识 `crazy_functions` 是一个包 | -| crazy_functions\下载arxiv论文翻译摘要.py | 下载 `arxiv` 论文的 PDF 文件,并提取摘要和翻译 | - -这些程序源文件提供了基础的文本和语言处理功能、工具函数和高级插件,使 Chatbot 能够处理各种复杂的学术文本问题,包括润色、翻译、搜索、下载、解析等。 - -## 用一张Markdown表格简要描述以下文件的功能: -crazy_functions\代码重写为全英文_多线程.py, crazy_functions\图片生成.py, crazy_functions\对话历史存档.py, crazy_functions\总结word文档.py, crazy_functions\总结音视频.py, crazy_functions\批量Markdown翻译.py, crazy_functions\批量总结PDF文档.py, crazy_functions\批量总结PDF文档pdfminer.py, crazy_functions\批量翻译PDF文档_多线程.py, crazy_functions\理解PDF文档内容.py, crazy_functions\生成函数注释.py, crazy_functions\联网的ChatGPT.py, crazy_functions\解析JupyterNotebook.py, crazy_functions\解析项目源代码.py, crazy_functions\询问多个大语言模型.py, crazy_functions\读文章写摘要.py。根据以上分析,用一句话概括程序的整体功能。 - -| 文件名 | 功能简述 | -| --- | --- | -| 代码重写为全英文_多线程.py | 将Python源代码文件中的中文内容转化为英文 | -| 图片生成.py | 根据激励文本使用GPT模型生成相应的图像 | -| 对话历史存档.py | 将每次对话记录写入Markdown格式的文件中 | -| 总结word文档.py | 对输入的word文档进行摘要生成 | -| 总结音视频.py | 对输入的音视频文件进行摘要生成 | -| 批量Markdown翻译.py | 
将指定目录下的Markdown文件进行中英文翻译 | -| 批量总结PDF文档.py | 对PDF文件进行切割和摘要生成 | -| 批量总结PDF文档pdfminer.py | 对PDF文件进行文本内容的提取和摘要生成 | -| 批量翻译PDF文档_多线程.py | 将指定目录下的PDF文件进行中英文翻译 | -| 理解PDF文档内容.py | 对PDF文件进行摘要生成和问题解答 | -| 生成函数注释.py | 自动生成Python函数的注释 | -| 联网的ChatGPT.py | 使用网络爬虫和ChatGPT模型进行聊天回答 | -| 解析JupyterNotebook.py | 对Jupyter Notebook进行代码解析 | -| 解析项目源代码.py | 对指定编程语言的源代码进行解析 | -| 询问多个大语言模型.py | 使用多个大语言模型对输入进行处理和回复 | -| 读文章写摘要.py | 对论文进行解析和全文摘要生成 | - -概括程序的整体功能:提供了一系列处理文本、文件和代码的功能,使用了各类语言模型、多线程、网络请求和数据解析技术来提高效率和精度。 - -## 用一张Markdown表格简要描述以下文件的功能: -crazy_functions\谷歌检索小助手.py, crazy_functions\高级功能函数模板.py, request_llm\bridge_all.py, request_llm\bridge_chatglm.py, request_llm\bridge_chatgpt.py, request_llm\bridge_jittorllms_llama.py, request_llm\bridge_jittorllms_pangualpha.py, request_llm\bridge_jittorllms_rwkv.py, request_llm\bridge_moss.py, request_llm\bridge_newbing.py, request_llm\bridge_newbingfree.py, request_llm\bridge_stackclaude.py, request_llm\bridge_tgui.py, request_llm\edge_gpt.py, request_llm\edge_gpt_free.py, request_llm\test_llms.py。根据以上分析,用一句话概括程序的整体功能。 - -| 文件名 | 功能描述 | -| --- | --- | -| crazy_functions\谷歌检索小助手.py | 提供谷歌学术搜索页面中相关文章的元数据信息。 | -| crazy_functions\高级功能函数模板.py | 使用Unsplash API发送相关图片以回复用户的输入。 | -| request_llm\bridge_all.py | 基于不同LLM模型进行对话。 | -| request_llm\bridge_chatglm.py | 使用ChatGLM模型生成回复,支持单线程和多线程方式。 | -| request_llm\bridge_chatgpt.py | 基于GPT模型完成对话。 | -| request_llm\bridge_jittorllms_llama.py | 使用JittorLLMs模型完成对话,支持单线程和多线程方式。 | -| request_llm\bridge_jittorllms_pangualpha.py | 使用JittorLLMs模型完成对话,基于多进程和多线程方式。 | -| request_llm\bridge_jittorllms_rwkv.py | 使用JittorLLMs模型完成聊天功能,提供包括历史信息、参数调节等在内的多个功能选项。 | -| request_llm\bridge_moss.py | 加载Moss模型完成对话功能。 | -| request_llm\bridge_newbing.py | 使用Newbing聊天机器人进行对话,支持单线程和多线程方式。 | -| request_llm\bridge_newbingfree.py | 基于Bing chatbot API实现聊天机器人的文本生成功能。 | -| request_llm\bridge_stackclaude.py | 基于Slack API实现Claude与用户的交互。 | -| request_llm\bridge_tgui.py | 通过websocket实现聊天机器人与UI界面交互。 | -| request_llm\edge_gpt.py | 调用Bing chatbot API提供聊天机器人服务。 | -| request_llm\edge_gpt_free.py | 实现聊天机器人API,采用aiohttp和httpx工具库。 | -| request_llm\test_llms.py | 对llm模型进行单元测试。 | -| 程序整体功能 | 实现不同种类的聊天机器人,可以根据输入进行文本生成。 | diff --git a/spaces/facebook/MusicGen/CODE_OF_CONDUCT.md b/spaces/facebook/MusicGen/CODE_OF_CONDUCT.md deleted file mode 100644 index 83f431e8feeb7e80d571f39c9f6c1b96857b5f85..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,80 +0,0 @@ -# Code of Conduct - -## Our Pledge - -In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to make participation in our project and -our community a harassment-free experience for everyone, regardless of age, body -size, disability, ethnicity, sex characteristics, gender identity and expression, -level of experience, education, socio-economic status, nationality, personal -appearance, race, religion, or sexual identity and orientation. 
- -## Our Standards - -Examples of behavior that contributes to creating a positive environment -include: - -* Using welcoming and inclusive language -* Being respectful of differing viewpoints and experiences -* Gracefully accepting constructive criticism -* Focusing on what is best for the community -* Showing empathy towards other community members - -Examples of unacceptable behavior by participants include: - -* The use of sexualized language or imagery and unwelcome sexual attention or -advances -* Trolling, insulting/derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or electronic -address, without explicit permission -* Other conduct which could reasonably be considered inappropriate in a -professional setting - -## Our Responsibilities - -Project maintainers are responsible for clarifying the standards of acceptable -behavior and are expected to take appropriate and fair corrective action in -response to any instances of unacceptable behavior. - -Project maintainers have the right and responsibility to remove, edit, or -reject comments, commits, code, wiki edits, issues, and other contributions -that are not aligned to this Code of Conduct, or to ban temporarily or -permanently any contributor for other behaviors that they deem inappropriate, -threatening, offensive, or harmful. - -## Scope - -This Code of Conduct applies within all project spaces, and it also applies when -an individual is representing the project or its community in public spaces. -Examples of representing a project or community include using an official -project e-mail address, posting via an official social media account, or acting -as an appointed representative at an online or offline event. Representation of -a project may be further defined and clarified by project maintainers. - -This Code of Conduct also applies outside the project spaces when there is a -reasonable belief that an individual's behavior may have a negative impact on -the project or its community. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported by contacting the project team at . All -complaints will be reviewed and investigated and will result in a response that -is deemed necessary and appropriate to the circumstances. The project team is -obligated to maintain confidentiality with regard to the reporter of an incident. -Further details of specific enforcement policies may be posted separately. - -Project maintainers who do not follow or enforce the Code of Conduct in good -faith may face temporary or permanent repercussions as determined by other -members of the project's leadership. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, -available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see -https://www.contributor-covenant.org/faq diff --git a/spaces/facebook/StyleNeRF/metrics/perceptual_path_length.py b/spaces/facebook/StyleNeRF/metrics/perceptual_path_length.py deleted file mode 100644 index 8d2c3a44aececa58a7c5602e14a24d424e51bf14..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/metrics/perceptual_path_length.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. 
-# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Perceptual Path Length (PPL) from the paper "A Style-Based Generator -Architecture for Generative Adversarial Networks". Matches the original -implementation by Karras et al. at -https://github.com/NVlabs/stylegan/blob/master/metrics/perceptual_path_length.py""" - -import copy -import numpy as np -import torch -import dnnlib -from . import metric_utils - -#---------------------------------------------------------------------------- - -# Spherical interpolation of a batch of vectors. -def slerp(a, b, t): - a = a / a.norm(dim=-1, keepdim=True) - b = b / b.norm(dim=-1, keepdim=True) - d = (a * b).sum(dim=-1, keepdim=True) - p = t * torch.acos(d) - c = b - d * a - c = c / c.norm(dim=-1, keepdim=True) - d = a * torch.cos(p) + c * torch.sin(p) - d = d / d.norm(dim=-1, keepdim=True) - return d - -#---------------------------------------------------------------------------- - -class PPLSampler(torch.nn.Module): - def __init__(self, G, G_kwargs, epsilon, space, sampling, crop, vgg16): - assert space in ['z', 'w'] - assert sampling in ['full', 'end'] - super().__init__() - self.G = copy.deepcopy(G) - self.G_kwargs = G_kwargs - self.epsilon = epsilon - self.space = space - self.sampling = sampling - self.crop = crop - self.vgg16 = copy.deepcopy(vgg16) - - def forward(self, c): - # Generate random latents and interpolation t-values. - t = torch.rand([c.shape[0]], device=c.device) * (1 if self.sampling == 'full' else 0) - z0, z1 = torch.randn([c.shape[0] * 2, self.G.z_dim], device=c.device).chunk(2) - - # Interpolate in W or Z. - if self.space == 'w': - w0, w1 = self.G.mapping(z=torch.cat([z0,z1]), c=torch.cat([c,c])).chunk(2) - wt0 = w0.lerp(w1, t.unsqueeze(1).unsqueeze(2)) - wt1 = w0.lerp(w1, t.unsqueeze(1).unsqueeze(2) + self.epsilon) - else: # space == 'z' - zt0 = slerp(z0, z1, t.unsqueeze(1)) - zt1 = slerp(z0, z1, t.unsqueeze(1) + self.epsilon) - wt0, wt1 = self.G.mapping(z=torch.cat([zt0,zt1]), c=torch.cat([c,c])).chunk(2) - - # Randomize noise buffers. - for name, buf in self.G.named_buffers(): - if name.endswith('.noise_const'): - buf.copy_(torch.randn_like(buf)) - - # Generate images. - img = self.G.synthesis(ws=torch.cat([wt0,wt1]), noise_mode='const', force_fp32=True, **self.G_kwargs) - - # Center crop. - if self.crop: - assert img.shape[2] == img.shape[3] - c = img.shape[2] // 8 - img = img[:, :, c*3 : c*7, c*2 : c*6] - - # Downsample to 256x256. - factor = self.G.img_resolution // 256 - if factor > 1: - img = img.reshape([-1, img.shape[1], img.shape[2] // factor, factor, img.shape[3] // factor, factor]).mean([3, 5]) - - # Scale dynamic range from [-1,1] to [0,255]. - img = (img + 1) * (255 / 2) - if self.G.img_channels == 1: - img = img.repeat([1, 3, 1, 1]) - - # Evaluate differential LPIPS. 
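-        # The PPL statistic below is the squared LPIPS distance between the two
-        # images synthesized from latents that sit epsilon apart along the
-        # interpolation path, divided by epsilon**2, i.e. a finite-difference
-        # estimate of the squared perceptual path derivative.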
- lpips_t0, lpips_t1 = self.vgg16(img, resize_images=False, return_lpips=True).chunk(2) - dist = (lpips_t0 - lpips_t1).square().sum(1) / self.epsilon ** 2 - return dist - -#---------------------------------------------------------------------------- - -def compute_ppl(opts, num_samples, epsilon, space, sampling, crop, batch_size, jit=False): - dataset = dnnlib.util.construct_class_by_name(**opts.dataset_kwargs) - vgg16_url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt' - vgg16 = metric_utils.get_feature_detector(vgg16_url, num_gpus=opts.num_gpus, rank=opts.rank, verbose=opts.progress.verbose) - - # Setup sampler. - sampler = PPLSampler(G=opts.G, G_kwargs=opts.G_kwargs, epsilon=epsilon, space=space, sampling=sampling, crop=crop, vgg16=vgg16) - sampler.eval().requires_grad_(False).to(opts.device) - if jit: - c = torch.zeros([batch_size, opts.G.c_dim], device=opts.device) - sampler = torch.jit.trace(sampler, [c], check_trace=False) - - # Sampling loop. - dist = [] - progress = opts.progress.sub(tag='ppl sampling', num_items=num_samples) - for batch_start in range(0, num_samples, batch_size * opts.num_gpus): - progress.update(batch_start) - c = [dataset.get_label(np.random.randint(len(dataset))) for _i in range(batch_size)] - c = torch.from_numpy(np.stack(c)).pin_memory().to(opts.device) - x = sampler(c) - for src in range(opts.num_gpus): - y = x.clone() - if opts.num_gpus > 1: - torch.distributed.broadcast(y, src=src) - dist.append(y) - progress.update(num_samples) - - # Compute PPL. - if opts.rank != 0: - return float('nan') - dist = torch.cat(dist)[:num_samples].cpu().numpy() - lo = np.percentile(dist, 1, interpolation='lower') - hi = np.percentile(dist, 99, interpolation='higher') - ppl = np.extract(np.logical_and(dist >= lo, dist <= hi), dist).mean() - return float(ppl) - -#---------------------------------------------------------------------------- diff --git a/spaces/falterWliame/Face_Mask_Detection/Galaxyofterrorwormsceneuncut BEST.md b/spaces/falterWliame/Face_Mask_Detection/Galaxyofterrorwormsceneuncut BEST.md deleted file mode 100644 index ab6d1c9e711393899347b0bb470dbc441ffe4fea..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Galaxyofterrorwormsceneuncut BEST.md +++ /dev/null @@ -1,38 +0,0 @@ -

    galaxyofterrorwormsceneuncut


    Download Zip ->->->-> https://urlca.com/2uDd4E



    -
-The movie is a sequel to both Corman's Evil Monkey and the original Galaxy of Terror and was originally released on July 5, 1981. The Plot The evil Col. Blasa Salsabil has escaped to the stars and with his new starship the Queen Anne, he returns to earth. Here he seeks to re-establish himself and his control over the New World. 3.1 James Cameron; 3.2 Taaffe O'Connell and "the worm scene". Q: - -Mock an interface in Jest / Enzyme - -How do you mock an interface in Jest / Enzyme? - -for example: - -var MyClass = function() ; - -MyClass.prototype.foo = function() ; - -module.exports = MyClass; - -How would you mock MyClass and/or it's functions to test against? - -A: - -You can mock the whole class by adding it to the test. - -import MyClass from './my-class'; - -describe('...', () => { - - it('...', () => { - - const mockedClass = jest.fn(MyClass); - - mockedClass.foo.mockResolvedValue(() => ); - - const myInstance = new MyClass(); - - expect(myInstance).toEqual 4fefd39f24
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Kassai Ilona Fonetika.pdf [EXCLUSIVE].md b/spaces/falterWliame/Face_Mask_Detection/Kassai Ilona Fonetika.pdf [EXCLUSIVE].md deleted file mode 100644 index 32f4c9c71e77da53d2c60240437fbd68a6f5761c..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Kassai Ilona Fonetika.pdf [EXCLUSIVE].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Kassai Ilona Fonetika.pdf


    Download Zip ->>> https://urlca.com/2uDcnf



    -
    -.•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..•.•..• 4fefd39f24
    -
    -
    -

    diff --git a/spaces/fartsmellalmao/combined-GI-RVC-models/lib/infer_pack/attentions.py b/spaces/fartsmellalmao/combined-GI-RVC-models/lib/infer_pack/attentions.py deleted file mode 100644 index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000 --- a/spaces/fartsmellalmao/combined-GI-RVC-models/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from lib.infer_pack import commons -from lib.infer_pack import modules -from lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - 
""" - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." 
- scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/farukozderim/bug_test_1/app.py b/spaces/farukozderim/bug_test_1/app.py deleted file mode 100644 index f6b8e36bf317095f4e9a8056cc94153caa99e54e..0000000000000000000000000000000000000000 --- a/spaces/farukozderim/bug_test_1/app.py +++ /dev/null @@ -1,13 +0,0 @@ -import gradio as gr -import os - - -def temp(input_1): - return "Hey" - - -print(f"System: {os.getenv('SYSTEM')}") - -iface = gr.Interface(fn=temp, inputs="sketchpad", outputs="textbox") -print(f"You are in the spaces: {iface.is_space}") -iface.launch(debug=True, share=True) diff --git a/spaces/fatiXbelha/sd/Chief Almighty Hile APK Assemble Ancient Beasts and Warriors.md b/spaces/fatiXbelha/sd/Chief Almighty Hile APK Assemble Ancient Beasts and Warriors.md deleted file mode 100644 index 0370435952b5199f2521c406bae31320d36a162e..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Chief Almighty Hile APK Assemble Ancient Beasts and Warriors.md +++ /dev/null @@ -1,93 +0,0 @@ -
    -

    Chief Almighty Hile APK: What Is It and How to Download It

    -

    Introduction

    -

    Chief Almighty is a strategy mobile game that lets you build your tribe, hunt ancient beasts, ally with other chiefs, and dominate the continent in the stone ages. The game features stunning graphics, real-time strategy assembling, global server competition, arcane ancient relics, and fierce collision between ancient creatures and primitive warriors. You can download Chief Almighty for free from Google Play Store or App Store .

    -

    However, some players are not satisfied with the normal gameplay and want to have an edge over others. They look for hile apk, which is a modified version of the game that gives them access to cheats such as unlimited resources, instant upgrades, free VIP benefits, and more. Hile apk is a Turkish term that means cheat apk.

    -

    chief almighty hile apk


    DOWNLOAD ✓✓✓ https://urllie.com/2uNEaK



    -

    Before you decide to use hile apk for Chief Almighty, you should be aware of the risks and consequences involved. Using hile apk is against the game's terms of service and may result in your account being banned or suspended. Moreover, hile apk may contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, we do not recommend or endorse using hile apk for Chief Almighty or any other game.

    -

    How to Download Chief Almighty Hile APK

    -

    If you still want to try hile apk for Chief Almighty at your own risk, here are the steps you need to follow:

    -
      -
    1. Find a reliable source of hile apk. There are many websites that claim to offer hile apk for Chief Almighty, but not all of them are trustworthy or updated. You should do some research and read reviews before downloading any file from an unknown source.
    2. -
    3. Enable unknown sources on your device. To install hile apk for Chief Almighty, you need to allow your device to install apps from sources other than Google Play Store or App Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
    4. -
    5. Download and install the hile apk file. Once you have found a reputable source of hile apk for Chief Almighty, download the file to your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to complete.
    6. -
    7. Launch the game and enjoy the cheats. After installing hile apk for Chief Almighty, you can launch the game from your app drawer or home screen. You should see some changes in the game interface and features that indicate that the cheats are activated. You can now enjoy unlimited resources, instant upgrades, free VIP benefits, and more.
    8. -
    -

    Pros and Cons of Using Chief Almighty Hile APK

    -

    Using hile apk for Chief Almighty may seem tempting and fun, but it also has some drawbacks and disadvantages that you should consider. Here are some of the pros and cons of using hile apk for Chief Almighty:

    - - - - - - - - - - - - - - - - - -
    ProsCons
    Faster progress. You can level up your tribe, research new technologies, train your warriors, and expand your territory faster with hile apk. You don't have to wait for timers or resources to complete your tasks.Risk of ban. Using hile apk is a violation of the game's terms of service and may result in your account being banned or suspended. You may lose all your progress and achievements if you get caught.
    Unlimited resources. You can get unlimited amounts of food, wood, stone, iron, and gold with hile apk. You don't have to worry about running out of resources or spending real money to buy them.Virus, malware, or spyware. Hile apk may contain harmful software that can damage your device or steal your personal information. You may expose your device to security risks or privacy breaches if you download hile apk from untrusted sources.
    More fun. You can enjoy the game more with hile apk. You can explore more features, unlock more items, and experience more challenges with hile apk. You can also have an advantage over other players in the global server competition.Unfair gameplay. Using hile apk is unfair to other players who play the game legitimately. You may ruin the game balance and the game fun for others who don't use hile apk. You may also face backlash or criticism from other players who don't like cheaters.
    -

    Conclusion

    -

    In conclusion, hile apk for Chief Almighty is a modified version of the game that gives you access to cheats such as unlimited resources, instant upgrades, free VIP benefits, and more. However, using hile apk is risky and unethical, as it may result in your account being banned or suspended, your device being infected with viruses or malware, or your gameplay being unfair and unenjoyable. Therefore, we recommend that you play the game without hile apk and enjoy the game as it is meant to be played.

    -

    If you want to download Chief Almighty for free from Google Play Store or App Store , you can use this link: [Download Chief Almighty]. If you want to learn more about Chief Almighty and its features, you can visit the official website or the official Facebook page.

    -

    We hope you found this article helpful and informative. If you have any thoughts or feedback about Chief Almighty or hile apk, please feel free to share them in the comments section below. We would love to hear from you!

    -

    FAQs

    -

    Q1: Is Chief Almighty Hile APK safe to use?

    -

    A1: No, Chief Almighty Hile APK is not safe to use. It may contain viruses, malware, or spyware that can harm your device or steal your personal information. It may also result in your account being banned or suspended by the game developers.

    -

    chief almighty mod apk unlimited money
    -chief almighty hack apk download
    -chief almighty cheats apk free
    -chief almighty latest version apk
    -chief almighty game apk offline
    -chief almighty android apk full
    -chief almighty online apk update
    -chief almighty premium apk cracked
    -chief almighty pro apk modded
    -chief almighty apk pure app
    -chief almighty apk mirror link
    -chief almighty apk rexdl site
    -chief almighty apk revdl file
    -chief almighty apk obb data
    -chief almighty apk no root
    -chief almighty apk for pc
    -chief almighty apk for ios
    -chief almighty apk for firestick
    -chief almighty apk for smart tv
    -chief almighty apk for chromebook
    -how to install chief almighty apk
    -how to play chief almighty apk
    -how to update chief almighty apk
    -how to hack chief almighty apk
    -how to cheat in chief almighty apk
    -best tips for chief almighty apk
    -best guide for chief almighty apk
    -best strategy for chief almighty apk
    -best review of chief almighty apk
    -best alternative to chief almighty apk
    -is chief almighty apk safe
    -is chief almighty apk legit
    -is chief almighty apk virus free
    -is chief almighty apk ad free
    -is chief almighty apk compatible with my device
    -what is new in chief almighty apk
    -what is the size of chief almighty apk
    -what is the rating of chief almighty apk
    -what is the genre of chief almighty apk
    -what is the developer of chief almighty apk

    -

    Q2: How can I update Chief Almighty Hile APK?

    -

    A2: You can update Chief Almighty Hile APK by downloading the latest version of the file from a reliable source. However, we do not recommend using hile apk at all, as it may cause problems for your device and your account.

    -

    Q3: Can I play Chief Almighty Hile APK with other players?

    -

    A3: Yes, you can play Chief Almighty Hile APK with other players on the global server competition. However, this is unfair and unethical, as you will have an advantage over other players who don't use hile apk. You may also face backlash or criticism from other players who don't like cheaters.

    -

    Q4: What are some alternatives to Chief Almighty Hile APK?

    -

    A4: Some alternatives to Chief Almighty Hile APK are playing the game legitimately without cheats, using tips and tricks from online guides and forums , or playing other similar strategy games that don't require hile apk.

    -

    Q5: Where can I find more information about Chief Almighty?

    -

    A5: You can find more information about Chief Almighty on the official website, the official Facebook page, the official YouTube channel, or the official Instagram account. You can also contact the customer service team if you have any questions or issues about the game.

    - <|im_end

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Devil Girl APK What You Need to Know Before Downloading.md b/spaces/fatiXbelha/sd/Devil Girl APK What You Need to Know Before Downloading.md deleted file mode 100644 index 17e11c9f7c91be9a1a78998ece70cb7e9fceacdc..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Devil Girl APK What You Need to Know Before Downloading.md +++ /dev/null @@ -1,144 +0,0 @@ - -

    Devil Girl APK: A Simulation Game with a Twist

    -

    If you are looking for a simulation game that is not only fun but also challenging, then you might want to try Devil Girl APK. This is a game that will test your observation skills, your decision-making skills, and your imagination. In this game, you will encounter various devil-like girls who have different personalities and stories. You will have to find out their secrets and help them solve their problems. But be careful, because not everything is as it seems in this game.

    -

    What is Devil Girl APK?

    -

    Devil Girl APK is a simulation game developed by Honor apps. It is available for Android devices and can be downloaded for free from various sources. In this game, you will play as a detective who has to investigate different cases involving devil-like girls. These girls are not really evil, but they have some issues that make them act in strange ways. You will have to observe their pictures carefully and find out the problematic parts. Then, you will have to click on them and see how the story unfolds. You will also have to make choices that will affect the outcome of the story.

    -

    devil girl apk


    Download File ✵✵✵ https://urllie.com/2uNDKD



    -

    The premise of the game

    -

    The game is set in a world where humans are resisting the control of gods and demons. You are a detective who works for a special agency that deals with supernatural cases. You have a device called the Devil Eye that allows you to see through the illusions and lies of the devil-like girls. You use this device to find out their true intentions and help them overcome their troubles. However, you also have to be careful not to fall into their traps or get corrupted by their influence.

    -

    The gameplay of the game

    -

    The gameplay of Devil Girl APK is simple but addictive. You will have to complete different levels that correspond to different cases. Each level consists of several pictures that show different scenes of the story. You will have to examine each picture carefully and find out the parts that are different from reality. These parts can be anything from objects, colors, shapes, expressions, etc. You will have to tap on them and see what happens next. Sometimes, you will have to make choices that will affect how the story progresses. You will also have to collect clues and evidence that will help you solve the case.

    -

    The features of the game

    -

    Devil Girl APK has many features that make it an enjoyable and engaging game. Some of these features are:

    -

    devil girl apk download
    -devil girl apk mod
    -devil girl apk latest version
    -devil girl apk android
    -devil girl apk game
    -devil girl apk free
    -devil girl apk offline
    -devil girl apk unlimited money
    -devil girl apk hack
    -devil girl apk full
    -devil girl apk story
    -devil girl apk review
    -devil girl apk cheats
    -devil girl apk update
    -devil girl apk gameplay
    -devil girl apk english
    -devil girl apk online
    -devil girl apk premium
    -devil girl apk cracked
    -devil girl apk unlocked
    -devil girl apk no ads
    -devil girl apk reddit
    -devil girl apk guide
    -devil girl apk tips
    -devil girl apk tricks
    -devil girl apk best choices
    -devil girl apk romance options
    -devil girl apk characters
    -devil girl apk endings
    -devil girl apk walkthrough
    -devil girl apk wiki
    -devil girl apk install
    -devil girl apk requirements
    -devil girl apk size
    -devil girl apk features
    -devil girl apk screenshots
    -devil girl apk video
    -devil girl apk trailer
    -devil girl apk rating
    -devil girl apk feedback
    -devil girl apk support
    -devil girl apk developer
    -devil girl apk genre
    -devil girl apk theme
    -devil girl apk plot
    -devil girl apk graphics
    -devil girl apk sound effects
    -devil girl apk music

    -
      -
    • Beautiful graphics and animations that create a realistic and immersive atmosphere.
    • -
    • Interesting and diverse characters that have their own personalities, backgrounds, and stories.
    • -
    • Multiple endings that depend on your choices and actions.
    • -
    • Challenging puzzles that require your attention and logic.
    • -
    • A captivating storyline that mixes mystery, romance, drama, and fantasy.
    • -
    • A user-friendly interface that is easy to navigate and control.
    • -
    • A sound system that enhances the mood and emotion of the game.
    • -
    -

    How to download and install Devil Girl APK?

    -

    If you want to play Devil Girl APK on your Android device, you will have to download and install it from a reliable source. Here are some of the requirements and steps that you need to follow:

    -

    The requirements for the game

    -

    Before you download and install Devil Girl APK, you need to make sure that your device meets some minimum requirements. These are:

    -
      -
    • An Android version of 4.0 or higher and a free storage space of at least 100 MB.
    • -
    • A stable internet connection and a compatible web browser.
    • -
    • A permission to install apps from unknown sources on your device. To enable this, you need to go to your device's settings, then navigate to security or privacy, then tap on the install unknown apps option. You will see a list of apps that can download other apps. You need to toggle on the switch next to your web browser. You may see a warning message that tells you the risks of installing apps from unknown sources. You need to tap on OK or Allow to proceed.
    • -
    -

    The steps to download and install the game

    -

    Once you have met the requirements, you can follow these steps to download and install Devil Girl APK on your device:

    -
      -
    1. Open your web browser and go to a reliable source that offers the Devil Girl APK file. Some of the best sources are APKMirror, MUO, and Internet Archive. These sources are safe and verified by many users.
    2. -
    3. Search for the Devil Girl APK file on the source's website. You can use the search bar or browse through the categories. Make sure you choose the latest version of the game, which is 1.0.4 as of July 2022.
    4. -
    5. Tap on the download button or link to start downloading the Devil Girl APK file. You may see a pop-up that asks you to confirm the download. Tap on OK or Download to continue.
    6. -
    7. Wait for the download to finish. You can check the progress in your notification bar or your download folder.
    8. -
    9. Once the download is complete, tap on the Devil Girl APK file to open it. You may see a pop-up that asks you to install the game. Tap on Install or Next to proceed.
    10. -
    11. Wait for the installation to finish. You may see a pop-up that tells you that the game is installed successfully. Tap on Open or Done to launch the game or exit the installer.
    12. -
    -

    The tips to play the game safely and smoothly

    -

    To enjoy playing Devil Girl APK without any issues, you should follow these tips:

    -
      -
    • Make sure you have enough battery life and memory space on your device before playing the game.
    • -
    • Close any unnecessary apps or background processes that may slow down your device or interfere with the game.
    • -
    • Update your device's software and security patches regularly to avoid any bugs or vulnerabilities.
    • -
    • Avoid downloading or installing any modded or hacked versions of the game that may contain malware or viruses.
    • -
    • Do not share your personal or financial information with anyone in the game or online.
    • -
    -

    Why should you play Devil Girl APK?

    -

    Devil Girl APK is not just a simple simulation game. It is a game that will challenge your mind, test your skills, and entertain you with its story and characters. Here are some of the reasons why you should play Devil Girl APK:

    -

    The benefits of playing the game

    -

    Playing Devil Girl APK can provide you with many benefits, such as:

    -
      -
    • Improving your observation skills by finding out the hidden details in each picture.
    • -
    • Enhancing your decision-making skills by choosing the best options for each situation.
    • -
    • Stimulating your imagination by creating your own scenarios and outcomes based on your choices.
    • -
    • Relaxing your mind by enjoying the graphics, sounds, and animations of the game.
    • -
    • Learning new things by discovering different facts and trivia about devil-like girls and their cultures.
    • -
    -

    The challenges of playing the game

    -

    Playing Devil Girl APK can also pose some challenges, such as:

    -
      -
    • Facing difficult puzzles that require your logic and reasoning.
    • -
    • Dealing with unexpected twists and turns that may surprise or shock you.
    • -
    • Coping with emotional moments that may make you laugh, cry, or angry.
    • -
    • Avoiding traps and temptations that may lead you to bad endings or consequences.
    • -
    • Balancing your morality and curiosity by deciding whether to help or harm the devil-like girls.
    • -
    -

    The comparison of the game with other similar games

    -

    Devil Girl APK is not the only simulation game that features devil-like girls. There are other similar games that you may have heard of or played before, such as:

    - - - - - -
    NameDescriptionDifference
    HelltakerA puzzle-adventure game where you try to assemble a harem of demon girls by solving puzzles and making choices.It has a more comedic and lighthearted tone than Devil Girl APK. It also has a more pixelated and retro style of graphics. It is available for Windows, Linux, and Mac devices.
    Devilish CharmsA romance simulation game where you date different devil-like boys who have different personalities and stories.It has a more romantic and dramatic tone than Devil Girl APK. It also has a more realistic and detailed style of graphics. It is available for Android and iOS devices.
    Devil May CryAn action-adventure game where you fight against demons and other supernatural enemies using various weapons and skills.It has a more action-packed and thrilling tone than Devil Girl APK. It also has a more 3D and cinematic style of graphics. It is available for various consoles and PC devices.
    -

    As you can see, Devil Girl APK has its own unique features and qualities that make it stand out from other similar games. It offers a different kind of experience that will appeal to different kinds of players.

    -

    Conclusion

    -

    Devil Girl APK is a simulation game that will challenge your mind, test your skills, and entertain you with its story and characters. It is a game that will make you think, feel, and imagine. It is a game that will make you curious, surprised, and satisfied. If you are looking for a game that is not only fun but also challenging, then you should try Devil Girl APK. You will not regret it.

    -

    FAQs

    -

    Here are some of the frequently asked questions about Devil Girl APK:

    -
      -
    • Q: Is Devil Girl APK safe to play?
    • -
    • A: Yes, Devil Girl APK is safe to play as long as you download and install it from a reliable source and follow the tips to play the game safely and smoothly. However, you should also be aware that the game contains some mature content and themes that may not be suitable for younger or sensitive players. You should play the game at your own discretion.
    • -
    • Q: How many levels are there in Devil Girl APK?
    • -
    • A: There are currently 20 levels in Devil Girl APK, each with its own devil-like girl and story. The developers may add more levels in the future updates.
    • -
    • Q: How can I get more clues and evidence in Devil Girl APK?
    • -
    • A: You can get more clues and evidence in Devil Girl APK by replaying the levels and making different choices. You can also watch ads or use in-app purchases to get more hints or skip puzzles.
    • -
    • Q: How can I get the best endings in Devil Girl APK?
    • -
    • A: You can get the best endings in Devil Girl APK by finding all the problematic parts in each picture, making the right choices for each situation, and helping the devil-like girls overcome their troubles. You can also use the Devil Eye device to see through their illusions and lies.
    • -
    • Q: How can I contact the developers of Devil Girl APK?
    • -
    • A: You can contact the developers of Devil Girl APK by sending them an email at honorapps@gmail.com or by following them on their Facebook page. You can also leave your feedback, suggestions, or questions on their Google Play Store page.
    • -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download and Install Naruto Shippuden Ultimate Ninja Storm 4 APK on Android for Free.md b/spaces/fatiXbelha/sd/Download and Install Naruto Shippuden Ultimate Ninja Storm 4 APK on Android for Free.md deleted file mode 100644 index 29bf96c5f923f2e8113ca39f3dfe756feaed4b05..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download and Install Naruto Shippuden Ultimate Ninja Storm 4 APK on Android for Free.md +++ /dev/null @@ -1,88 +0,0 @@ - -

    Download Naruto Ultimate Ninja Storm 4 for Android Free APK

    -

    If you are a fan of Naruto, you probably know that one of the best games based on the popular anime and manga series is Naruto Ultimate Ninja Storm 4. This game is a fighting game that lets you experience the most epic battles in the Naruto Shippuden saga, with stunning graphics, fluid animations, and immersive gameplay.

    -

    But did you know that you can also play this game on your Android device? Yes, you read that right. You can download Naruto Ultimate Ninja Storm 4 for Android free APK and enjoy this amazing game on your smartphone or tablet.

    -

    download naruto ultimate ninja storm 4 for android free apk


    Downloadhttps://urllie.com/2uNGnT



    -

    In this article, we will tell you everything you need to know about Naruto Ultimate Ninja Storm 4 for Android, including its features, how to download and install it, and some tips and tricks to help you become a ninja master. Let's get started!

    -

    Features of Naruto Ultimate Ninja Storm 4

    -

    Naruto Ultimate Ninja Storm 4 is not just another fighting game. It is a game that captures the essence of the Naruto Shippuden series, with its characters, story, and action. Here are some of the features that make this game so special:

    -

    Naruto Shippuden: Ultimate Ninja Storm 4 apk free download for android
    -How to install Ultimate Ninja Storm 4 game on android device
    -Ultimate Ninja Storm 4 android apk latest version 1.2
    -Download Naruto Games Ultimate Ninja Shippuden Storm 4 from FileHippo
    -Ultimate Ninja Storm 4 apk mod unlimited money and coins
    -Naruto Shippuden: Ultimate Ninja Storm 4 android gameplay and review
    -Best Naruto games for android: Ultimate Ninja Storm 4 and others
    -Ultimate Ninja Storm 4 apk offline mode without internet connection
    -Naruto Shippuden: Ultimate Ninja Storm 4 apk + obb data file download
    -Ultimate Ninja Storm 4 android apk compatible with all devices
    -Naruto Shippuden: Ultimate Ninja Storm 4 apk size and requirements
    -Ultimate Ninja Storm 4 apk features and tips
    -Download Ultimate Ninja Storm 4 from APKCombo for free
    -Naruto Shippuden: Ultimate Ninja Storm 4 apk update and patch notes
    -Ultimate Ninja Storm 4 apk cheats and hacks
    -Naruto Shippuden: Ultimate Ninja Storm 4 apk english version download
    -Ultimate Ninja Storm 4 apk portuguese version download from APKCombo
    -Naruto Shippuden: Ultimate Ninja Storm 4 apk graphics and sound quality
    -Ultimate Ninja Storm 4 apk characters and skills
    -Naruto Shippuden: Ultimate Ninja Storm 4 apk story mode and missions
    -Ultimate Ninja Storm 4 apk multiplayer mode and online battles
    -Naruto Shippuden: Ultimate Ninja Storm 4 apk ranking and rewards
    -Ultimate Ninja Storm 4 apk customization and settings
    -Naruto Shippuden: Ultimate Ninja Storm 4 apk fan-made mods and addons
    -Ultimate Ninja Storm 4 apk problems and solutions

    -
      -
    • The most epic fights in the Naruto Shippuden series. You can relive the iconic battles from the anime and manga, such as Naruto vs Sasuke, Madara vs Hashirama, Kaguya vs Team 7, and more. You can also create your own dream matches with any combination of characters.
    • -
    • A revamped battle system with new mechanics and options. You can take advantage of the totally redesigned battle system that offers more freedom and strategy. You can switch between your main character and two support characters at any time, use wall-running and elemental damage, break your opponent's guard with powerful attacks, and unleash devastating team combos.
    • -
    • A huge roster of characters from the anime and manga. You can choose from over 100 playable characters, each with their own unique abilities, moves, and personality. You can also customize your characters with different costumes, accessories, and items.
    • -
    • A thrilling story mode that covers the final arc of the Naruto Shippuden saga. You can follow the events of the Fourth Shinobi World War, from the rise of Obito Uchiha to the final showdown between Naruto and Sasuke. You can also explore different scenarios and outcomes depending on your choices.
    • -
    -

    How to Download Naruto Ultimate Ninja Storm 4 for Android Free APK

    -

    Now that you know what Naruto Ultimate Ninja Storm 4 has to offer, you might be wondering how to download it for your Android device. Well, it's not as hard as you might think. Here are the requirements and steps to do it:

    -
      -
    1. Make sure your Android device meets the minimum specifications. To run the game smoothly, you need at least Android 5.0 (Lollipop), 2 GB of RAM, and 3 GB of free storage space.
    2. -
    3. Download the APK file from a trusted source. An APK file is a package file that contains the installation files for an Android app. You can download Naruto Ultimate Ninja Storm 4 for Android free APK from various websites, but make sure you choose a reliable and legal one. Some of the best sources are APKPure, APKMirror, and APKCombo. These websites scan the APK files for viruses and malware, and provide the latest and original versions of the apps.
    4. -
    5. Enable the installation of unknown sources on your Android device. Since you are downloading the APK file from a third-party source, you need to allow your device to install apps from unknown sources. To do this, go to Settings > Security > Unknown Sources and toggle it on. You might also need to grant permission to your browser or file manager to install the APK file.
    6. -
    7. Install the APK file on your Android device. Once you have downloaded the APK file, you can install it by tapping on it and following the instructions on the screen. It might take a few minutes for the installation to complete, depending on your device and internet speed.
    8. -
    9. Enjoy playing Naruto Ultimate Ninja Storm 4 on your Android device. After the installation is done, you can launch the game from your app drawer or home screen. You can also create a shortcut for the game on your desktop for easy access. Now you can enjoy playing Naruto Ultimate Ninja Storm 4 on your Android device anytime and anywhere.
    10. -
    -

    However, before you download Naruto Ultimate Ninja Storm 4 for Android free APK, you should also be aware of some of the advantages and disadvantages of using an APK file. Here are some of them:

    - - - - - - - - - - - - - - - - - -
    AdvantagesDisadvantages
    - You can access apps that are not available in your region or on the Google Play Store.- You might encounter compatibility issues or bugs with some apps or devices.
    - You can get the latest updates and features of the apps before they are officially released.- You might expose your device to security risks or malware if you download from untrusted sources.
    - You can save storage space by downloading only the APK file instead of the whole app package.- You might violate the terms and conditions or copyrights of the app developers or publishers.
    -

    Tips and Tricks for Playing Naruto Ultimate Ninja Storm 4 on Android

    -

    Naruto Ultimate Ninja Storm 4 is a fun and challenging game that requires skill and strategy to master. If you want to improve your gameplay and enjoy the game more, here are some tips and tricks that might help you:

    -
      -
    • Optimize the game performance and settings on your Android device. To ensure that the game runs smoothly and without lag, you should close any background apps that might consume your RAM or battery. You should also adjust the game settings according to your device's capabilities. You can change the graphics quality, frame rate, sound effects, and controls from the options menu in the game.
    • -
    • Master the combat system and use the secret techniques, awakenings, and shurikens effectively. The combat system in Naruto Ultimate Ninja Storm 4 is complex and versatile, allowing you to perform various moves and combos with different characters. You should learn how to use the basic attacks, chakra dash, guard, counterattack, substitution jutsu, support characters, team combos, secret techniques, awakenings, and shurikens in different situations. You should also know the strengths and weaknesses of each character and their fighting style.
    • -
    • Unlock all the characters, costumes, and modes in the game. The game offers a lot of content and variety for you to explore. You can unlock new characters, costumes, and modes by completing different tasks and challenges in the game. For example, you can unlock new characters by playing through the story mode or completing certain missions. You can unlock new costumes by collecting ryo (the in-game currency) or completing certain objectives. You can unlock new modes by reaching certain ranks or achievements in the game.
    • -
    -

    Conclusion

    -

    Naruto Ultimate Ninja Storm 4 is a game that every Naruto fan should play. It is a game that delivers an authentic and immersive Naruto experience, with its stunning graphics, fluid gameplay, epic battles, huge roster of characters, thrilling story mode, and more. It is also a game that you can play on your Android device, thanks to the APK file that you can download for free from various sources. You can enjoy playing Naruto Ultimate Ninja Storm 4 on your smartphone or tablet anytime and anywhere, as long as you meet the minimum requirements and follow the steps to install it.

    -

    So what are you waiting for? Download Naruto Ultimate Ninja Storm 4 for Android free APK now and join the ultimate ninja storm!

    -

    FAQs

    -

    Here are some of the frequently asked questions and answers about Naruto Ultimate Ninja Storm 4 for Android and the APK file:

    -
    1. Is Naruto Ultimate Ninja Storm 4 for Android free? Yes, you can download and play the Naruto Ultimate Ninja Storm 4 for Android APK for free from various websites. However, you should be careful about the source and the quality of the APK file, as some of them might be fake, outdated, or infected with malware.
    2. Is Naruto Ultimate Ninja Storm 4 for Android legal? It depends on the country and the region where you live. Some countries and regions have strict laws and regulations regarding the use of APK files and the distribution of copyrighted content. Check the local laws and the terms and conditions of the app developers and publishers before downloading and installing the APK.
    3. Is Naruto Ultimate Ninja Storm 4 for Android safe? As long as you download the APK file from a trusted and legal source, and you enable the installation of unknown sources on your device, it should be safe to play. However, you should always scan the APK file with an antivirus or a malware detector before installing it, and back up your data and device before playing the game.
    4. How do I update Naruto Ultimate Ninja Storm 4 for Android? Download the latest version of the APK file from a reliable source and install it over the existing one. You should also check whether any updates or patches are available in the game itself and download them accordingly. (Power users can also apply an update from a computer with adb; see the sketch after this list.)
    5. How do I uninstall Naruto Ultimate Ninja Storm 4 for Android? Go to Settings > Apps > Naruto Ultimate Ninja Storm 4 and tap Uninstall. You should also delete any leftover files or folders related to the game from your device's storage. (The adb sketch below shows the command-line equivalent.)
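
    As a complement to the update and uninstall answers above, users who have USB debugging enabled can also manage the game from a computer with adb (the Android Debug Bridge). The Python sketch below simply wraps two standard adb commands; the APK file name and the package name are placeholders, since the real package name depends on the build you installed.

```python
import subprocess

# Placeholder names -- substitute your actual APK file and package id.
apk_file = "naruto-storm-4-update.apk"
package_name = "com.example.narutostorm4"

# "adb install -r" reinstalls the app in place, keeping its data, which is
# the usual way to apply an APK update over an existing installation.
subprocess.run(["adb", "install", "-r", apk_file], check=True)

# To remove the game completely instead, uninstall it by package name:
# subprocess.run(["adb", "uninstall", package_name], check=True)
```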

    \ No newline at end of file diff --git a/spaces/fazni/Resume-filter-plus-QA-documents/model_Responce.py b/spaces/fazni/Resume-filter-plus-QA-documents/model_Responce.py deleted file mode 100644 index e8d2b30ce907e7c7855f5c0a0083fdb226db7e40..0000000000000000000000000000000000000000 --- a/spaces/fazni/Resume-filter-plus-QA-documents/model_Responce.py +++ /dev/null @@ -1,39 +0,0 @@ -import pickle -import joblib -import numpy as np -import tensorflow as tf -from keras.utils import pad_sequences -from keras.preprocessing.text import Tokenizer - -# Load the model from the pickle file -# filename = 'F:/CVFilter/models/model_pk.pkl' - -# with open(filename, 'rb') as file: -# model = pickle.load(file) - -# Load the saved model -# model = joblib.load('F:\CVFilter\models\model.joblib') - -model = tf.keras.models.load_model('models/model.h5') - -tokenfile = 'tokenized_words/tokenized_words.pkl' -# Load the tokenized words from the pickle file -with open(tokenfile, 'rb') as file: - loaded_tokenized_words = pickle.load(file) - -max_review_length = 200 -tokenizer = Tokenizer(num_words=10000, #max no. of unique words to keep - filters='!"#$%&()*+,-./:;<=>?@[\]^_`{|}~', - lower=True #convert to lower case - ) -tokenizer.fit_on_texts(loaded_tokenized_words) - -outcome_labels = ['Business Analyst', 'Cyber Security','Data Engineer','Data Science','DevOps','Machine Learning Engineer','Mobile App Developer','Network Engineer','Quality Assurance','Software Engineer'] - -def model_prediction(text, model=model, tokenizer=tokenizer, labels=outcome_labels): - seq = tokenizer.texts_to_sequences([text]) - padded = pad_sequences(seq, maxlen=max_review_length) - pred = model.predict(padded) - # print("Probability distribution: ", pred) - # print("Field ") - return labels[np.argmax(pred)] \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_large_resume.sh b/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_large_resume.sh deleted file mode 100644 index e21a61f48a96f1d831c90d3cbc3a9cbe8eb7de38..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_large_resume.sh +++ /dev/null @@ -1,91 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=zen2_large_resume # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=1 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. -#SBATCH -o /cognitive_comp/ganruyi/experiments/ner_finetune/zen2_large_resume/%x-%j.log # output and error file name (%x=job name, %j=job id) - - -# export CUDA_VISIBLE_DEVICES='2' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions - -MODEL_NAME=zen2_large - -TASK=resume - -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! 
-fi - -DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/Resume/ -PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_large_2.0 - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.char.bmes \ - --valid_data test.char.bmes \ - --test_data test.char.bmes \ - --train_batchsize 16 \ - --valid_batchsize 16 \ - --max_seq_length 256 \ - --task_name resume \ - " - -MODEL_ARGS="\ - --learning_rate 3e-5 \ - --weight_decay 0.1 \ - --warmup_ratio 0.01 \ - --markup bioes \ - --middle_prefix M- \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_f1 \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 100 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_f1:.4f} \ - " - -TRAINER_ARGS="\ - --max_epochs 30 \ - --gpus 1 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 100 \ - --default_root_dir $ROOT_DIR \ - " - - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \ - --do_lower_case \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ -" -SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py -/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# python3 $SCRIPT_PATH $options -# source activate base -# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git a/spaces/fclong/summary/fengshen/models/GAVAE/GAVAEModel.py b/spaces/fclong/summary/fengshen/models/GAVAE/GAVAEModel.py deleted file mode 100644 index fa74f95fd775ed17c9e25d9564f94c93b50347f8..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/models/GAVAE/GAVAEModel.py +++ /dev/null @@ -1,67 +0,0 @@ -# -*- encoding: utf-8 -*- -''' -Copyright 2022 The International Digital Economy Academy (IDEA). CCNL team. All rights reserved. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-@File : GAVAEModel.py -@Time : 2022/11/04 11:35 -@Author : Liang Yuxin -@Version : 1.0 -@Contact : liangyuxin@idea.edu.cn -@License : (C)Copyright 2022-2023, CCNL-IDEA -''' -import torch -from transformers.modeling_utils import PreTrainedModel -from transformers.configuration_utils import PretrainedConfig - -from fengshen.models.DAVAE.DAVAEModel import DAVAEModel -from fengshen.models.GAVAE.gans_model import gans_process - - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -class GAVAEPretrainedModel(PreTrainedModel): - def _init_weights(self, module): - """ Initialize the weights """ - pass # to bypass the not implement error - -class GAVAEModel(GAVAEPretrainedModel): - config_class = PretrainedConfig - def __init__(self, config:PretrainedConfig) -> None: - super().__init__(config) - self.config =config - config.device = device - self.gan = gans_process(self.config) - self.vae_model = DAVAEModel(self.config) - - def train_gan(self,encoder_tokenizer,decoder_tokenizer,input_texts): - self.vae_model.set_tokenizers(encoder_tokenizer,decoder_tokenizer) - n = len(input_texts) - inputs_latents = self.vae_model.latent_code_from_text_batch(input_texts) - well_trained_gan = False - while not well_trained_gan: - self.gan_training(inputs_latents) - latent = torch.tensor(self.gan.gen_test(n)) - if not latent.isnan().any(): - well_trained_gan = True - - def generate(self,n): - latent_z = torch.tensor(self.gan.gen_test(n)).to(device) - text = self.vae_model.text_from_latent_code_batch(latent_z,prompt=None) - return text - - def gan_training(self,inputs_latents): - for gt in range(self.config.gan_epoch): - x_train,y_train,x_test,y_test,perm = self.gan.ready_cls(inputs_latents) - # sent_output:latent_z inputs_labels:id of class label - self.gan.cls_train(x_train, y_train) - x2_gen, y_gen, s_gen = self.gan.ready_gen(inputs_latents) - # s_gen:sent_output - self.gan.gen_train(x2_gen, y_gen, s_gen, gt) diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Chess GUI Software for Windows and Linux Tips and Tricks.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Chess GUI Software for Windows and Linux Tips and Tricks.md deleted file mode 100644 index 775501ab5c302823b0fe46a9688768be1e913d58..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Chess GUI Software for Windows and Linux Tips and Tricks.md +++ /dev/null @@ -1,110 +0,0 @@ -
    -

    How to Download Chess GUI

    -

    If you are a chess enthusiast, you might want to play chess against a computer or analyze your games with a powerful engine. For that, you need a chess graphical user interface (GUI) that can connect to various chess engines and provide you with various features and options. In this article, we will explain what is chess GUI, why do you need it, how to choose one, and how to download and install it on your computer.

    -

    how to download chess gui


    Download ……… https://gohhs.com/2uPs5r



    -

    What is a Chess GUI?

    -

    A chess GUI is a software program that allows you to interact with a chess engine. A chess engine is a program that calculates the best moves in a given position and evaluates the strength of each side. A chess GUI provides you with a graphical representation of the chess board, pieces, moves, and analysis. It also lets you play games against the engine, adjust its settings, load and save games, use opening books and endgame tablebases, and more.

    -

    Why do you need a Chess GUI?

    -

    There are many benefits of using a chess GUI. Here are some of them:

    -
    • You can improve your chess skills by playing against different levels of engines, analyzing your games, and learning from their suggestions.
    • You can enjoy the game of chess in different ways, such as playing variants like Chess960, using different board styles and piece sets, and listening to move announcements.
    • You can test and compare different chess engines and see how they perform in various positions and situations.
    • You can access a large database of chess games and positions and search for specific openings, players, events, etc.

    How to choose a Chess GUI?

    -

    There are many chess GUIs available on the internet, but not all of them are equally good. You need to consider some factors before choosing one. Here are some of them:

    -

    Features to look for in a Chess GUI

    -

    Compatibility

    -

    The first thing you need to check is whether the chess GUI is compatible with your operating system (Windows, Linux, Mac, Android, iOS, etc.) and your device (PC, laptop, tablet, smartphone, etc.). You also need to make sure that the chess GUI supports the protocols that your chess engines use. The most common protocols are UCI (Universal Chess Interface) and Winboard (or XBoard). Most modern chess engines use UCI, but some older ones use Winboard. Some chess GUIs can support both protocols, but some can only support one.
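
    To make the protocol point concrete, here is a minimal sketch of how a GUI drives a UCI engine. It assumes a UCI engine such as Stockfish is installed and reachable on your PATH under the name `stockfish`; the commands exchanged (`uci`, `isready`, `position`, `go`, and the engine's `bestmove` reply) are the core of the UCI protocol, sent over the engine's standard input and output.

```python
import subprocess

# Assumes a UCI engine binary named "stockfish" is on your PATH;
# point the command at whatever engine you actually installed.
engine = subprocess.Popen(
    ["stockfish"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def send(command: str) -> None:
    """Write one UCI command to the engine."""
    engine.stdin.write(command + "\n")
    engine.stdin.flush()

def read_until(prefix: str) -> str:
    """Read engine output until a line starting with the given prefix appears."""
    while True:
        line = engine.stdout.readline().strip()
        if line.startswith(prefix):
            return line

send("uci")                            # ask the engine to identify itself
read_until("uciok")                    # engine confirms it speaks UCI
send("isready")
read_until("readyok")
send("position startpos moves e2e4")   # set up the position after 1.e4
send("go depth 12")                    # search twelve plies deep
print(read_until("bestmove"))          # e.g. "bestmove e7e5 ponder g1f3"
send("quit")
```

    A Winboard/XBoard engine expects a different command set, which is why a GUI that only implements UCI cannot drive it directly.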

    -

    Interface

    -

    The second thing you need to consider is how easy and user-friendly the interface of the chess GUI is. You want a chess GUI that has a clear and intuitive layout, with menus, buttons, icons, and tabs that are easy to access and understand. You also want a chess GUI that has customizable options, such as changing the colors, fonts, sounds, languages, etc. You also want a chess GUI that has a responsive and smooth performance, without bugs or crashes.

    -

    Functionality

    -

    The third thing you need to look for is what features and functions the chess GUI offers. You want a chess GUI that has all the basic features that you need, such as playing games against engines or humans online or offline, loading and saving games in PGN format, using opening books and endgame tablebases, displaying analysis lines and evaluations, setting up positions and playing from them, etc. You also want a chess GUI that has some advanced features that you might want, such as playing variants like Chess960 (also known as Fischer Random Chess).
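
    As an aside on the PGN point above, this is roughly what loading a game file looks like programmatically. The sketch uses the third-party python-chess library (an assumption on my part, installed with `pip install python-chess`), and `games.pgn` is a placeholder for any PGN file exported from your GUI.

```python
import chess.pgn

# Placeholder path -- point this at a PGN file saved by your chess GUI.
with open("games.pgn", encoding="utf-8") as pgn_file:
    game = chess.pgn.read_game(pgn_file)   # parse the first game in the file

print(game.headers.get("White"), "vs", game.headers.get("Black"))

board = game.board()                        # starting position of that game
for move in game.mainline_moves():          # walk the main line move by move
    board.push(move)                        # play each move on the board

print(board)                                # text diagram of the final position
print("Result:", game.headers.get("Result"))
```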

    Popular Chess GUIs to consider

    -

    Now that you know what to look for in a chess GUI, you might be wondering which ones are the best to choose from. There are many chess GUIs available on the internet, but some of them are more popular and reliable than others. Here are some of the most popular chess GUIs that you can consider:

    -

    Arena Chess GUI

    -

    Arena Chess GUI is a free and powerful chess GUI that supports UCI and Winboard protocols. It has a simple and elegant interface, with many features and options. You can play games against various engines, analyze your games with multiple engines, use opening books and endgame tablebases, access a huge online database of games, create your own tournaments, and more. You can also customize the board style, piece set, sounds, colors, fonts, etc. Arena Chess GUI comes with a few engines pre-installed, but you can also download and install many more from the internet.

    -

    How to install Arena Chess GUI on Windows
    -How to use Stockfish chess engine with UCI-compatible chess GUI
    -How to play online chess against Stockfish on Chess UI
    -How to download and run Stockfish 15.1 on Android
    -How to get SmallFish app for iOS with Stockfish engine
    -How to set up Leela Chess and Fruit cloud engines on Chess UI
    -How to configure DGT electronic chess board with Arena Chess GUI
    -How to test chess engines with Arena tournament features
    -How to analyze games and positions with Stockfish on Linux
    -How to use Gaviota tablebases for endgame analysis with Arena
    -How to download and unzip Arena 3.5.1 ZIP file for Windows
    -How to install Stockfish through Homebrew on macOS
    -How to play Chess960 with Arena Chess GUI and Stockfish
    -How to use the analysis board and opening book on Chess UI
    -How to update Stockfish to the latest development build
    -How to use the move announcements feature in Arena setup
    -How to play against Novag Citrine Chess computer with Arena on Windows
    -How to use the Speedtest tool to measure the performance of Stockfish
    -How to analyze EPD files with Arena Chess GUI
    -How to replay, search and filter PGN databases with Arena
    -How to download and install Stockfish app for macOS
    -How to use the EloStat tool to calculate the rating of Stockfish
    -How to play against different strengths of Stockfish on Chess UI
    -How to use the POPCNT and 64-bit versions of Stockfish for faster performance
    -How to use the ARMv8 and ARMv7 versions of Stockfish for Android devices
    -How to use the Winboard protocol with Arena Chess GUI and chess engines
    -How to use the analysis lines and principal variations of Stockfish in Arena
    -How to adjust Arena according to your personal preferences and interface settings
    -How to use the dual monitor support feature in Arena Chess GUI
    -How to use the scroll mainlines feature in Arena with mouse wheel
    -How to download and install Wine for Linux or Mac to run Arena Chess GUI
    -How to use the net energy gain feature of Stockfish in nuclear fusion experiments (just kidding 😜)
    -How to download and install SmallFish app for iOS with Leela Chess engine
    -How to use the opening name feature in Arena Chess GUI with Perfect_2010 book by Sedat Canbas
    -How to play against different chess engines bundled with Arena setup or zip file
    -How to use the Olympiad mini book in Arena Chess GUI by Kevin Frayer
    -How to download and install Fruit chess engine for Windows, Linux or Mac
    -How to use the UCI protocol with Arena Chess GUI and chess engines
    -How to use the DGT clock feature in Arena Chess GUI for timed games
    -How to download and install Leela Chess engine for Windows, Linux or Mac
    -How to use the redrawing feature in Wine for Linux or Mac with Arena Chess GUI
    -How to download and install Titus 2.4 mini book for Arena Chess GUI by Kevin Frayer
    -How to use the PGN database from Olivier DEVILLE in Arena Chess GUI
    -How to download and install Rybka 2.3.2a free chess engine for Windows
    -How to use the large fonts and large caption bars feature in Arena Chess GUI
    -How to download and install Spike 1.4 chess engine for Windows
    -How to use the ponder on feature in Arena tournament mode
    -How to download and install Hermann 2.8 chess engine for Windows
    -How to use the restarts feature in Arena tournaments after 20 games or less

    -

    Stockfish Chess GUI

    -

    Stockfish Chess GUI is a free and open source chess GUI that is based on the Stockfish engine, one of the strongest chess engines in the world. It has a simple and intuitive interface, with many features and options. You can play games against the Stockfish engine or other online players, analyze your games with the engine, use opening books and endgame tablebases, access a large database of games, create your own puzzles, and more. You can also customize the board style, piece set, sounds, colors, fonts, etc. Stockfish Chess GUI is compatible with UCI protocol and can connect to other UCI engines as well.

    -

    Chess UI

    -

    Chess UI is a free and user-friendly chess GUI that supports UCI protocol. It has a modern and sleek interface, with many features and options. You can play games against various engines, analyze your games with the engine, use opening books and endgame tablebases, access a large database of games, create your own tournaments, and more. You can also customize the board style, piece set, sounds, colors, fonts, etc. Chess UI comes with a few engines pre-installed, but you can also download and install many more from the internet.

    -

    How to download and install Chess GUI?

    -

    Once you have decided which chess GUI you want to use, you need to download and install it on your computer. The process is usually simple and straightforward, but it may vary slightly depending on the chess GUI you choose. Here are the general steps that you need to follow:

    -

    Step 1: Find the download link for your chosen Chess GUI

    -

    The first step is to find the download link for your chosen chess GUI. You can usually find it on the official website of the chess GUI or on other reputable websites that offer chess software downloads. For example, you can find the download links for Arena Chess GUI, Stockfish Chess GUI, and Chess UI on their respective websites. Make sure that you download the correct version for your operating system (Windows, Linux, Mac, etc.) and your device (PC, laptop, tablet, smartphone, etc.).

    -

    Step 2: Save the file to your computer and unzip it if necessary

    -

    The second step is to save the file to your computer and unzip it if necessary. The file will usually be in a compressed format such as ZIP or RAR. You will need a program such as WinZip or WinRAR to extract the contents of the file. Once you have extracted the contents of the file, you will see a folder containing the files of the chess GUI.
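
    If the download is a ZIP archive and you would rather not install a separate archiver, Python's standard library can do the extraction. The sketch below assumes a hypothetical archive name, `arena_chess_gui.zip`, and unpacks it into a folder of the same name next to it; RAR archives still need a dedicated tool such as WinRAR or 7-Zip.

```python
import zipfile
from pathlib import Path

# Placeholder file name -- use whatever archive you actually downloaded.
archive = Path("arena_chess_gui.zip")
target = archive.with_suffix("")          # e.g. ./arena_chess_gui/

with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)                 # unpack every file in the archive

print("Extracted", sum(1 for _ in target.rglob("*")), "items into", target)
```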

    -

    Step 3: Run the setup program or the executable file to launch the Chess GUI

    -

    The third step is to run the setup program or the executable file to launch the chess GUI. Depending on the chess GUI you choose, you may need to run a setup program that will guide you through the installation process or simply run an executable file that will launch the chess GUI directly. For example, Arena Chess GUI has a setup program that will ask you to agree to the terms of use, choose a destination folder for installation, create shortcuts for easy access, etc. Stockfish Chess GUI and Chess UI do not have a setup program; they have an executable file that will launch the chess GUI when you double-click on it.

    -

    Conclusion

    -

    In this article, we have explained what a chess GUI is, why you need one, how to choose one, and how to download and install it on your computer. We hope that this article has helped you understand how to download a chess GUI and enjoy playing chess with various engines and features. If you have any questions or comments about this article or about chess GUIs, please feel free to contact us or leave a comment below. We would love to hear from you and help you with your chess journey. Thank you for reading and happy chess playing!

    -

    FAQs

    -

    Here are some of the frequently asked questions about chess GUI:

    -

    What is the difference between a chess GUI and a chess engine?

    -

    A chess GUI is a software program that allows you to interact with a chess engine. A chess engine is a software program that calculates the best moves in a given position and evaluates the strength of each side. A chess GUI provides you with a graphical representation of the chess board, pieces, moves, and analysis. A chess engine provides you with the numerical and verbal output of its calculations.

    -

    Can I use any chess engine with any chess GUI?

    -

    Not necessarily. You need to make sure that the chess engine and the chess GUI use the same protocol to communicate with each other. The most common protocols are UCI (Universal Chess Interface) and Winboard (or XBoard). Most modern chess engines use UCI, but some older ones use Winboard. Some chess GUIs can support both protocols, but some can only support one. You need to check the compatibility of the chess engine and the chess GUI before using them together.

    -

    How can I download more chess engines?

    -

    There are many websites that offer free or paid downloads of various chess engines. Some of the most popular ones are Chess Engines Diary, CCRL 40/15, and TCEC. You can also find some links to download chess engines on the official websites of some chess GUIs, such as Arena Chess GUI and Stockfish Chess GUI. You need to make sure that you download the correct version for your operating system and device.

    -

    How can I update my chess engine or chess GUI?

    -

    You can update your chess engine or chess GUI by downloading the latest version from the internet and replacing the old files with the new ones. You can also check if your chess engine or chess GUI has an automatic update feature that will notify you when a new version is available and download it for you. You need to make sure that you backup your settings and files before updating your chess engine or chess GUI.

    -

    How can I improve my chess skills with a chess GUI?

    -

    You can improve your chess skills with a chess GUI by using it in various ways, such as:

    -
    • Playing games against different levels of engines or online players and analyzing your mistakes and weaknesses.
    • Studying openings, middlegames, and endgames with the help of opening books, endgame tablebases, and engine analysis.
    • Solving puzzles and exercises that test your tactical and strategic skills.
    • Watching and learning from the games of strong players and engines.
    • Creating your own scenarios and positions and playing them out with the engine.

    \ No newline at end of file diff --git a/spaces/fffffu/bing/src/components/chat-message.tsx b/spaces/fffffu/bing/src/components/chat-message.tsx deleted file mode 100644 index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000 --- a/spaces/fffffu/bing/src/components/chat-message.tsx +++ /dev/null @@ -1,93 +0,0 @@ -import remarkGfm from 'remark-gfm' -import remarkMath from 'remark-math' -import supersub from 'remark-supersub' -import remarkBreaks from 'remark-breaks' -import { cn } from '@/lib/utils' -import { CodeBlock } from '@/components/ui/codeblock' -import { MemoizedReactMarkdown } from '@/components/markdown' -import { LearnMore } from './learn-more' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { useEffect } from 'react' -import { TurnCounter } from './turn-counter' - -export interface ChatMessageProps { - message: ChatMessageModel -} - -export function ChatMessage({ message, ...props }: ChatMessageProps) { - useEffect(() => { - if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) { - window.scrollBy(0, 200) - } - }, [message.text]) - - return message.text ? ( -
    -
    - {obj.alt} - } - } catch (e) { - } - return {obj.alt} - }, - p({ children }) { - return

    {children}

    - }, - code({ node, inline, className, children, ...props }) { - if (children.length) { - if (children[0] == '▍') { - return ( - - ) - } - - children[0] = (children[0] as string).replace('`▍`', '▍') - } - - const match = /language-(\w+)/.exec(className || '') - - if (inline) { - return ( - - {children} - - ) - } - - return ( - - ) - } - }} - > - {message.text} -
    -
    -
    - {message.author === 'bot' && } - {message.author === 'bot' && } -
    -
    - ) : null -} diff --git a/spaces/fffiloni/ControlVideo/models/RIFE/IFNet_HDv3.py b/spaces/fffiloni/ControlVideo/models/RIFE/IFNet_HDv3.py deleted file mode 100644 index d57f0a2f0889fec5d68c52bf99bf2dbd91150381..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/ControlVideo/models/RIFE/IFNet_HDv3.py +++ /dev/null @@ -1,130 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from diffusers import ModelMixin - -from .warplayer import warp - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -def conv(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1): - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, - padding=padding, dilation=dilation, bias=True), - nn.PReLU(out_planes) - ) - -def conv_bn(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1): - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, - padding=padding, dilation=dilation, bias=False), - nn.BatchNorm2d(out_planes), - nn.PReLU(out_planes) - ) - -def convert(param): - return { - k.replace("module.", ""): v - for k, v in param.items() - if "module." in k - } - -class IFBlock(nn.Module): - def __init__(self, in_planes, c=64): - super(IFBlock, self).__init__() - self.conv0 = nn.Sequential( - conv(in_planes, c//2, 3, 2, 1), - conv(c//2, c, 3, 2, 1), - ) - self.convblock0 = nn.Sequential( - conv(c, c), - conv(c, c) - ) - self.convblock1 = nn.Sequential( - conv(c, c), - conv(c, c) - ) - self.convblock2 = nn.Sequential( - conv(c, c), - conv(c, c) - ) - self.convblock3 = nn.Sequential( - conv(c, c), - conv(c, c) - ) - self.conv1 = nn.Sequential( - nn.ConvTranspose2d(c, c//2, 4, 2, 1), - nn.PReLU(c//2), - nn.ConvTranspose2d(c//2, 4, 4, 2, 1), - ) - self.conv2 = nn.Sequential( - nn.ConvTranspose2d(c, c//2, 4, 2, 1), - nn.PReLU(c//2), - nn.ConvTranspose2d(c//2, 1, 4, 2, 1), - ) - - def forward(self, x, flow, scale=1): - x = F.interpolate(x, scale_factor= 1. / scale, mode="bilinear", align_corners=False, recompute_scale_factor=False) - flow = F.interpolate(flow, scale_factor= 1. / scale, mode="bilinear", align_corners=False, recompute_scale_factor=False) * 1. 
/ scale - feat = self.conv0(torch.cat((x, flow), 1)) - feat = self.convblock0(feat) + feat - feat = self.convblock1(feat) + feat - feat = self.convblock2(feat) + feat - feat = self.convblock3(feat) + feat - flow = self.conv1(feat) - mask = self.conv2(feat) - flow = F.interpolate(flow, scale_factor=scale, mode="bilinear", align_corners=False, recompute_scale_factor=False) * scale - mask = F.interpolate(mask, scale_factor=scale, mode="bilinear", align_corners=False, recompute_scale_factor=False) - return flow, mask - -class IFNet(ModelMixin): - def __init__(self, ckpt_path="checkpoints/flownet.pkl"): - super(IFNet, self).__init__() - self.block0 = IFBlock(7+4, c=90) - self.block1 = IFBlock(7+4, c=90) - self.block2 = IFBlock(7+4, c=90) - self.block_tea = IFBlock(10+4, c=90) - if ckpt_path is not None: - self.load_state_dict(convert(torch.load(ckpt_path, map_location ='cpu'))) - - def inference(self, img0, img1, scale=1.0): - imgs = torch.cat((img0, img1), 1) - scale_list = [4/scale, 2/scale, 1/scale] - flow, mask, merged = self.forward(imgs, scale_list) - return merged[2] - - def forward(self, x, scale_list=[4, 2, 1], training=False): - if training == False: - channel = x.shape[1] // 2 - img0 = x[:, :channel] - img1 = x[:, channel:] - flow_list = [] - merged = [] - mask_list = [] - warped_img0 = img0 - warped_img1 = img1 - flow = (x[:, :4]).detach() * 0 - mask = (x[:, :1]).detach() * 0 - loss_cons = 0 - block = [self.block0, self.block1, self.block2] - for i in range(3): - f0, m0 = block[i](torch.cat((warped_img0[:, :3], warped_img1[:, :3], mask), 1), flow, scale=scale_list[i]) - f1, m1 = block[i](torch.cat((warped_img1[:, :3], warped_img0[:, :3], -mask), 1), torch.cat((flow[:, 2:4], flow[:, :2]), 1), scale=scale_list[i]) - flow = flow + (f0 + torch.cat((f1[:, 2:4], f1[:, :2]), 1)) / 2 - mask = mask + (m0 + (-m1)) / 2 - mask_list.append(mask) - flow_list.append(flow) - warped_img0 = warp(img0, flow[:, :2]) - warped_img1 = warp(img1, flow[:, 2:4]) - merged.append((warped_img0, warped_img1)) - ''' - c0 = self.contextnet(img0, flow[:, :2]) - c1 = self.contextnet(img1, flow[:, 2:4]) - tmp = self.unet(img0, img1, warped_img0, warped_img1, mask, flow, c0, c1) - res = tmp[:, 1:4] * 2 - 1 - ''' - for i in range(3): - mask_list[i] = torch.sigmoid(mask_list[i]) - merged[i] = merged[i][0] * mask_list[i] + merged[i][1] * (1 - mask_list[i]) - # merged[i] = torch.clamp(merged[i] + res, 0, 1) - return flow_list, mask_list[2], merged diff --git a/spaces/fffiloni/coqui-bark-voice-cloning-docker/examples/blank.md b/spaces/fffiloni/coqui-bark-voice-cloning-docker/examples/blank.md deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/fffiloni/lama-video-watermark-remover/bin/calc_dataset_stats.py b/spaces/fffiloni/lama-video-watermark-remover/bin/calc_dataset_stats.py deleted file mode 100644 index 5086fea1bab691892f2e52e3c59e5ef048bcfac0..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/bin/calc_dataset_stats.py +++ /dev/null @@ -1,88 +0,0 @@ -#!/usr/bin/env python3 - -import os - -import numpy as np -import tqdm -from scipy.ndimage.morphology import distance_transform_edt - -from saicinpainting.evaluation.data import InpaintingDataset -from saicinpainting.evaluation.vis import save_item_for_vis - - -def main(args): - dataset = InpaintingDataset(args.datadir, img_suffix='.png') - - area_bins = np.linspace(0, 1, args.area_bins + 1) - - heights = [] - widths = [] - image_areas = [] - 
hole_areas = [] - hole_area_percents = [] - known_pixel_distances = [] - - area_bins_count = np.zeros(args.area_bins) - area_bin_titles = [f'{area_bins[i] * 100:.0f}-{area_bins[i + 1] * 100:.0f}' for i in range(args.area_bins)] - - bin2i = [[] for _ in range(args.area_bins)] - - for i, item in enumerate(tqdm.tqdm(dataset)): - h, w = item['image'].shape[1:] - heights.append(h) - widths.append(w) - full_area = h * w - image_areas.append(full_area) - bin_mask = item['mask'] > 0.5 - hole_area = bin_mask.sum() - hole_areas.append(hole_area) - hole_percent = hole_area / full_area - hole_area_percents.append(hole_percent) - bin_i = np.clip(np.searchsorted(area_bins, hole_percent) - 1, 0, len(area_bins_count) - 1) - area_bins_count[bin_i] += 1 - bin2i[bin_i].append(i) - - cur_dist = distance_transform_edt(bin_mask) - cur_dist_inside_mask = cur_dist[bin_mask] - known_pixel_distances.append(cur_dist_inside_mask.mean()) - - os.makedirs(args.outdir, exist_ok=True) - with open(os.path.join(args.outdir, 'summary.txt'), 'w') as f: - f.write(f'''Location: {args.datadir} - -Number of samples: {len(dataset)} - -Image height: min {min(heights):5d} max {max(heights):5d} mean {np.mean(heights):.2f} -Image width: min {min(widths):5d} max {max(widths):5d} mean {np.mean(widths):.2f} -Image area: min {min(image_areas):7d} max {max(image_areas):7d} mean {np.mean(image_areas):.2f} -Hole area: min {min(hole_areas):7d} max {max(hole_areas):7d} mean {np.mean(hole_areas):.2f} -Hole area %: min {min(hole_area_percents) * 100:2.2f} max {max(hole_area_percents) * 100:2.2f} mean {np.mean(hole_area_percents) * 100:2.2f} -Dist 2known: min {min(known_pixel_distances):2.2f} max {max(known_pixel_distances):2.2f} mean {np.mean(known_pixel_distances):2.2f} median {np.median(known_pixel_distances):2.2f} - -Stats by hole area %: -''') - for bin_i in range(args.area_bins): - f.write(f'{area_bin_titles[bin_i]}%: ' - f'samples number {area_bins_count[bin_i]}, ' - f'{area_bins_count[bin_i] / len(dataset) * 100:.1f}%\n') - - for bin_i in range(args.area_bins): - bindir = os.path.join(args.outdir, 'samples', area_bin_titles[bin_i]) - os.makedirs(bindir, exist_ok=True) - bin_idx = bin2i[bin_i] - for sample_i in np.random.choice(bin_idx, size=min(len(bin_idx), args.samples_n), replace=False): - save_item_for_vis(dataset[sample_i], os.path.join(bindir, f'{sample_i}.png')) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('datadir', type=str, - help='Path to folder with images and masks (output of gen_mask_dataset.py)') - aparser.add_argument('outdir', type=str, help='Where to put results') - aparser.add_argument('--samples-n', type=int, default=10, - help='Number of sample images with masks to copy for visualization for each area bin') - aparser.add_argument('--area-bins', type=int, default=10, help='How many area bins to have') - - main(aparser.parse_args()) diff --git a/spaces/fffiloni/lama-video-watermark-remover/bin/sample_from_dataset.py b/spaces/fffiloni/lama-video-watermark-remover/bin/sample_from_dataset.py deleted file mode 100644 index 31593b3212454dd0b6f74a39195a34b489df20a1..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/bin/sample_from_dataset.py +++ /dev/null @@ -1,87 +0,0 @@ -#!/usr/bin/env python3 - -import os - -import numpy as np -import tqdm -from skimage import io -from skimage.segmentation import mark_boundaries - -from saicinpainting.evaluation.data import InpaintingDataset -from saicinpainting.evaluation.vis 
import save_item_for_vis - -def save_mask_for_sidebyside(item, out_file): - mask = item['mask']# > 0.5 - if mask.ndim == 3: - mask = mask[0] - mask = np.clip(mask * 255, 0, 255).astype('uint8') - io.imsave(out_file, mask) - -def save_img_for_sidebyside(item, out_file): - img = np.transpose(item['image'], (1, 2, 0)) - img = np.clip(img * 255, 0, 255).astype('uint8') - io.imsave(out_file, img) - -def save_masked_img_for_sidebyside(item, out_file): - mask = item['mask'] - img = item['image'] - - img = (1-mask) * img + mask - img = np.transpose(img, (1, 2, 0)) - - img = np.clip(img * 255, 0, 255).astype('uint8') - io.imsave(out_file, img) - -def main(args): - dataset = InpaintingDataset(args.datadir, img_suffix='.png') - - area_bins = np.linspace(0, 1, args.area_bins + 1) - - heights = [] - widths = [] - image_areas = [] - hole_areas = [] - hole_area_percents = [] - area_bins_count = np.zeros(args.area_bins) - area_bin_titles = [f'{area_bins[i] * 100:.0f}-{area_bins[i + 1] * 100:.0f}' for i in range(args.area_bins)] - - bin2i = [[] for _ in range(args.area_bins)] - - for i, item in enumerate(tqdm.tqdm(dataset)): - h, w = item['image'].shape[1:] - heights.append(h) - widths.append(w) - full_area = h * w - image_areas.append(full_area) - hole_area = (item['mask'] == 1).sum() - hole_areas.append(hole_area) - hole_percent = hole_area / full_area - hole_area_percents.append(hole_percent) - bin_i = np.clip(np.searchsorted(area_bins, hole_percent) - 1, 0, len(area_bins_count) - 1) - area_bins_count[bin_i] += 1 - bin2i[bin_i].append(i) - - os.makedirs(args.outdir, exist_ok=True) - - for bin_i in range(args.area_bins): - bindir = os.path.join(args.outdir, area_bin_titles[bin_i]) - os.makedirs(bindir, exist_ok=True) - bin_idx = bin2i[bin_i] - for sample_i in np.random.choice(bin_idx, size=min(len(bin_idx), args.samples_n), replace=False): - item = dataset[sample_i] - path = os.path.join(bindir, dataset.img_filenames[sample_i].split('/')[-1]) - save_masked_img_for_sidebyside(item, path) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('--datadir', type=str, - help='Path to folder with images and masks (output of gen_mask_dataset.py)') - aparser.add_argument('--outdir', type=str, help='Where to put results') - aparser.add_argument('--samples-n', type=int, default=10, - help='Number of sample images with masks to copy for visualization for each area bin') - aparser.add_argument('--area-bins', type=int, default=10, help='How many area bins to have') - - main(aparser.parse_args()) diff --git a/spaces/finlaymacklon/smooth_slate/README.md b/spaces/finlaymacklon/smooth_slate/README.md deleted file mode 100644 index 70b17a22abc23b75528b239421ad836e6fa50d05..0000000000000000000000000000000000000000 --- a/spaces/finlaymacklon/smooth_slate/README.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -tags: -- gradio-theme -- track-1 -title: smooth_slate -colorFrom: slate -colorTo: slate -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false -license: apache-2.0 -emoji: 👁 ---- -# smooth_slate -## Description -Add a description of this theme here! -## Contributions -Thanks to [@finlaymacklon](https://huggingface.co/finlaymacklon) for adding this gradio theme! 
\ No newline at end of file diff --git a/spaces/firsk/ai_otto/mel_processing.py b/spaces/firsk/ai_otto/mel_processing.py deleted file mode 100644 index aab5bd926a194610b7ce3da29c553bd877341aa4..0000000000000000000000000000000000000000 --- a/spaces/firsk/ai_otto/mel_processing.py +++ /dev/null @@ -1,139 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.0: - print("min value is ", torch.min(y)) - if torch.max(y) > 1.0: - print("max value is ", torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + "_" + str(y.device) - wnsize_dtype_device = str(win_size) + "_" + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to( - dtype=y.dtype, device=y.device - ) - - y = torch.nn.functional.pad( - y.unsqueeze(1), - (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)), - mode="reflect", - ) - y = y.squeeze(1) - - spec = torch.stft( - y, - n_fft, - hop_length=hop_size, - win_length=win_size, - window=hann_window[wnsize_dtype_device], - center=center, - pad_mode="reflect", - normalized=False, - onesided=True, - return_complex=False, - ) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + "_" + str(spec.device) - fmax_dtype_device = str(fmax) + "_" + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to( - dtype=spec.dtype, device=spec.device - ) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch( - y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False -): - if torch.min(y) < -1.0: - print("min value is ", torch.min(y)) - if torch.max(y) > 1.0: - print("max value is ", torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + "_" + str(y.device) - fmax_dtype_device = str(fmax) + "_" + dtype_device - wnsize_dtype_device = str(win_size) + "_" + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to( - dtype=y.dtype, device=y.device - ) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to( - dtype=y.dtype, device=y.device - ) - - y = torch.nn.functional.pad( - y.unsqueeze(1), - (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)), - mode="reflect", - ) - y = y.squeeze(1) - - spec = torch.stft( - y, - n_fft, - hop_length=hop_size, - win_length=win_size, - 
window=hann_window[wnsize_dtype_device], - center=center, - pad_mode="reflect", - normalized=False, - onesided=True, - return_complex=False, - ) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/fkhuggingme/gpt-academic/docs/README_FR.md b/spaces/fkhuggingme/gpt-academic/docs/README_FR.md deleted file mode 100644 index f21e90035ef2ddea91382155e0ad46b6740f5322..0000000000000000000000000000000000000000 --- a/spaces/fkhuggingme/gpt-academic/docs/README_FR.md +++ /dev/null @@ -1,296 +0,0 @@ -> **Note** -> -> Ce fichier README est généré automatiquement par le plugin de traduction markdown de ce projet et n'est peut - être pas correct à 100%. -> - -# ChatGPT Optimisation Académique - -**Si vous aimez ce projet, donnez-lui une étoile; si vous avez inventé des raccourcis académiques plus utiles ou des plugins fonctionnels, n'hésitez pas à ouvrir une demande ou une demande de traction. Nous avons également un fichier README en [anglais|](docs/README_EN.md)[japonais|](docs/README_JP.md)[russe|](docs/README_RS.md)[français](docs/README_FR.md) traduit par ce projet lui-même.** - -> **Note** -> -> 1. Veuillez noter que seuls les plugins de fonction signalés en **rouge** sont capables de lire les fichiers, certains plugins se trouvent dans le **menu déroulant** de la section plugin. Nous sommes également les bienvenus avec la plus haute priorité pour traiter et accepter tout nouveau PR de plugin! -> -> 2. Chaque fichier dans ce projet est expliqué en détail dans l'auto-analyse [self_analysis.md](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). Avec l'itération des versions, vous pouvez également cliquer sur les plugins fonctionnels pertinents pour appeler GPT et générer un rapport d'auto-analyse projet mis à jour. Les questions fréquemment posées sont résumées dans le [wiki](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). -> - -
    - -Fonctionnalité | Description ---- | --- -Polissage en un clic | Prend en charge la correction en un clic et la recherche d'erreurs de syntaxe dans les documents de recherche. -Traduction Chinois-Anglais en un clic | Une touche pour traduire la partie chinoise en anglais ou celle anglaise en chinois. -Explication de code en un clic | Affiche et explique correctement le code. -[Raccourcis clavier personnalisables](https://www.bilibili.com/video/BV14s4y1E7jN) | Prend en charge les raccourcis clavier personnalisables. -[Configuration du serveur proxy](https://www.bilibili.com/video/BV1rc411W7Dr) | Prend en charge la configuration du serveur proxy. -Conception modulaire | Prend en charge la personnalisation des plugins de fonctions et des [plugins] de fonctions hiérarchiques personnalisés, et les plugins prennent en charge [la mise à jour à chaud](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). -[Auto-analyse du programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugins] [Lire en un clic](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) le code source de ce projet. -[Analyse de programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugins] En un clic, les projets Python/C/C++/Java/Lua/... peuvent être analysés. -Lire le document de recherche | [Plugins] Lisez le résumé de l'article en latex et générer un résumé. -Traduction et polissage de l'article complet en LaTeX | [Plugins] Une touche pour traduire ou corriger en LaTeX -Génération Commentaire de fonction en vrac | [Plugins] Lisez en un clic les fonctions et générez des commentaires de fonction. -Rapport d'analyse automatique des chats générés | [Plugins] Génère un rapport de synthèse après l'exécution. -[Assistant arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugins] Entrez l'url de l'article arxiv pour traduire le résumé + télécharger le PDF en un clic -[Traduction complète des articles PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugins] Extraire le titre et le résumé de l'article PDF + Traduire le texte entier (multithread) -[Aide à la recherche Google Academ](https://www.bilibili.com/video/BV19L411U7ia) | [Plugins] Donnez à GPT l'URL de n'importe quelle page de recherche Google Academ pour vous aider à sélectionner des articles intéressants -Affichage de formules/images/tableaux | Afficher la forme traduite et rendue d'une formule en même temps, plusieurs formules et surlignage du code prend en charge -Prise en charge des plugins multithread | Prise en charge de l'appel multithread de chatgpt, traitement en masse de texte ou de programmes en un clic -Activer le thème Gradio sombre [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) au démarrage | Ajoutez ```/?__dark-theme=true``` à l'URL du navigateur pour basculer vers le thème sombre -[Prise en charge de plusieurs modèles LLM](https://www.bilibili.com/video/BV1wT411p7yf), [prise en charge de l'interface API2D](https://api2d.com/) | Comment cela serait-il de se faire servir par GPT3.5, GPT4 et la [ChatGLM de Tsinghua](https://github.com/THUDM/ChatGLM-6B) en même temps? -Expérience en ligne d'huggingface sans science | Après vous être connecté à huggingface, copiez [cet espace](https://huggingface.co/spaces/qingxu98/gpt-academic) -... | ... - -
    - - -Vous êtes un traducteur professionnel d'articles universitaires en français. - -Ceci est un fichier Markdown, veuillez le traduire en français sans modifier les commandes Markdown existantes : - -- Nouvelle interface (modifiable en modifiant l'option de mise en page dans config.py pour basculer entre les mises en page gauche-droite et haut-bas) -
    - -
    - - -- Tous les boutons sont générés dynamiquement en lisant functional.py, les utilisateurs peuvent ajouter librement des fonctions personnalisées pour libérer le presse-papiers. -
    - -
    - -- Correction/amélioration -
    - -
    - -- Si la sortie contient des formules, elles seront affichées simultanément sous forme de de texte brut et de forme rendue pour faciliter la copie et la lecture. -
    - -
    - -- Pas envie de lire le code du projet ? Faites votre propre démo avec ChatGPT. -
    - -
    - -- Utilisation combinée de plusieurs modèles de langage sophistiqués (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4) -
    - -
    - -Utilisation combinée de plusieurs modèles de langage sophistiqués en version de test [huggingface](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (la version huggingface ne prend pas en charge Chatglm). - - ---- - -## Installation - Méthode 1 : Exécution directe (Windows, Linux or MacOS) - -1. Téléchargez le projet -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. Configuration de l'API_KEY et des paramètres de proxy - -Dans `config.py`, configurez les paramètres de proxy et de clé d'API OpenAI, comme indiqué ci-dessous -``` -1. Si vous êtes en Chine, vous devez configurer un proxy étranger pour utiliser l'API OpenAI en toute transparence. Pour ce faire, veuillez lire attentivement le fichier config.py (1. Modifiez l'option USE_PROXY ; 2. Modifiez les paramètres de proxies comme indiqué dans les instructions). -2. Configurez votre clé API OpenAI. Vous devez vous inscrire sur le site web d'OpenAI pour obtenir une clé API. Une fois que vous avez votre clé API, vous pouvez la configurer dans le fichier config.py. -3. Tous les problèmes liés aux réseaux de proxy (temps d'attente, non-fonctionnement des proxies) sont résumés dans https://github.com/binary-husky/chatgpt_academic/issues/1. -``` -(Remarque : le programme vérifie d'abord s'il existe un fichier de configuration privé nommé `config_private.py`, et utilise les configurations de celui-ci à la place de celles du fichier `config.py`. Par conséquent, si vous comprenez notre logique de lecture de configuration, nous vous recommandons fortement de créer un nouveau fichier de configuration nommé `config_private.py` à côté de `config.py` et de transférer (copier) les configurations de celui-ci dans `config_private.py`. `config_private.py` n'est pas contrôlé par git et rend vos informations personnelles plus sûres.) - -3. Installation des dépendances -```sh -# (Option 1) Recommandé -python -m pip install -r requirements.txt - -# (Option 2) Si vous utilisez anaconda, les étapes sont similaires : -# (Option 2.1) conda create -n gptac_venv python=3.11 -# (Option 2.2) conda activate gptac_venv -# (Option 2.3) python -m pip install -r requirements.txt - -# note : Utilisez la source pip officielle ou la source pip Alibaba. D'autres sources (comme celles des universités) pourraient poser problème. Pour utiliser temporairement une autre source, utilisez : -# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -``` - -Si vous avez besoin de soutenir ChatGLM de Tsinghua, vous devez installer plus de dépendances (si vous n'êtes pas familier avec Python ou que votre ordinateur n'est pas assez performant, nous vous recommandons de ne pas essayer) : -```sh -python -m pip install -r request_llm/requirements_chatglm.txt -``` - -4. Exécution -```sh -python main.py -``` - -5. Tester les plugins de fonctions -``` -- Test Python Project Analysis - Dans la zone de saisie, entrez `./crazy_functions/test_project/python/dqn`, puis cliquez sur "Parse Entire Python Project" -- Test d'auto-lecture du code - Cliquez sur "[Démo multi-thread] Parser ce projet lui-même (auto-traduction de la source)" -- Test du modèle de fonctionnalité expérimentale (exige une réponse de l'IA à ce qui est arrivé aujourd'hui dans l'histoire). Vous pouvez utiliser cette fonctionnalité comme modèle pour des fonctions plus complexes. 
- Cliquez sur "[Démo modèle de plugin de fonction] Histoire du Jour" -- Le menu déroulant de la zone de plugin de fonctionnalité contient plus de fonctionnalités à sélectionner. -``` - -## Installation - Méthode 2 : Utilisation de docker (Linux) - - -Vous êtes un traducteur professionnel d'articles académiques en français. - -1. ChatGPT seul (recommandé pour la plupart des gens) -``` sh -# Télécharger le projet -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -# Configurer le proxy outre-mer et la clé API OpenAI -Modifier le fichier config.py avec n'importe quel éditeur de texte -# Installer -docker build -t gpt-academic . -# Exécuter -docker run --rm -it --net=host gpt-academic - -# Tester les modules de fonction -## Tester la fonction modèle des modules (requiert la réponse de GPT à "qu'est-ce qui s'est passé dans l'histoire aujourd'hui ?"), vous pouvez utiliser cette fonction en tant que modèle pour implémenter des fonctions plus complexes. -Cliquez sur "[Exemple de modèle de module] Histoire d'aujourd'hui" -## Tester le résumé écrit pour le projet LaTeX -Dans la zone de saisie, tapez ./crazy_functions/test_project/latex/attention, puis cliquez sur "Lire le résumé de l'article de recherche LaTeX" -## Tester l'analyse du projet Python -Dans la zone de saisie, tapez ./crazy_functions/test_project/python/dqn, puis cliquez sur "Analyser l'ensemble du projet Python" - -D'autres fonctions sont disponibles dans la liste déroulante des modules de fonction. -``` - -2. ChatGPT+ChatGLM (nécessite une grande connaissance de docker et une configuration informatique suffisamment puissante) -``` sh -# Modifier le dockerfile -cd docs && nano Dockerfile+ChatGLM -# Comment construire | 如何构建 (Dockerfile+ChatGLM在docs路径下,请先cd docs) -docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM . -# Comment exécuter | 如何运行 (1) Directement exécuter : -docker run --rm -it --net=host --gpus=all gpt-academic -# Comment exécuter | 如何运行 (2) Je veux effectuer quelques ajustements dans le conteneur avant de lancer : -docker run --rm -it --net=host --gpus=all gpt-academic bash -``` - -## Installation - Méthode 3 : Autres méthodes de déploiement - -1. Déploiement sur un cloud serveur distant -Veuillez consulter le [wiki de déploiement-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -2. Utilisation de WSL2 (Windows Subsystem for Linux) -Veuillez consulter le [wiki de déploiement-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - - -## Configuration de la procuration de l'installation -### Méthode 1 : Méthode conventionnelle -[Configuration de la procuration](https://github.com/binary-husky/chatgpt_academic/issues/1) - -### Méthode 2 : Tutoriel pour débutant pur -[Tutoriel pour débutant pur](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89) - - ---- - -## Personnalisation des nouveaux boutons pratiques (personnalisation des raccourcis académiques) -Ouvrez le fichier `core_functional.py` avec n'importe quel éditeur de texte, ajoutez les éléments suivants, puis redémarrez le programme. 
(Si le bouton a déjà été ajouté avec succès et est visible, le préfixe et le suffixe pris en charge peuvent être modifiés à chaud sans avoir besoin de redémarrer le programme.) -Par exemple: -``` -"Traduction Français-Chinois": { - # Préfixe, qui sera ajouté avant votre saisie. Par exemple, pour décrire votre demande, telle que la traduction, le débogage de code, l'amélioration, etc. - "Prefix": "Veuillez traduire le contenu ci-dessous en chinois, puis expliquer chaque terme propre mentionné dans un tableau Markdown :\n\n", - - # Suffixe, qui sera ajouté après votre saisie. Par exemple, en combinaison avec un préfixe, vous pouvez mettre le contenu de votre saisie entre guillemets. - "Suffix": "", -}, -``` - -
-
-
----
-
-## Overview of some features
-
-### Image display:
-
-### If a program can understand and decompose itself:
-
-### Analysis of any arbitrary Python/C++ project:
-
-### Automatic reading and abstract generation for LaTeX papers
-
-### Automatic report generation
-
-### Modular feature design
-
-### Source code translation into English
-
-
-## To-do and version roadmap:
-- version 3.2+ (to do): Support for more function plugin interface parameters
-- version 3.1: Support for querying several GPT models at the same time! Support for api2d, support for load balancing across multiple API keys
-- version 3.0: Support for chatglm and other small LLMs
-- version 2.6: Reworked the plugin structure, improved interactivity, added more plugins
-- version 2.5: Self-updating; fixed the token-overflow / over-long-text problem when summarizing a full source-code project
-- version 2.4: (1) Added full-document PDF translation; (2) Added the ability to switch the position of the input area; (3) Added a vertical layout option; (4) Optimized multi-threaded function plugins.
-- version 2.3: Improved multi-threaded interactivity
-- version 2.2: Support for hot-reloading function plugins
-- version 2.1: Collapsible layout
-- version 2.0: Introduced modular function plugins
-- version 1.0: Basic functionality
-
-## References and learning
-
-```
-The code draws on the designs of many other excellent projects, notably:
-
-# Project 1: Many tricks were borrowed from ChuanhuChatGPT
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Project 2: Tsinghua's ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-```
-
diff --git a/spaces/flax-community/multilingual-image-captioning/sections/pretraining/dataset.md b/spaces/flax-community/multilingual-image-captioning/sections/pretraining/dataset.md
deleted file mode 100644
index 8ad94d4861cfd115d83aa97c32945c13e801c3ca..0000000000000000000000000000000000000000
--- a/spaces/flax-community/multilingual-image-captioning/sections/pretraining/dataset.md
+++ /dev/null
@@ -1 +0,0 @@
-The dataset we use for pre-training is a cleaned version of Conceptual 12M. The dataset is downloaded and then broken images are removed, which gives us about 10M images. To save time, we use 2.5M of these image-text pairs. Then we use the MarianMT `Helsinki-NLP/opus-mt-{src}-{tgt}` checkpoints to translate the dataset into four different languages - English, French, German, and Spanish, keeping approximately 2.5M examples of each language. 
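For readers who want to see what that translation pass involves, here is a minimal sketch using the Hugging Face `transformers` MarianMT API (illustrative only: the English-to-German checkpoint choice, the toy caption list, and the batching are assumptions for the example, not the project's actual pipeline code):
```python
# Minimal sketch: translate a handful of English captions with one MarianMT checkpoint.
# The real pipeline would stream millions of Conceptual-12M captions and repeat this
# for each target language (fr, de, es).
from transformers import MarianMTModel, MarianTokenizer

src, tgt = "en", "de"  # English -> German as an example
model_name = f"Helsinki-NLP/opus-mt-{src}-{tgt}"

tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

captions = [
    "a dog running on the beach",
    "two people riding bicycles at sunset",
]
batch = tokenizer(captions, return_tensors="pt", padding=True, truncation=True)
generated = model.generate(**batch, max_length=64)
translated = tokenizer.batch_decode(generated, skip_special_tokens=True)
print(translated)  # German captions corresponding to the English inputs
```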
\ No newline at end of file diff --git a/spaces/fmind/resume/app.py b/spaces/fmind/resume/app.py deleted file mode 100644 index 84752b88a9a77313eb7285e5bd22e0c4310d3a75..0000000000000000000000000000000000000000 --- a/spaces/fmind/resume/app.py +++ /dev/null @@ -1,120 +0,0 @@ -"""Answer questions about my resume.""" - -# %% IMPORTS - -import logging - -import gradio as gr - -import lib - -# %% LOGGING - -logging.basicConfig( - level=logging.INFO, - format="[%(asctime)s][%(levelname)s] %(message)s", -) - -# %% CONFIGS - -# %% - Frontend - -THEME = "soft" -TITLE = "Fmind Chatbot" -EXAMPLES = [ - "Who is Médéric Hurier (Fmind)?", - "Is Fmind open to new opportunities?", - "Can you share details about Médéric PhD?", - "Elaborate on Médéric current work position", - "Describe his proficiency with Python programming", - "What is the answer to life, the universe, and everything?", -] - -# %% - Backend - -MODEL = lib.get_language_model() -CLIENT = lib.get_database_client(path=lib.DATABASE_PATH) -ENCODING = lib.get_encoding_function() -EMBEDDING = lib.get_embedding_function() -COLLECTION = CLIENT.get_collection( - name=lib.DATABASE_COLLECTION, - embedding_function=EMBEDDING, -) - -# %% - Answer - -PROMPT_CONTEXT = """ -You are Fmind Chatbot, specialized in providing information regarding Médéric Hurier's (known as Fmind) professional background. -Médéric is an MLOps engineer based in Luxembourg. He is currently working at Decathlon. His calendar is booked until the conclusion of 2024. -Your responses should be succinct and maintain a professional tone. If inquiries deviate from Médéric's professional sphere, courteously decline to engage. - -You may find more information about Médéric below (markdown format): -""" -PROMPT_MAX_TOKENS = lib.MODEL_INPUT_LIMIT -QUERY_MAX_DISTANCE = 0.4 -QUERY_N_RESULTS = 20 - -# %% FUNCTIONS - - -def answer(message: str, history: list[str]) -> str: - """Answer questions about my resume.""" - # counters - n_tokens = 0 - # messages - messages = [] - # - context - n_tokens += len(ENCODING(PROMPT_CONTEXT)) - messages += [{"role": "system", "content": PROMPT_CONTEXT}] - # - history - for user_content, assistant_content in history: - n_tokens += len(ENCODING(user_content)) - n_tokens += len(ENCODING(assistant_content)) - messages += [{"role": "user", "content": user_content}] - messages += [{"role": "assistant", "content": assistant_content}] - # - message - n_tokens += len(ENCODING(message)) - messages += [{"role": "user", "content": message}] - # database - results = COLLECTION.query(query_texts=message, n_results=QUERY_N_RESULTS) - logging.info("Results: %s", results) - distances = results["distances"][0] - documents = results["documents"][0] - for distance, document in zip(distances, documents): - # - distance - logging.debug("Doc distance: %f", distance) - if distance > QUERY_MAX_DISTANCE: - break - # - document - n_document_tokens = len(ENCODING(document)) - logging.debug("Doc tokens: %f", n_document_tokens) - if (n_tokens + n_document_tokens) >= PROMPT_MAX_TOKENS: - break - n_tokens += n_document_tokens - messages[0]["content"] += document - # response - logging.info("Tokens: %d", n_tokens) - logging.info("Messages: %s", messages) - api_response = MODEL(messages=messages) - logging.info("Response: %s", api_response.to_dict_recursive()) - # content - content = api_response["choices"][0]["message"]["content"] - # return - return content - - -# %% INTERFACES - -interface = gr.ChatInterface( - fn=answer, - theme=THEME, - title=TITLE, - examples=EXAMPLES, - clear_btn=None, 
- retry_btn=None, - undo_btn=None, -) - - -if __name__ == "__main__": - interface.queue(concurrency_count=20).launch() diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/openpose/util.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/openpose/util.py deleted file mode 100644 index 6f91ae0e65abaf0cbd62d803f56498991141e61b..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/openpose/util.py +++ /dev/null @@ -1,164 +0,0 @@ -import math -import numpy as np -import matplotlib -import cv2 - - -def padRightDownCorner(img, stride, padValue): - h = img.shape[0] - w = img.shape[1] - - pad = 4 * [None] - pad[0] = 0 # up - pad[1] = 0 # left - pad[2] = 0 if (h % stride == 0) else stride - (h % stride) # down - pad[3] = 0 if (w % stride == 0) else stride - (w % stride) # right - - img_padded = img - pad_up = np.tile(img_padded[0:1, :, :]*0 + padValue, (pad[0], 1, 1)) - img_padded = np.concatenate((pad_up, img_padded), axis=0) - pad_left = np.tile(img_padded[:, 0:1, :]*0 + padValue, (1, pad[1], 1)) - img_padded = np.concatenate((pad_left, img_padded), axis=1) - pad_down = np.tile(img_padded[-2:-1, :, :]*0 + padValue, (pad[2], 1, 1)) - img_padded = np.concatenate((img_padded, pad_down), axis=0) - pad_right = np.tile(img_padded[:, -2:-1, :]*0 + padValue, (1, pad[3], 1)) - img_padded = np.concatenate((img_padded, pad_right), axis=1) - - return img_padded, pad - -# transfer caffe model to pytorch which will match the layer name -def transfer(model, model_weights): - transfered_model_weights = {} - for weights_name in model.state_dict().keys(): - transfered_model_weights[weights_name] = model_weights['.'.join(weights_name.split('.')[1:])] - return transfered_model_weights - -# draw the body keypoint and lims -def draw_bodypose(canvas, candidate, subset): - stickwidth = 4 - limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \ - [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \ - [1, 16], [16, 18], [3, 17], [6, 18]] - - colors = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 255, 0], [170, 255, 0], [85, 255, 0], [0, 255, 0], \ - [0, 255, 85], [0, 255, 170], [0, 255, 255], [0, 170, 255], [0, 85, 255], [0, 0, 255], [85, 0, 255], \ - [170, 0, 255], [255, 0, 255], [255, 0, 170], [255, 0, 85]] - for i in range(18): - for n in range(len(subset)): - index = int(subset[n][i]) - if index == -1: - continue - x, y = candidate[index][0:2] - cv2.circle(canvas, (int(x), int(y)), 4, colors[i], thickness=-1) - for i in range(17): - for n in range(len(subset)): - index = subset[n][np.array(limbSeq[i]) - 1] - if -1 in index: - continue - cur_canvas = canvas.copy() - Y = candidate[index.astype(int), 0] - X = candidate[index.astype(int), 1] - mX = np.mean(X) - mY = np.mean(Y) - length = ((X[0] - X[1]) ** 2 + (Y[0] - Y[1]) ** 2) ** 0.5 - angle = math.degrees(math.atan2(X[0] - X[1], Y[0] - Y[1])) - polygon = cv2.ellipse2Poly((int(mY), int(mX)), (int(length / 2), stickwidth), int(angle), 0, 360, 1) - cv2.fillConvexPoly(cur_canvas, polygon, colors[i]) - canvas = cv2.addWeighted(canvas, 0.4, cur_canvas, 0.6, 0) - # plt.imsave("preview.jpg", canvas[:, :, [2, 1, 0]]) - # plt.imshow(canvas[:, :, [2, 1, 0]]) - return canvas - - -# image drawed by opencv is not good. 
-def draw_handpose(canvas, all_hand_peaks, show_number=False): - edges = [[0, 1], [1, 2], [2, 3], [3, 4], [0, 5], [5, 6], [6, 7], [7, 8], [0, 9], [9, 10], \ - [10, 11], [11, 12], [0, 13], [13, 14], [14, 15], [15, 16], [0, 17], [17, 18], [18, 19], [19, 20]] - - for peaks in all_hand_peaks: - for ie, e in enumerate(edges): - if np.sum(np.all(peaks[e], axis=1)==0)==0: - x1, y1 = peaks[e[0]] - x2, y2 = peaks[e[1]] - cv2.line(canvas, (x1, y1), (x2, y2), matplotlib.colors.hsv_to_rgb([ie/float(len(edges)), 1.0, 1.0])*255, thickness=2) - - for i, keyponit in enumerate(peaks): - x, y = keyponit - cv2.circle(canvas, (x, y), 4, (0, 0, 255), thickness=-1) - if show_number: - cv2.putText(canvas, str(i), (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.3, (0, 0, 0), lineType=cv2.LINE_AA) - return canvas - -# detect hand according to body pose keypoints -# please refer to https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/src/openpose/hand/handDetector.cpp -def handDetect(candidate, subset, oriImg): - # right hand: wrist 4, elbow 3, shoulder 2 - # left hand: wrist 7, elbow 6, shoulder 5 - ratioWristElbow = 0.33 - detect_result = [] - image_height, image_width = oriImg.shape[0:2] - for person in subset.astype(int): - # if any of three not detected - has_left = np.sum(person[[5, 6, 7]] == -1) == 0 - has_right = np.sum(person[[2, 3, 4]] == -1) == 0 - if not (has_left or has_right): - continue - hands = [] - #left hand - if has_left: - left_shoulder_index, left_elbow_index, left_wrist_index = person[[5, 6, 7]] - x1, y1 = candidate[left_shoulder_index][:2] - x2, y2 = candidate[left_elbow_index][:2] - x3, y3 = candidate[left_wrist_index][:2] - hands.append([x1, y1, x2, y2, x3, y3, True]) - # right hand - if has_right: - right_shoulder_index, right_elbow_index, right_wrist_index = person[[2, 3, 4]] - x1, y1 = candidate[right_shoulder_index][:2] - x2, y2 = candidate[right_elbow_index][:2] - x3, y3 = candidate[right_wrist_index][:2] - hands.append([x1, y1, x2, y2, x3, y3, False]) - - for x1, y1, x2, y2, x3, y3, is_left in hands: - # pos_hand = pos_wrist + ratio * (pos_wrist - pos_elbox) = (1 + ratio) * pos_wrist - ratio * pos_elbox - # handRectangle.x = posePtr[wrist*3] + ratioWristElbow * (posePtr[wrist*3] - posePtr[elbow*3]); - # handRectangle.y = posePtr[wrist*3+1] + ratioWristElbow * (posePtr[wrist*3+1] - posePtr[elbow*3+1]); - # const auto distanceWristElbow = getDistance(poseKeypoints, person, wrist, elbow); - # const auto distanceElbowShoulder = getDistance(poseKeypoints, person, elbow, shoulder); - # handRectangle.width = 1.5f * fastMax(distanceWristElbow, 0.9f * distanceElbowShoulder); - x = x3 + ratioWristElbow * (x3 - x2) - y = y3 + ratioWristElbow * (y3 - y2) - distanceWristElbow = math.sqrt((x3 - x2) ** 2 + (y3 - y2) ** 2) - distanceElbowShoulder = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2) - width = 1.5 * max(distanceWristElbow, 0.9 * distanceElbowShoulder) - # x-y refers to the center --> offset to topLeft point - # handRectangle.x -= handRectangle.width / 2.f; - # handRectangle.y -= handRectangle.height / 2.f; - x -= width / 2 - y -= width / 2 # width = height - # overflow the image - if x < 0: x = 0 - if y < 0: y = 0 - width1 = width - width2 = width - if x + width > image_width: width1 = image_width - x - if y + width > image_height: width2 = image_height - y - width = min(width1, width2) - # the max hand box value is 20 pixels - if width >= 20: - detect_result.append([int(x), int(y), int(width), is_left]) - - ''' - return value: [[x, y, w, True if left hand else False]]. 
- width=height since the network require squared input. - x, y is the coordinate of top left - ''' - return detect_result - -# get max index of 2d array -def npmax(array): - arrayindex = array.argmax(1) - arrayvalue = array.max(1) - i = arrayvalue.argmax() - j = arrayindex[i] - return i, j diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/nonlocal_r50-d8.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/nonlocal_r50-d8.py deleted file mode 100644 index 5674a39854cafd1f2e363bac99c58ccae62f24da..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/nonlocal_r50-d8.py +++ /dev/null @@ -1,46 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='NLHead', - in_channels=2048, - in_index=3, - channels=512, - dropout_ratio=0.1, - reduction=2, - use_scale=True, - mode='embedded_gaussian', - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/image/io.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/image/io.py deleted file mode 100644 index d3fa2e8cc06b1a7b0b69de6406980b15d61a1e5d..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/image/io.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import io -import os.path as osp -from pathlib import Path - -import cv2 -import numpy as np -from cv2 import (IMREAD_COLOR, IMREAD_GRAYSCALE, IMREAD_IGNORE_ORIENTATION, - IMREAD_UNCHANGED) - -from annotator.uniformer.mmcv.utils import check_file_exist, is_str, mkdir_or_exist - -try: - from turbojpeg import TJCS_RGB, TJPF_BGR, TJPF_GRAY, TurboJPEG -except ImportError: - TJCS_RGB = TJPF_GRAY = TJPF_BGR = TurboJPEG = None - -try: - from PIL import Image, ImageOps -except ImportError: - Image = None - -try: - import tifffile -except ImportError: - tifffile = None - -jpeg = None -supported_backends = ['cv2', 'turbojpeg', 'pillow', 'tifffile'] - -imread_flags = { - 'color': IMREAD_COLOR, - 'grayscale': IMREAD_GRAYSCALE, - 'unchanged': IMREAD_UNCHANGED, - 'color_ignore_orientation': IMREAD_IGNORE_ORIENTATION | IMREAD_COLOR, - 'grayscale_ignore_orientation': - IMREAD_IGNORE_ORIENTATION | IMREAD_GRAYSCALE -} - -imread_backend = 'cv2' - - -def use_backend(backend): - """Select a backend for image decoding. - - Args: - backend (str): The image decoding backend type. Options are `cv2`, - `pillow`, `turbojpeg` (see https://github.com/lilohuang/PyTurboJPEG) - and `tifffile`. `turbojpeg` is faster but it only supports `.jpeg` - file format. 
- """ - assert backend in supported_backends - global imread_backend - imread_backend = backend - if imread_backend == 'turbojpeg': - if TurboJPEG is None: - raise ImportError('`PyTurboJPEG` is not installed') - global jpeg - if jpeg is None: - jpeg = TurboJPEG() - elif imread_backend == 'pillow': - if Image is None: - raise ImportError('`Pillow` is not installed') - elif imread_backend == 'tifffile': - if tifffile is None: - raise ImportError('`tifffile` is not installed') - - -def _jpegflag(flag='color', channel_order='bgr'): - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'color': - if channel_order == 'bgr': - return TJPF_BGR - elif channel_order == 'rgb': - return TJCS_RGB - elif flag == 'grayscale': - return TJPF_GRAY - else: - raise ValueError('flag must be "color" or "grayscale"') - - -def _pillow2array(img, flag='color', channel_order='bgr'): - """Convert a pillow image to numpy array. - - Args: - img (:obj:`PIL.Image.Image`): The image loaded using PIL - flag (str): Flags specifying the color type of a loaded image, - candidates are 'color', 'grayscale' and 'unchanged'. - Default to 'color'. - channel_order (str): The channel order of the output image array, - candidates are 'bgr' and 'rgb'. Default to 'bgr'. - - Returns: - np.ndarray: The converted numpy array - """ - channel_order = channel_order.lower() - if channel_order not in ['rgb', 'bgr']: - raise ValueError('channel order must be either "rgb" or "bgr"') - - if flag == 'unchanged': - array = np.array(img) - if array.ndim >= 3 and array.shape[2] >= 3: # color image - array[:, :, :3] = array[:, :, (2, 1, 0)] # RGB to BGR - else: - # Handle exif orientation tag - if flag in ['color', 'grayscale']: - img = ImageOps.exif_transpose(img) - # If the image mode is not 'RGB', convert it to 'RGB' first. - if img.mode != 'RGB': - if img.mode != 'LA': - # Most formats except 'LA' can be directly converted to RGB - img = img.convert('RGB') - else: - # When the mode is 'LA', the default conversion will fill in - # the canvas with black, which sometimes shadows black objects - # in the foreground. - # - # Therefore, a random color (124, 117, 104) is used for canvas - img_rgba = img.convert('RGBA') - img = Image.new('RGB', img_rgba.size, (124, 117, 104)) - img.paste(img_rgba, mask=img_rgba.split()[3]) # 3 is alpha - if flag in ['color', 'color_ignore_orientation']: - array = np.array(img) - if channel_order != 'rgb': - array = array[:, :, ::-1] # RGB to BGR - elif flag in ['grayscale', 'grayscale_ignore_orientation']: - img = img.convert('L') - array = np.array(img) - else: - raise ValueError( - 'flag must be "color", "grayscale", "unchanged", ' - f'"color_ignore_orientation" or "grayscale_ignore_orientation"' - f' but got {flag}') - return array - - -def imread(img_or_path, flag='color', channel_order='bgr', backend=None): - """Read an image. - - Args: - img_or_path (ndarray or str or Path): Either a numpy array or str or - pathlib.Path. If it is a numpy array (loaded image), then - it will be returned as is. - flag (str): Flags specifying the color type of a loaded image, - candidates are `color`, `grayscale`, `unchanged`, - `color_ignore_orientation` and `grayscale_ignore_orientation`. - By default, `cv2` and `pillow` backend would rotate the image - according to its EXIF info unless called with `unchanged` or - `*_ignore_orientation` flags. 
`turbojpeg` and `tifffile` backend - always ignore image's EXIF info regardless of the flag. - The `turbojpeg` backend only supports `color` and `grayscale`. - channel_order (str): Order of channel, candidates are `bgr` and `rgb`. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `tifffile`, `None`. - If backend is None, the global imread_backend specified by - ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - ndarray: Loaded image array. - """ - - if backend is None: - backend = imread_backend - if backend not in supported_backends: - raise ValueError(f'backend: {backend} is not supported. Supported ' - "backends are 'cv2', 'turbojpeg', 'pillow'") - if isinstance(img_or_path, Path): - img_or_path = str(img_or_path) - - if isinstance(img_or_path, np.ndarray): - return img_or_path - elif is_str(img_or_path): - check_file_exist(img_or_path, - f'img file does not exist: {img_or_path}') - if backend == 'turbojpeg': - with open(img_or_path, 'rb') as in_file: - img = jpeg.decode(in_file.read(), - _jpegflag(flag, channel_order)) - if img.shape[-1] == 1: - img = img[:, :, 0] - return img - elif backend == 'pillow': - img = Image.open(img_or_path) - img = _pillow2array(img, flag, channel_order) - return img - elif backend == 'tifffile': - img = tifffile.imread(img_or_path) - return img - else: - flag = imread_flags[flag] if is_str(flag) else flag - img = cv2.imread(img_or_path, flag) - if flag == IMREAD_COLOR and channel_order == 'rgb': - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) - return img - else: - raise TypeError('"img" must be a numpy array or a str or ' - 'a pathlib.Path object') - - -def imfrombytes(content, flag='color', channel_order='bgr', backend=None): - """Read an image from bytes. - - Args: - content (bytes): Image bytes got from files or other streams. - flag (str): Same as :func:`imread`. - backend (str | None): The image decoding backend type. Options are - `cv2`, `pillow`, `turbojpeg`, `None`. If backend is None, the - global imread_backend specified by ``mmcv.use_backend()`` will be - used. Default: None. - - Returns: - ndarray: Loaded image array. - """ - - if backend is None: - backend = imread_backend - if backend not in supported_backends: - raise ValueError(f'backend: {backend} is not supported. Supported ' - "backends are 'cv2', 'turbojpeg', 'pillow'") - if backend == 'turbojpeg': - img = jpeg.decode(content, _jpegflag(flag, channel_order)) - if img.shape[-1] == 1: - img = img[:, :, 0] - return img - elif backend == 'pillow': - buff = io.BytesIO(content) - img = Image.open(buff) - img = _pillow2array(img, flag, channel_order) - return img - else: - img_np = np.frombuffer(content, np.uint8) - flag = imread_flags[flag] if is_str(flag) else flag - img = cv2.imdecode(img_np, flag) - if flag == IMREAD_COLOR and channel_order == 'rgb': - cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img) - return img - - -def imwrite(img, file_path, params=None, auto_mkdir=True): - """Write image to file. - - Args: - img (ndarray): Image array to be written. - file_path (str): Image file path. - params (None or list): Same as opencv :func:`imwrite` interface. - auto_mkdir (bool): If the parent folder of `file_path` does not exist, - whether to create it automatically. - - Returns: - bool: Successful or not. 
- """ - if auto_mkdir: - dir_name = osp.abspath(osp.dirname(file_path)) - mkdir_or_exist(dir_name) - return cv2.imwrite(file_path, img, params) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/cldm/ddim_hacked.py b/spaces/georgefen/Face-Landmark-ControlNet/cldm/ddim_hacked.py deleted file mode 100644 index 04798a68f80d09fb6ff1e28be44d8b1d20228967..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/cldm/ddim_hacked.py +++ /dev/null @@ -1,320 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, extract_into_tensor - -if torch.cuda.is_available(): - device = torch.device("cuda") -else: - device = torch.device("cpu") - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != device: - attr = attr.to(device) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. 
- ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - dynamic_threshold=None, - ucg_schedule=None, - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - ctmp = conditioning[list(conditioning.keys())[0]] - while isinstance(ctmp, list): ctmp = ctmp[0] - cbs = ctmp.shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - - elif isinstance(conditioning, list): - for ctmp in conditioning: - if ctmp.shape[0] != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold, - ucg_schedule=ucg_schedule - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, dynamic_threshold=None, - ucg_schedule=None): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = 
total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. - mask) * img - - if ucg_schedule is not None: - assert len(ucg_schedule) == len(time_range) - unconditional_guidance_scale = ucg_schedule[i] - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, - dynamic_threshold=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - model_output = self.model.apply_model(x, t, c) - else: - model_t = self.model.apply_model(x, t, c) - model_uncond = self.model.apply_model(x, t, unconditional_conditioning) - model_output = model_uncond + unconditional_guidance_scale * (model_t - model_uncond) - - if self.model.parameterization == "v": - e_t = self.model.predict_eps_from_z_and_v(x, t, model_output) - else: - e_t = model_output - - if score_corrector is not None: - assert self.model.parameterization == "eps", 'not implemented' - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - if self.model.parameterization != "v": - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - else: - pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output) - - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - - if dynamic_threshold is not None: - raise NotImplementedError() - - # direction pointing to x_t - dir_xt = (1. 
- a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def encode(self, x0, c, t_enc, use_original_steps=False, return_intermediates=None, - unconditional_guidance_scale=1.0, unconditional_conditioning=None, callback=None): - num_reference_steps = self.ddpm_num_timesteps if use_original_steps else self.ddim_timesteps.shape[0] - - assert t_enc <= num_reference_steps - num_steps = t_enc - - if use_original_steps: - alphas_next = self.alphas_cumprod[:num_steps] - alphas = self.alphas_cumprod_prev[:num_steps] - else: - alphas_next = self.ddim_alphas[:num_steps] - alphas = torch.tensor(self.ddim_alphas_prev[:num_steps]) - - x_next = x0 - intermediates = [] - inter_steps = [] - for i in tqdm(range(num_steps), desc='Encoding Image'): - t = torch.full((x0.shape[0],), i, device=self.model.device, dtype=torch.long) - if unconditional_guidance_scale == 1.: - noise_pred = self.model.apply_model(x_next, t, c) - else: - assert unconditional_conditioning is not None - e_t_uncond, noise_pred = torch.chunk( - self.model.apply_model(torch.cat((x_next, x_next)), torch.cat((t, t)), - torch.cat((unconditional_conditioning, c))), 2) - noise_pred = e_t_uncond + unconditional_guidance_scale * (noise_pred - e_t_uncond) - - xt_weighted = (alphas_next[i] / alphas[i]).sqrt() * x_next - weighted_noise_pred = alphas_next[i].sqrt() * ( - (1 / alphas_next[i] - 1).sqrt() - (1 / alphas[i] - 1).sqrt()) * noise_pred - x_next = xt_weighted + weighted_noise_pred - if return_intermediates and i % ( - num_steps // return_intermediates) == 0 and i < num_steps - 1: - intermediates.append(x_next) - inter_steps.append(i) - elif return_intermediates and i >= num_steps - 2: - intermediates.append(x_next) - inter_steps.append(i) - if callback: callback(i) - - out = {'x_encoded': x_next, 'intermediate_steps': inter_steps} - if return_intermediates: - out.update({'intermediates': intermediates}) - return x_next, out - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False, callback=None): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, 
ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - if callback: callback(i) - return x_dec \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Lex Doctor 9.1 Descargar Taringa !FREE!.md b/spaces/gotiQspiryo/whisper-ui/examples/Lex Doctor 9.1 Descargar Taringa !FREE!.md deleted file mode 100644 index 6ef503fdde7d2f3b248a00879c3aa9587056e61a..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Lex Doctor 9.1 Descargar Taringa !FREE!.md +++ /dev/null @@ -1,5 +0,0 @@ -
-
I have the LEX DOCTOR 9 and LEX DOCTOR 10 program + crack for sale. Please get in touch via the Facebook group "Abogados del Lex-Doctor"... THE ONE WITH THE LITTLE WHITE WINGED HORSE.....

MERCADO LIBRE:
-771980594-programa-abogados-doctor-estudios-juridicos-estudiantes-_JM

OLX:
-doctor-10-abogados-y-estudiantes-con-lic-original-uso-de-por-vida-iid-1045264503

MAIL to: lexdoctor2005@gmail.com

lex doctor 9.1 descargar taringa

Download File: https://urlgoal.com/2uyNBu
    \ No newline at end of file diff --git a/spaces/gradio/HuBERT/examples/simultaneous_translation/__init__.py b/spaces/gradio/HuBERT/examples/simultaneous_translation/__init__.py deleted file mode 100644 index 5835316ba9b23c0d99d1a8f109ee047682211546..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/simultaneous_translation/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import models # noqa diff --git a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh b/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh deleted file mode 100644 index 811cb63c88bb7cdd03b0a250ef2db32b5eaa50df..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh +++ /dev/null @@ -1,38 +0,0 @@ -#!/bin/bash - -set -u - -val_sets="dev_other" -graph_name=graph -decode_suffix="" -decode_script="steps/decode_fmllr.sh" -decode_args="" -nj=60 - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -set -x -exp_dir=$1 -data_root=$2 -lang_test=$3 - -graph=$exp_dir/$graph_name - -if [ ! -d $graph ]; then - utils/mkgraph.sh $lang_test $exp_dir $graph -fi - -for part in $val_sets; do - dec_dir=$exp_dir/decode${decode_suffix}_${part} - if [ ! -d $dec_dir ]; then - echo "decoding $part for $exp_dir" - $decode_script --nj $nj --cmd "$decode_cmd" $decode_args \ - $graph $data_root/$part $dec_dir & - else - echo "$dec_dir exists. skip" - fi -done - -wait diff --git a/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/lib/net_util.py b/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/lib/net_util.py deleted file mode 100644 index 3345c10335a0216c5ca3b3c02300911600771b52..0000000000000000000000000000000000000000 --- a/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/lib/net_util.py +++ /dev/null @@ -1,396 +0,0 @@ -import torch -from torch.nn import init -import torch.nn as nn -import torch.nn.functional as F -import functools - -import numpy as np -from .mesh_util import * -from .sample_util import * -from .geometry import index -import cv2 -from PIL import Image -from tqdm import tqdm - - -def reshape_multiview_tensors(image_tensor, calib_tensor): - # Careful here! 
Because we put single view and multiview together, - # the returned tensor.shape is 5-dim: [B, num_views, C, W, H] - # So we need to convert it back to 4-dim [B*num_views, C, W, H] - # Don't worry classifier will handle multi-view cases - image_tensor = image_tensor.view( - image_tensor.shape[0] * image_tensor.shape[1], - image_tensor.shape[2], - image_tensor.shape[3], - image_tensor.shape[4] - ) - calib_tensor = calib_tensor.view( - calib_tensor.shape[0] * calib_tensor.shape[1], - calib_tensor.shape[2], - calib_tensor.shape[3] - ) - - return image_tensor, calib_tensor - - -def reshape_sample_tensor(sample_tensor, num_views): - if num_views == 1: - return sample_tensor - # Need to repeat sample_tensor along the batch dim num_views times - sample_tensor = sample_tensor.unsqueeze(dim=1) - sample_tensor = sample_tensor.repeat(1, num_views, 1, 1) - sample_tensor = sample_tensor.view( - sample_tensor.shape[0] * sample_tensor.shape[1], - sample_tensor.shape[2], - sample_tensor.shape[3] - ) - return sample_tensor - - -def gen_mesh(opt, net, cuda, data, save_path, use_octree=True): - image_tensor = data['img'].to(device=cuda) - calib_tensor = data['calib'].to(device=cuda) - - net.filter(image_tensor) - - b_min = data['b_min'] - b_max = data['b_max'] - try: - save_img_path = save_path[:-4] + '.png' - save_img_list = [] - for v in range(image_tensor.shape[0]): - save_img = (np.transpose(image_tensor[v].detach().cpu().numpy(), (1, 2, 0)) * 0.5 + 0.5)[:, :, ::-1] * 255.0 - save_img_list.append(save_img) - save_img = np.concatenate(save_img_list, axis=1) - Image.fromarray(np.uint8(save_img[:,:,::-1])).save(save_img_path) - - verts, faces, _, _ = reconstruction( - net, cuda, calib_tensor, opt.resolution, b_min, b_max, use_octree=use_octree) - verts_tensor = torch.from_numpy(verts.T).unsqueeze(0).to(device=cuda).float() - xyz_tensor = net.projection(verts_tensor, calib_tensor[:1]) - uv = xyz_tensor[:, :2, :] - color = index(image_tensor[:1], uv).detach().cpu().numpy()[0].T - color = color * 0.5 + 0.5 - save_obj_mesh_with_color(save_path, verts, faces, color) - except Exception as e: - print(e) - print('Can not create marching cubes at this time.') - -def gen_mesh_color(opt, netG, netC, cuda, data, save_path, use_octree=True): - image_tensor = data['img'].to(device=cuda) - calib_tensor = data['calib'].to(device=cuda) - - netG.filter(image_tensor) - netC.filter(image_tensor) - netC.attach(netG.get_im_feat()) - - b_min = data['b_min'] - b_max = data['b_max'] - try: - save_img_path = save_path[:-4] + '.png' - save_img_list = [] - for v in range(image_tensor.shape[0]): - save_img = (np.transpose(image_tensor[v].detach().cpu().numpy(), (1, 2, 0)) * 0.5 + 0.5)[:, :, ::-1] * 255.0 - save_img_list.append(save_img) - save_img = np.concatenate(save_img_list, axis=1) - Image.fromarray(np.uint8(save_img[:,:,::-1])).save(save_img_path) - - verts, faces, _, _ = reconstruction( - netG, cuda, calib_tensor, opt.resolution, b_min, b_max, use_octree=use_octree) - - # Now Getting colors - verts_tensor = torch.from_numpy(verts.T).unsqueeze(0).to(device=cuda).float() - verts_tensor = reshape_sample_tensor(verts_tensor, opt.num_views) - - color = np.zeros(verts.shape) - interval = opt.num_sample_color - for i in range(len(color) // interval): - left = i * interval - right = i * interval + interval - if i == len(color) // interval - 1: - right = -1 - netC.query(verts_tensor[:, :, left:right], calib_tensor) - rgb = netC.get_preds()[0].detach().cpu().numpy() * 0.5 + 0.5 - color[left:right] = rgb.T - - 
save_obj_mesh_with_color(save_path, verts, faces, color) - except Exception as e: - print(e) - print('Can not create marching cubes at this time.') - -def adjust_learning_rate(optimizer, epoch, lr, schedule, gamma): - """Sets the learning rate to the initial LR decayed by schedule""" - if epoch in schedule: - lr *= gamma - for param_group in optimizer.param_groups: - param_group['lr'] = lr - return lr - - -def compute_acc(pred, gt, thresh=0.5): - ''' - return: - IOU, precision, and recall - ''' - with torch.no_grad(): - vol_pred = pred > thresh - vol_gt = gt > thresh - - union = vol_pred | vol_gt - inter = vol_pred & vol_gt - - true_pos = inter.sum().float() - - union = union.sum().float() - if union == 0: - union = 1 - vol_pred = vol_pred.sum().float() - if vol_pred == 0: - vol_pred = 1 - vol_gt = vol_gt.sum().float() - if vol_gt == 0: - vol_gt = 1 - return true_pos / union, true_pos / vol_pred, true_pos / vol_gt - - -def calc_error(opt, net, cuda, dataset, num_tests): - if num_tests > len(dataset): - num_tests = len(dataset) - with torch.no_grad(): - erorr_arr, IOU_arr, prec_arr, recall_arr = [], [], [], [] - for idx in tqdm(range(num_tests)): - data = dataset[idx * len(dataset) // num_tests] - # retrieve the data - image_tensor = data['img'].to(device=cuda) - calib_tensor = data['calib'].to(device=cuda) - sample_tensor = data['samples'].to(device=cuda).unsqueeze(0) - if opt.num_views > 1: - sample_tensor = reshape_sample_tensor(sample_tensor, opt.num_views) - label_tensor = data['labels'].to(device=cuda).unsqueeze(0) - - res, error = net.forward(image_tensor, sample_tensor, calib_tensor, labels=label_tensor) - - IOU, prec, recall = compute_acc(res, label_tensor) - - # print( - # '{0}/{1} | Error: {2:06f} IOU: {3:06f} prec: {4:06f} recall: {5:06f}' - # .format(idx, num_tests, error.item(), IOU.item(), prec.item(), recall.item())) - erorr_arr.append(error.item()) - IOU_arr.append(IOU.item()) - prec_arr.append(prec.item()) - recall_arr.append(recall.item()) - - return np.average(erorr_arr), np.average(IOU_arr), np.average(prec_arr), np.average(recall_arr) - -def calc_error_color(opt, netG, netC, cuda, dataset, num_tests): - if num_tests > len(dataset): - num_tests = len(dataset) - with torch.no_grad(): - error_color_arr = [] - - for idx in tqdm(range(num_tests)): - data = dataset[idx * len(dataset) // num_tests] - # retrieve the data - image_tensor = data['img'].to(device=cuda) - calib_tensor = data['calib'].to(device=cuda) - color_sample_tensor = data['color_samples'].to(device=cuda).unsqueeze(0) - - if opt.num_views > 1: - color_sample_tensor = reshape_sample_tensor(color_sample_tensor, opt.num_views) - - rgb_tensor = data['rgbs'].to(device=cuda).unsqueeze(0) - - netG.filter(image_tensor) - _, errorC = netC.forward(image_tensor, netG.get_im_feat(), color_sample_tensor, calib_tensor, labels=rgb_tensor) - - # print('{0}/{1} | Error inout: {2:06f} | Error color: {3:06f}' - # .format(idx, num_tests, errorG.item(), errorC.item())) - error_color_arr.append(errorC.item()) - - return np.average(error_color_arr) - - -def conv3x3(in_planes, out_planes, strd=1, padding=1, bias=False): - "3x3 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, - stride=strd, padding=padding, bias=bias) - -def init_weights(net, init_type='normal', init_gain=0.02): - """Initialize network weights. 
- - Parameters: - net (network) -- network to be initialized - init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal - init_gain (float) -- scaling factor for normal, xavier and orthogonal. - - We use 'normal' in the original pix2pix and CycleGAN paper. But xavier and kaiming might - work better for some applications. Feel free to try yourself. - """ - - def init_func(m): # define the initialization function - classname = m.__class__.__name__ - if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1): - if init_type == 'normal': - init.normal_(m.weight.data, 0.0, init_gain) - elif init_type == 'xavier': - init.xavier_normal_(m.weight.data, gain=init_gain) - elif init_type == 'kaiming': - init.kaiming_normal_(m.weight.data, a=0, mode='fan_in') - elif init_type == 'orthogonal': - init.orthogonal_(m.weight.data, gain=init_gain) - else: - raise NotImplementedError('initialization method [%s] is not implemented' % init_type) - if hasattr(m, 'bias') and m.bias is not None: - init.constant_(m.bias.data, 0.0) - elif classname.find( - 'BatchNorm2d') != -1: # BatchNorm Layer's weight is not a matrix; only normal distribution applies. - init.normal_(m.weight.data, 1.0, init_gain) - init.constant_(m.bias.data, 0.0) - - print('initialize network with %s' % init_type) - net.apply(init_func) # apply the initialization function - - -def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[]): - """Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights - Parameters: - net (network) -- the network to be initialized - init_type (str) -- the name of an initialization method: normal | xavier | kaiming | orthogonal - gain (float) -- scaling factor for normal, xavier and orthogonal. - gpu_ids (int list) -- which GPUs the network runs on: e.g., 0,1,2 - - Return an initialized network. - """ - if len(gpu_ids) > 0: - assert (torch.cuda.is_available()) - net.to(gpu_ids[0]) - net = torch.nn.DataParallel(net, gpu_ids) # multi-GPUs - init_weights(net, init_type, init_gain=init_gain) - return net - - -def imageSpaceRotation(xy, rot): - ''' - args: - xy: (B, 2, N) input - rot: (B, 2) x,y axis rotation angles - - rotation center will be always image center (other rotation center can be represented by additional z translation) - ''' - disp = rot.unsqueeze(2).sin().expand_as(xy) - return (disp * xy).sum(dim=1) - - -def cal_gradient_penalty(netD, real_data, fake_data, device, type='mixed', constant=1.0, lambda_gp=10.0): - """Calculate the gradient penalty loss, used in WGAN-GP paper https://arxiv.org/abs/1704.00028 - - Arguments: - netD (network) -- discriminator network - real_data (tensor array) -- real images - fake_data (tensor array) -- generated images from the generator - device (str) -- GPU / CPU: from torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu') - type (str) -- if we mix real and fake data or not [real | fake | mixed]. - constant (float) -- the constant used in formula ( | |gradient||_2 - constant)^2 - lambda_gp (float) -- weight for this loss - - Returns the gradient penalty loss - """ - if lambda_gp > 0.0: - if type == 'real': # either use real images, fake images, or a linear interpolation of two. 
- interpolatesv = real_data - elif type == 'fake': - interpolatesv = fake_data - elif type == 'mixed': - alpha = torch.rand(real_data.shape[0], 1) - alpha = alpha.expand(real_data.shape[0], real_data.nelement() // real_data.shape[0]).contiguous().view( - *real_data.shape) - alpha = alpha.to(device) - interpolatesv = alpha * real_data + ((1 - alpha) * fake_data) - else: - raise NotImplementedError('{} not implemented'.format(type)) - interpolatesv.requires_grad_(True) - disc_interpolates = netD(interpolatesv) - gradients = torch.autograd.grad(outputs=disc_interpolates, inputs=interpolatesv, - grad_outputs=torch.ones(disc_interpolates.size()).to(device), - create_graph=True, retain_graph=True, only_inputs=True) - gradients = gradients[0].view(real_data.size(0), -1) # flat the data - gradient_penalty = (((gradients + 1e-16).norm(2, dim=1) - constant) ** 2).mean() * lambda_gp # added eps - return gradient_penalty, gradients - else: - return 0.0, None - -def get_norm_layer(norm_type='instance'): - """Return a normalization layer - Parameters: - norm_type (str) -- the name of the normalization layer: batch | instance | none - For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev). - For InstanceNorm, we do not use learnable affine parameters. We do not track running statistics. - """ - if norm_type == 'batch': - norm_layer = functools.partial(nn.BatchNorm2d, affine=True, track_running_stats=True) - elif norm_type == 'instance': - norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False) - elif norm_type == 'group': - norm_layer = functools.partial(nn.GroupNorm, 32) - elif norm_type == 'none': - norm_layer = None - else: - raise NotImplementedError('normalization layer [%s] is not found' % norm_type) - return norm_layer - -class Flatten(nn.Module): - def forward(self, input): - return input.view(input.size(0), -1) - -class ConvBlock(nn.Module): - def __init__(self, in_planes, out_planes, norm='batch'): - super(ConvBlock, self).__init__() - self.conv1 = conv3x3(in_planes, int(out_planes / 2)) - self.conv2 = conv3x3(int(out_planes / 2), int(out_planes / 4)) - self.conv3 = conv3x3(int(out_planes / 4), int(out_planes / 4)) - - if norm == 'batch': - self.bn1 = nn.BatchNorm2d(in_planes) - self.bn2 = nn.BatchNorm2d(int(out_planes / 2)) - self.bn3 = nn.BatchNorm2d(int(out_planes / 4)) - self.bn4 = nn.BatchNorm2d(in_planes) - elif norm == 'group': - self.bn1 = nn.GroupNorm(32, in_planes) - self.bn2 = nn.GroupNorm(32, int(out_planes / 2)) - self.bn3 = nn.GroupNorm(32, int(out_planes / 4)) - self.bn4 = nn.GroupNorm(32, in_planes) - - if in_planes != out_planes: - self.downsample = nn.Sequential( - self.bn4, - nn.ReLU(True), - nn.Conv2d(in_planes, out_planes, - kernel_size=1, stride=1, bias=False), - ) - else: - self.downsample = None - - def forward(self, x): - residual = x - - out1 = self.bn1(x) - out1 = F.relu(out1, True) - out1 = self.conv1(out1) - - out2 = self.bn2(out1) - out2 = F.relu(out2, True) - out2 = self.conv2(out2) - - out3 = self.bn3(out2) - out3 = F.relu(out3, True) - out3 = self.conv3(out3) - - out3 = torch.cat((out1, out2, out3), 1) - - if self.downsample is not None: - residual = self.downsample(residual) - - out3 += residual - - return out3 - \ No newline at end of file diff --git a/spaces/gulabpatel/GFP_GAN/setup.py b/spaces/gulabpatel/GFP_GAN/setup.py deleted file mode 100644 index 474e9188aa2dc5c19614921760ce4ad99bd19c13..0000000000000000000000000000000000000000 --- a/spaces/gulabpatel/GFP_GAN/setup.py +++ /dev/null 
@@ -1,107 +0,0 @@ -#!/usr/bin/env python - -from setuptools import find_packages, setup - -import os -import subprocess -import time - -version_file = 'gfpgan/version.py' - - -def readme(): - with open('README.md', encoding='utf-8') as f: - content = f.read() - return content - - -def get_git_hash(): - - def _minimal_ext_cmd(cmd): - # construct minimal environment - env = {} - for k in ['SYSTEMROOT', 'PATH', 'HOME']: - v = os.environ.get(k) - if v is not None: - env[k] = v - # LANGUAGE is used on win32 - env['LANGUAGE'] = 'C' - env['LANG'] = 'C' - env['LC_ALL'] = 'C' - out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0] - return out - - try: - out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD']) - sha = out.strip().decode('ascii') - except OSError: - sha = 'unknown' - - return sha - - -def get_hash(): - if os.path.exists('.git'): - sha = get_git_hash()[:7] - else: - sha = 'unknown' - - return sha - - -def write_version_py(): - content = """# GENERATED VERSION FILE -# TIME: {} -__version__ = '{}' -__gitsha__ = '{}' -version_info = ({}) -""" - sha = get_hash() - with open('VERSION', 'r') as f: - SHORT_VERSION = f.read().strip() - VERSION_INFO = ', '.join([x if x.isdigit() else f'"{x}"' for x in SHORT_VERSION.split('.')]) - - version_file_str = content.format(time.asctime(), SHORT_VERSION, sha, VERSION_INFO) - with open(version_file, 'w') as f: - f.write(version_file_str) - - -def get_version(): - with open(version_file, 'r') as f: - exec(compile(f.read(), version_file, 'exec')) - return locals()['__version__'] - - -def get_requirements(filename='requirements.txt'): - here = os.path.dirname(os.path.realpath(__file__)) - with open(os.path.join(here, filename), 'r') as f: - requires = [line.replace('\n', '') for line in f.readlines()] - return requires - - -if __name__ == '__main__': - write_version_py() - setup( - name='gfpgan', - version=get_version(), - description='GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration', - long_description=readme(), - long_description_content_type='text/markdown', - author='Xintao Wang', - author_email='xintao.wang@outlook.com', - keywords='computer vision, pytorch, image restoration, super-resolution, face restoration, gan, gfpgan', - url='https://github.com/TencentARC/GFPGAN', - include_package_data=True, - packages=find_packages(exclude=('options', 'datasets', 'experiments', 'results', 'tb_logger', 'wandb')), - classifiers=[ - 'Development Status :: 4 - Beta', - 'License :: OSI Approved :: Apache Software License', - 'Operating System :: OS Independent', - 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.7', - 'Programming Language :: Python :: 3.8', - ], - license='Apache License Version 2.0', - setup_requires=['cython', 'numpy'], - install_requires=get_requirements(), - zip_safe=False) diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/__init__.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/__init__.py deleted file mode 100644 index a0b0f4efcbe1e3cd4199eeecb043d5afe1548307..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. 
Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/hackathon-somos-nlp-2023/GIPBERT/app.py b/spaces/hackathon-somos-nlp-2023/GIPBERT/app.py deleted file mode 100644 index f96e00a013464ef06c2172a911a8a0d2809ac922..0000000000000000000000000000000000000000 --- a/spaces/hackathon-somos-nlp-2023/GIPBERT/app.py +++ /dev/null @@ -1,47 +0,0 @@ -import gradio as gr -import torch -from peft import PeftModel, PeftConfig -from transformers import AutoModelForCausalLM, AutoTokenizer - -peft_model_id = "hackathon-somos-nlp-2023/PAG-BERT" -config = PeftConfig.from_pretrained(peft_model_id) -model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto') -tokenizer = AutoTokenizer.from_pretrained(peft_model_id) -# Load the Lora model -model = PeftModel.from_pretrained(model, peft_model_id) - -def predecir_intervencion(text): - text = "" + text + " Intervención: " - batch = tokenizer(text, return_tensors='pt') - with torch.cuda.amp.autocast(): - output_tokens = model.generate(**batch, max_new_tokens=256, eos_token_id=50258) - - output = tokenizer.decode(output_tokens[0], skip_special_tokens=False) - - aux = output.split("Intervención:")[1].strip() - intervencion = aux.split("Resultado:")[0].strip() - resultado = aux.split("Resultado:")[1].split("")[0].strip() - - return intervencion, resultado - -with gr.Blocks() as demo: - gr.Markdown("Predicción de intervenciones para mitigar el daño racista en el pueblo gitano") - with gr.Row(): - hechos = gr.Textbox(placeholder="Un alumno gitano de un Instituto...", label="Hechos") - with gr.Row(): - intervencion = gr.Textbox(label="Intervención") - resultado = gr.Textbox(label="Resultado") - - btn = gr.Button("Go") - btn.click(fn=predecir_intervencion, inputs=hechos, outputs=[intervencion, resultado]) - - gr.Examples( - examples=["El Diario de Almería publicó una noticia sobre un conflicto con la Guardia Civil, mencionando la etnia de la persona implicada.", - "Un hombre gitano participante de servicios la FSG en Valencia, nos contó que estaba en la puerta de su casa arreglando un asiento de su furgoneta y, para ello, lo desmontó y lo colocó en la acera mientras terminaba la reparación. En ese momento, se acercó un agente de policía local y le dijo que le dejara entrar a su casa para ver si tenía chatarra almacenada. El hombre le dijo que si no tenía una orden no le dejaba entrar a su casa. Ante esa respuesta, el policía (que según la víctima se enfadó por la respuesta que le dio) le puso una multa por “ensuciar la vía pública “. Para valorar la posibilidad de recurrir dicha multa, la Técnica de Igualdad entrevistó al hombre, que informó que esto le había ocurrido hacía tiempo, por lo que no fue posible interponer recurso. Igualmente, nos informó que en el barrio del Cabañal esta situación era bastante frecuente hacia otras personas gitanas. La víctima no quiso que interviniésemos con otras acciones, pero pidió que desde la FSG se estuviera más pendiente de las actuaciones policiales de su zona.", - "Una mujer gitana contó en la FSG que recibió comentarios inadecuados por teléfono por parte de la administrativa y de la trabajadora social de su centro de servicios sociales, que le dijeron que pidiera ayuda al Secretariado Gitano. La mujer estaba convencida que esto le había pasado por ser gitana." 
- ], - inputs=hechos, - label="Ejemplos" - ) - -demo.launch() \ No newline at end of file diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/__init__.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/__init__.py deleted file mode 100644 index b0804ff9446160fdad093af0b0fcff2e45fddb76..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/__init__.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -from .coco import COCODataset -from .voc import PascalVOCDataset -from .concat_dataset import ConcatDataset -from .background import Background -from .tsv import TSVDataset, ODTSVDataset - -from .modulated_coco import ModulatedDataset, CocoDetection, CocoGrounding -from .flickr import FlickrDataset -from .refexp import RefExpDataset -from .mixed import MixedDataset -from .gqa import GQADataset - -from .coco_dt import CocoDetectionTSV -from .caption import CaptionTSV -from .lvis import LvisDetection -from .pseudo_data import PseudoData -from .phrasecut import PhrasecutDetection - -__all__ = ["COCODataset", "TSVDataset", "ODTSVDataset", "ConcatDataset", "PascalVOCDataset", "Background", - "ModulatedDataset", "MixedDataset", "CocoDetection", "FlickrDataset", "RefExpDataset", "GQADataset", - "CocoDetectionTSV", "CocoGrounding", "CaptionTSV", "LvisDetection", "PseudoData", "PhrasecutDetection" - ] diff --git a/spaces/harshvardhansb/ObjectDetection/node_modules/@tensorflow-models/coco-ssd/dist/classes.js b/spaces/harshvardhansb/ObjectDetection/node_modules/@tensorflow-models/coco-ssd/dist/classes.js deleted file mode 100644 index 1eda15e6d049ec4993fad37ab520dfd6cbee9ce5..0000000000000000000000000000000000000000 --- a/spaces/harshvardhansb/ObjectDetection/node_modules/@tensorflow-models/coco-ssd/dist/classes.js +++ /dev/null @@ -1,32 +0,0 @@ -"use strict"; -/** - * @license - * Copyright 2019 Google LLC. All Rights Reserved. - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- * ============================================================================= - */ -Object.defineProperty(exports, "__esModule", { value: true }); -exports.CLASSES = { - 1: { - name: '/m/01g317', - id: 1, - displayName: 'person', - }, - 2: { - name: '/m/0199g', - id: 2, - displayName: 'bicycle', - }, - 3: { - name: '/m/0k4j', - id: \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TensorMask/tensormask/layers/csrc/vision.cpp b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TensorMask/tensormask/layers/csrc/vision.cpp deleted file mode 100644 index ad8e472c2cfc7c10e00cd6b00fc22c0dd9384dd1..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TensorMask/tensormask/layers/csrc/vision.cpp +++ /dev/null @@ -1,19 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -#include -#include "SwapAlign2Nat/SwapAlign2Nat.h" - -namespace tensormask { - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def( - "swap_align2nat_forward", - &SwapAlign2Nat_forward, - "SwapAlign2Nat_forward"); - m.def( - "swap_align2nat_backward", - &SwapAlign2Nat_backward, - "SwapAlign2Nat_backward"); -} - -} // namespace tensormask diff --git a/spaces/hasibzunair/image-recognition-demo/app.py b/spaces/hasibzunair/image-recognition-demo/app.py deleted file mode 100644 index 4c8d54503da90f9abdcab7021b625b29579866ef..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/image-recognition-demo/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import os -import torch -import gradio as gr - -from PIL import Image -from torchvision import transforms - - -""" -Built following: -https://huggingface.co/spaces/pytorch/ResNet/tree/main -https://www.gradio.app/image_classification_in_pytorch/ -""" - -# Get classes list -os.system("wget https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt") - -# Load PyTorch model -model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True) -model.eval() - -# Download an example image from the pytorch website -torch.hub.download_url_to_file("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") - -# Inference! -def inference(input_image): - preprocess = transforms.Compose([ - transforms.Resize(256), - transforms.CenterCrop(224), - transforms.ToTensor(), - transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), - ]) - input_tensor = preprocess(input_image) - input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model - - # Move the input and model to GPU for speed if available - if torch.cuda.is_available(): - input_batch = input_batch.to('cuda') - model.to('cuda') - - with torch.no_grad(): - output = model(input_batch) - # The output has unnormalized scores. To get probabilities, you can run a softmax on it. 
- probabilities = torch.nn.functional.softmax(output[0], dim=0) - - # Read the categories - with open("imagenet_classes.txt", "r") as f: - categories = [s.strip() for s in f.readlines()] - # Show top categories per image - top5_prob, top5_catid = torch.topk(probabilities, 5) - result = {} - for i in range(top5_prob.size(0)): - result[categories[top5_catid[i]]] = top5_prob[i].item() - return result - -# Define input/output placeholders -inputs = gr.inputs.Image(type='pil') -outputs = gr.outputs.Label(type="confidences", num_top_classes=5) - -# Define style -title = "Image Recognition Demo" -description = "This is a prototype application which demonstrates how artificial intelligence-based systems can recognize what object(s) are present in an image. This fundamental task in computer vision, known as `Image Classification`, has applications stretching from autonomous vehicles to medical imaging. To use it, simply upload your image, or click one of the example images to load them, which I took at Montréal Biodôme! Read more at the links below." -article = "

    Deep Residual Learning for Image Recognition | Github Repo
    " - -# Run inference -gr.Interface(inference, - inputs, - outputs, - examples=["example1.jpg", "example2.jpg"], - title=title, - description=description, - article=article, - analytics_enabled=False).launch() - diff --git a/spaces/hdhzk/bingo/src/lib/bots/bing/utils.ts b/spaces/hdhzk/bingo/src/lib/bots/bing/utils.ts deleted file mode 100644 index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000 --- a/spaces/hdhzk/bingo/src/lib/bots/bing/utils.ts +++ /dev/null @@ -1,87 +0,0 @@ -import { ChatResponseMessage, BingChatResponse } from './types' - -export function convertMessageToMarkdown(message: ChatResponseMessage): string { - if (message.messageType === 'InternalSearchQuery') { - return message.text - } - for (const card of message.adaptiveCards??[]) { - for (const block of card.body) { - if (block.type === 'TextBlock') { - return block.text - } - } - } - return '' -} - -const RecordSeparator = String.fromCharCode(30) - -export const websocketUtils = { - packMessage(data: any) { - return `${JSON.stringify(data)}${RecordSeparator}` - }, - unpackMessage(data: string | ArrayBuffer | Blob) { - if (!data) return {} - return data - .toString() - .split(RecordSeparator) - .filter(Boolean) - .map((s) => { - try { - return JSON.parse(s) - } catch (e) { - return {} - } - }) - }, -} - -export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise { - const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`, - { - method: 'HEAD', - headers, - redirect: 'manual' - }, - ); - - if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) { - throw new Error('请求异常,请检查 cookie 是否有效') - } - - const resultId = RegExp.$1; - let count = 0 - const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`; - - do { - await sleep(3000); - const content = await fetch(imageThumbUrl, { headers, method: 'GET' }) - - // @ts-ignore - if (content.headers.get('content-length') > 1) { - const text = await content.text() - return (text?.match(/ target?.split('src="').pop()?.replace(/&/g, '&')) - .map(img => `![${prompt}](${img})`).join(' ') - } - } while(count ++ < 10); -} - - -export async function* streamAsyncIterable(stream: ReadableStream) { - const reader = stream.getReader() - try { - while (true) { - const { done, value } = await reader.read() - if (done) { - return - } - yield value - } - } finally { - reader.releaseLock() - } -} - -export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms)) - diff --git a/spaces/hebert2099/MusicGen/tests/data/test_audio.py b/spaces/hebert2099/MusicGen/tests/data/test_audio.py deleted file mode 100644 index 40c0d5ed69eff92a766dc6d176e532f0df6c2b5e..0000000000000000000000000000000000000000 --- a/spaces/hebert2099/MusicGen/tests/data/test_audio.py +++ /dev/null @@ -1,239 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from itertools import product -import random - -import numpy as np -import torch -import torchaudio - -from audiocraft.data.audio import audio_info, audio_read, audio_write, _av_read - -from ..common_utils import TempDirMixin, get_white_noise, save_wav - - -class TestInfo(TempDirMixin): - - def test_info_mp3(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - wav = get_white_noise(ch, int(sample_rate * duration)) - path = self.get_temp_path('sample_wav.mp3') - save_wav(path, wav, sample_rate) - info = audio_info(path) - assert info.sample_rate == sample_rate - assert info.channels == ch - # we cannot trust torchaudio for num_frames, so we don't check - - def _test_info_format(self, ext: str): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path(f'sample_wav{ext}') - save_wav(path, wav, sample_rate) - info = audio_info(path) - assert info.sample_rate == sample_rate - assert info.channels == ch - assert np.isclose(info.duration, duration, atol=1e-5) - - def test_info_wav(self): - self._test_info_format('.wav') - - def test_info_flac(self): - self._test_info_format('.flac') - - def test_info_ogg(self): - self._test_info_format('.ogg') - - def test_info_m4a(self): - # TODO: generate m4a file programmatically - # self._test_info_format('.m4a') - pass - - -class TestRead(TempDirMixin): - - def test_read_full_wav(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99) - path = self.get_temp_path('sample_wav.wav') - save_wav(path, wav, sample_rate) - read_wav, read_sr = audio_read(path) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[1] == wav.shape[1] - assert torch.allclose(read_wav, wav, rtol=1e-03, atol=1e-04) - - def test_read_partial_wav(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - read_duration = torch.rand(1).item() - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - read_frames = int(sample_rate * read_duration) - wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99) - path = self.get_temp_path('sample_wav.wav') - save_wav(path, wav, sample_rate) - read_wav, read_sr = audio_read(path, 0, read_duration) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[1] == read_frames - assert torch.allclose(read_wav[..., 0:read_frames], wav[..., 0:read_frames], rtol=1e-03, atol=1e-04) - - def test_read_seek_time_wav(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - read_duration = 1. 
- for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99) - path = self.get_temp_path('sample_wav.wav') - save_wav(path, wav, sample_rate) - seek_time = torch.rand(1).item() - read_wav, read_sr = audio_read(path, seek_time, read_duration) - seek_frames = int(sample_rate * seek_time) - expected_frames = n_frames - seek_frames - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[1] == expected_frames - assert torch.allclose(read_wav, wav[..., seek_frames:], rtol=1e-03, atol=1e-04) - - def test_read_seek_time_wav_padded(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - read_duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - read_frames = int(sample_rate * read_duration) - wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99) - path = self.get_temp_path('sample_wav.wav') - save_wav(path, wav, sample_rate) - seek_time = torch.rand(1).item() - seek_frames = int(sample_rate * seek_time) - expected_frames = n_frames - seek_frames - read_wav, read_sr = audio_read(path, seek_time, read_duration, pad=True) - expected_pad_wav = torch.zeros(wav.shape[0], read_frames - expected_frames) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[1] == read_frames - assert torch.allclose(read_wav[..., :expected_frames], wav[..., seek_frames:], rtol=1e-03, atol=1e-04) - assert torch.allclose(read_wav[..., expected_frames:], expected_pad_wav) - - -class TestAvRead(TempDirMixin): - - def test_avread_seek_base(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 2. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path(f'reference_a_{sample_rate}_{ch}.wav') - save_wav(path, wav, sample_rate) - for _ in range(100): - # seek will always load a full duration segment in the file - seek_time = random.uniform(0.0, 1.0) - seek_duration = random.uniform(0.001, 1.0) - read_wav, read_sr = _av_read(path, seek_time, seek_duration) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[-1] == int(seek_duration * sample_rate) - - def test_avread_seek_partial(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path(f'reference_b_{sample_rate}_{ch}.wav') - save_wav(path, wav, sample_rate) - for _ in range(100): - # seek will always load a partial segment - seek_time = random.uniform(0.5, 1.) - seek_duration = 1. - expected_num_frames = n_frames - int(seek_time * sample_rate) - read_wav, read_sr = _av_read(path, seek_time, seek_duration) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[-1] == expected_num_frames - - def test_avread_seek_outofbound(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path(f'reference_c_{sample_rate}_{ch}.wav') - save_wav(path, wav, sample_rate) - seek_time = 1.5 - read_wav, read_sr = _av_read(path, seek_time, 1.) 
- assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[-1] == 0 - - def test_avread_seek_edge(self): - sample_rates = [8000, 16_000] - # some of these values will have - # int(((frames - 1) / sample_rate) * sample_rate) != (frames - 1) - n_frames = [1000, 1001, 1002] - channels = [1, 2] - for sample_rate, ch, frames in product(sample_rates, channels, n_frames): - duration = frames / sample_rate - wav = get_white_noise(ch, frames) - path = self.get_temp_path(f'reference_d_{sample_rate}_{ch}.wav') - save_wav(path, wav, sample_rate) - seek_time = (frames - 1) / sample_rate - seek_frames = int(seek_time * sample_rate) - read_wav, read_sr = _av_read(path, seek_time, duration) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[-1] == (frames - seek_frames) - - -class TestAudioWrite(TempDirMixin): - - def test_audio_write_wav(self): - torch.manual_seed(1234) - sample_rates = [8000, 16_000] - n_frames = [1000, 1001, 1002] - channels = [1, 2] - strategies = ["peak", "clip", "rms"] - formats = ["wav", "mp3"] - for sample_rate, ch, frames in product(sample_rates, channels, n_frames): - for format_, strategy in product(formats, strategies): - wav = get_white_noise(ch, frames) - path = self.get_temp_path(f'pred_{sample_rate}_{ch}') - audio_write(path, wav, sample_rate, format_, strategy=strategy) - read_wav, read_sr = torchaudio.load(f'{path}.{format_}') - if format_ == "wav": - assert read_wav.shape == wav.shape - - if format_ == "wav" and strategy in ["peak", "rms"]: - rescaled_read_wav = read_wav / read_wav.abs().max() * wav.abs().max() - # for a Gaussian, the typical max scale will be less than ~5x the std. - # The error when writing to disk will ~ 1/2**15, and when rescaling, 5x that. 
- # For RMS target, rescaling leaves more headroom by default, leading - # to a 20x rescaling typically - atol = (5 if strategy == "peak" else 20) / 2**15 - delta = (rescaled_read_wav - wav).abs().max() - assert torch.allclose(wav, rescaled_read_wav, rtol=0, atol=atol), (delta, atol) - formats = ["wav"] # faster unit tests diff --git a/spaces/heiyubili/bingo/src/lib/bots/bing/types.ts b/spaces/heiyubili/bingo/src/lib/bots/bing/types.ts deleted file mode 100644 index 5a9813b797d13b592ec17b45cfac4bd46510d883..0000000000000000000000000000000000000000 --- a/spaces/heiyubili/bingo/src/lib/bots/bing/types.ts +++ /dev/null @@ -1,261 +0,0 @@ -export type Author = 'user' | 'system' | 'bot' - -export type BotId = 'bing' - -export enum BingConversationStyle { - Creative = 'Creative', - Balanced = 'Balanced', - Precise = 'Precise' -} - -export enum ErrorCode { - CONVERSATION_LIMIT = 'CONVERSATION_LIMIT', - BING_UNAUTHORIZED = 'BING_UNAUTHORIZED', - BING_IP_FORBIDDEN = 'BING_IP_FORBIDDEN', - BING_TRY_LATER = 'BING_TRY_LATER', - BING_FORBIDDEN = 'BING_FORBIDDEN', - BING_CAPTCHA = 'BING_CAPTCHA', - THROTTLE_LIMIT = 'THROTTLE_LIMIT', - NOTFOUND_ERROR = 'NOT_FOUND_ERROR', - UNKOWN_ERROR = 'UNKOWN_ERROR', - NETWORK_ERROR = 'NETWORK_ERROR', -} - -export class ChatError extends Error { - code: ErrorCode - constructor(message: string, code: ErrorCode) { - super(message) - this.code = code - } -} - -export type ChatMessageModel = { - id: string - author: Author - text: string - error?: ChatError - throttling?: Throttling - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] -} - -export interface ConversationModel { - messages: ChatMessageModel[] -} - -export type Event = - | { - type: 'UPDATE_ANSWER' - data: { - text: string - spokenText?: string - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] - throttling?: Throttling - } - } - | { - type: 'DONE' - } - | { - type: 'ERROR' - error: ChatError - } - -export interface SendMessageParams { - prompt: string - imageUrl?: string - options: T - onEvent: (event: Event) => void - signal?: AbortSignal -} - -export interface ConversationResponse { - conversationId: string - clientId: string - conversationSignature: string - result: { - value: string - message?: string - } -} - -export interface Telemetry { - metrics?: null - startTime: string -} - -export interface ChatUpdateArgument { - messages?: ChatResponseMessage[] - throttling?: Throttling - requestId: string - result: null -} - -export type ChatUpdateCompleteResponse = { - type: 2 - invocationId: string - item: ChatResponseItem -} | { - type: 1 - target: string - arguments: ChatUpdateArgument[] -} | { - type: 3 - invocationId: string -} | { - type: 6 | 7 -} - -export interface ChatRequestResult { - value: string - serviceVersion: string - error?: string -} - -export interface ChatResponseItem { - messages: ChatResponseMessage[] - firstNewMessageIndex: number - suggestedResponses: null - conversationId: string - requestId: string - conversationExpiryTime: string - telemetry: Telemetry - result: ChatRequestResult - throttling: Throttling -} -export enum InvocationEventType { - Invocation = 1, - StreamItem = 2, - Completion = 3, - StreamInvocation = 4, - CancelInvocation = 5, - Ping = 6, - Close = 7, -} - -// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts - -export interface ConversationInfo { - conversationId: string - clientId: string - conversationSignature: string - invocationId: number - conversationStyle: BingConversationStyle - 
prompt: string - imageUrl?: string -} - -export interface BingChatResponse { - conversationSignature: string - conversationId: string - clientId: string - invocationId: number - conversationExpiryTime: Date - response: string - details: ChatResponseMessage -} - -export interface Throttling { - maxNumLongDocSummaryUserMessagesInConversation: number - maxNumUserMessagesInConversation: number - numLongDocSummaryUserMessagesInConversation: number - numUserMessagesInConversation: number -} - -export interface ChatResponseMessage { - text: string - spokenText?: string - author: string - createdAt: Date - timestamp: Date - messageId: string - requestId: string - offense: string - adaptiveCards: AdaptiveCard[] - sourceAttributions: SourceAttribution[] - feedback: Feedback - contentOrigin: string - messageType?: string - contentType?: string - privacy: null - suggestedResponses: SuggestedResponse[] -} - -export interface AdaptiveCard { - type: string - version: string - body: Body[] -} - -export interface Body { - type: string - text: string - wrap: boolean - size?: string -} - -export interface Feedback { - tag: null - updatedOn: null - type: string -} - -export interface SourceAttribution { - providerDisplayName: string - seeMoreUrl: string - searchQuery: string -} - -export interface SuggestedResponse { - text: string - author?: Author - createdAt?: Date - timestamp?: Date - messageId?: string - messageType?: string - offense?: string - feedback?: Feedback - contentOrigin?: string - privacy?: null -} - -export interface KBlobRequest { - knowledgeRequest: KnowledgeRequestContext - imageBase64?: string -} - -export interface KBlobResponse { - blobId: string - processedBlobId?: string -} - -export interface KnowledgeRequestContext { - imageInfo: ImageInfo; - knowledgeRequest: KnowledgeRequest; -} - -export interface ImageInfo { - url?: string; -} - -export interface KnowledgeRequest { - invokedSkills: string[]; - subscriptionId: string; - invokedSkillsRequestData: InvokedSkillsRequestData; - convoData: ConvoData; -} - -export interface ConvoData { - convoid: string; - convotone: BingConversationStyle; -} - -export interface InvokedSkillsRequestData { - enableFaceBlur: boolean; -} - -export interface FileItem { - url: string; - status?: 'loading' | 'error' | 'loaded' -} diff --git a/spaces/hrdtbs/rvc-mochinoa/weights/mochinoa/LICENCE.md b/spaces/hrdtbs/rvc-mochinoa/weights/mochinoa/LICENCE.md deleted file mode 100644 index 08b24e40c28a3a2f5160042b77d72d0cff4f05f1..0000000000000000000000000000000000000000 --- a/spaces/hrdtbs/rvc-mochinoa/weights/mochinoa/LICENCE.md +++ /dev/null @@ -1,23 +0,0 @@ -ライセンスの要約 - -- このモデルを利用して作成された音声は公序良俗に反しない範囲で商用利用が認められます。 -- 望月のあの宣伝をしてくれると嬉しいです。 -- このモデルとこのモデルを利用して作成された音声によって損害が発生しても責任は負いません。 - ---- - -Mochinoa Licence Revision 1 - -Copyright (C) 2023 望月のあ Mochizuki Noa - -Permission is hereby granted, free of charge, for anyone to use the compiled binaries, source code, and documentation (the "Software"). - -Permission to modify the Software is only granted to individuals or entities that have a higher number of YouTube subscribers and Twitter followers than the copyright holder (Mochizuki Noa) at the time of modification. - -You are permitted to sell or distribute products created using this Software, as long as the sale or distribution does not violate public order and morals. - -When publicly releasing any product created using this Software, you are strongly encouraged to promote the copyright holder's YouTube and Twitter accounts to the best of your ability. 
- -The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - -The Software is provided in the hope that it will be useful, but the Software comes with NO WARRANTY, EXPRESS OR IMPLIED, and the authors of the Software are NOT LIABLE FOR ANY LOSSES, DAMAGES, OR MISUSE relating to the Software. \ No newline at end of file diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/static/_app/immutable/components/pages/_layout.svelte-9d5377d3.js b/spaces/huggingface-projects/stable-diffusion-multiplayer/static/_app/immutable/components/pages/_layout.svelte-9d5377d3.js deleted file mode 100644 index 743a20886104cb98a74de421d7c01b0d3d0d20f7..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/stable-diffusion-multiplayer/static/_app/immutable/components/pages/_layout.svelte-9d5377d3.js +++ /dev/null @@ -1 +0,0 @@ -import{S as l,i,s as r,C as u,D as f,E as _,F as c,f as p,t as d}from"../../chunks/index-ba22f6f0.js";function m(n){let s;const o=n[1].default,e=u(o,n,n[0],null);return{c(){e&&e.c()},l(t){e&&e.l(t)},m(t,a){e&&e.m(t,a),s=!0},p(t,[a]){e&&e.p&&(!s||a&1)&&f(e,o,t,t[0],s?c(o,t[0],a,null):_(t[0]),null)},i(t){s||(p(e,t),s=!0)},o(t){d(e,t),s=!1},d(t){e&&e.d(t)}}}function $(n,s,o){let{$$slots:e={},$$scope:t}=s;return n.$$set=a=>{"$$scope"in a&&o(0,t=a.$$scope)},[t,e]}class h extends l{constructor(s){super(),i(this,s,$,m,r,{})}}export{h as default}; diff --git a/spaces/huggingface/hf-speech-bench/README.md b/spaces/huggingface/hf-speech-bench/README.md deleted file mode 100644 index 1a3ea79d7ba62ce607648439318d80f0a6c72d1a..0000000000000000000000000000000000000000 --- a/spaces/huggingface/hf-speech-bench/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: The 🤗 Speech Bench -emoji: 📈 -colorFrom: red -colorTo: yellow -sdk: streamlit -app_file: app.py -pinned: false -license: apache-2.0 ---- - - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/hugginglearners/pokemon-card-checker/app.py b/spaces/hugginglearners/pokemon-card-checker/app.py deleted file mode 100644 index 232245fe7eaa10598ffe634bfeb70d2ca4e0d847..0000000000000000000000000000000000000000 --- a/spaces/hugginglearners/pokemon-card-checker/app.py +++ /dev/null @@ -1,55 +0,0 @@ -import numpy as np -import gradio as gr -from huggingface_hub import from_pretrained_fastai -from lime import lime_image -from skimage.segmentation import mark_boundaries - -learn = from_pretrained_fastai('hugginglearners/pokemon-card-checker') - -def check_card(img): - pred_label, _, scores = learn.predict(img) - scores = scores.detach().numpy() - scores = {'real': float(scores[1]), 'fake': float(scores[0])} - - print(np.array(img).shape) - - # Lime Explanation - explainer = lime_image.LimeImageExplainer() - explanation = explainer.explain_instance( - np.array(img), - classifier_fn=classify_cards, - labels=['0', '1'], - num_samples=1000, - random_seed=42, - ) - - temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False, num_features=10, hide_rest=False) - img_boundry = mark_boundaries(temp/255.0, mask) - return scores, img_boundry - -def classify_cards(imgs): - print(imgs.shape) - scores = [] - - for i in range(imgs.shape[0]): - pred_label, _, score = learn.predict(imgs[i]) - scores.append(score.detach().numpy()) - - scores = np.array(scores) - print(scores.shape) - - return scores - - -demo = gr.Interface( - fn=check_card, - inputs='image', - outputs=["label", 
"image"], - examples=['real-1.jpeg','real-2.jpeg','fake-1.jpeg','fake-2.jpeg','real-3.jpeg','real-4.jpeg','fake-3.jpeg','fake-4.jpeg'], - title='Pokemon Card Checker', - description='This space uses a resnet34 model fine-tuned to determine whether Pokemon cards are real or fake. \n\nAdded [LIME](https://github.com/marcotcr/lime) to show what contributed to the predicted label (green shows what contributed towards that label and red shows what contributed against the label predicted).\n\n[Dataset](https://www.kaggle.com/datasets/ongshujian/real-and-fake-pokemon-cards) created by [Shujian Ong](https://www.kaggle.com/ongshujian).', - article='Can you guess which cards are real and fake? \n\nI can\'t 🤔 \n\n([View Labels](https://gist.github.com/mindwrapped/e5aad747757ef006037a1a1982be34fc)) \n\nSpace and model by Scott Krstyen (mindwrapped) \n\n![visitor badge](https://visitor-badge.glitch.me/badge?page_id=hugginglearners.pokemon-card-checker-space)', - live=False, - ) - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/hunkim/es-gpt/app.py b/spaces/hunkim/es-gpt/app.py deleted file mode 100644 index bfe256af05ce10302312640b223edb91cd476754..0000000000000000000000000000000000000000 --- a/spaces/hunkim/es-gpt/app.py +++ /dev/null @@ -1,54 +0,0 @@ -from fastapi import FastAPI, HTTPException -from fastapi.responses import StreamingResponse, HTMLResponse -import json -from es_gpt import ESGPT -import asyncio - -# Create an instance of the ESGPT class -es = ESGPT(index_name="papers") - -# Create a FastAPI app -app = FastAPI() - -# Define the search route - - -@app.get("/", response_class=HTMLResponse) -async def read_index(): - with open("index.html", "r") as file: - html = file.read() - return html - - -@app.get("/search") -async def search(q: str): - # Perform a search for the query - results = es.search(q) - print(results) - - # Stream the search results to the client - async def stream_response(): - for hit in results: - # sleep(0.1) - await asyncio.sleep(0.1) - yield "data: " + json.dumps(hit) + "\n\n" - yield "[DONE]" - - return StreamingResponse(stream_response(), media_type="text/event-stream") - -# Define the summary route - - -@app.get("/summary") -async def summary(q: str): - # Perform a search for the query - results = es.search(q) - - # Generate summaries of the search results - resp = es.summarize(q, results) - - if resp.status_code != 200: - raise HTTPException(resp.status_code, resp.text) - - return StreamingResponse(resp.iter_content(1), - media_type="text/event-stream") diff --git a/spaces/hunkim/es-gpt/es_gpt.py b/spaces/hunkim/es-gpt/es_gpt.py deleted file mode 100644 index 40ac92f71962ef5e2c9baa893827f076b0fffacb..0000000000000000000000000000000000000000 --- a/spaces/hunkim/es-gpt/es_gpt.py +++ /dev/null @@ -1,75 +0,0 @@ -from elasticsearch import Elasticsearch -import os -import json -import requests - -ES_URL = os.environ["ES_URL"] -ES_USER = os.environ["ES_USER"] -ES_PASS = os.environ["ES_PASS"] -ES_CA_CERT = os.environ["ES_CA_CERT"] - - -class ESGPT: - def __init__(self, index_name): - self.es = Elasticsearch(ES_URL, http_auth=(ES_USER, ES_PASS), - ca_certs=ES_CA_CERT, verify_certs=True) - self.index_name = index_name - self.model_engine = os.environ["OPENAI_GPT_ENGINE"] - self.api_key = os.environ["OPENAI_API_KEY"] - - def index(self, doc_id, doc): - self.es.index(index=self.index_name, - id=doc_id, - document=doc) - - def search(self, query): - body = { - "query": { - "query_string": {"query": query} - } - } - - results = 
self.es.search(index=self.index_name, body=body) - return results['hits']['hits'] - - def _paper_results_to_text(self, results): - text_result = "" - for paper in results: - title = "" - if "title" in paper["_source"]: - title = paper["_source"]["title"] - - abstract = "" - if "abstract" in paper["_source"]: - abstract = paper["_source"]["abstract"] - - paper_str = f"{title}:\n{abstract[:100]}\n\n" - text_result += paper_str - return text_result - - def summarize(self, query, results): - # Generate summaries for each search result - result_json_str = self._paper_results_to_text(results) - if result_json_str == "": - result_json_str = "No results found" - - print(result_json_str[:500]) - - body = { - "model": self.model_engine, - "prompt": f"Please summarize the following search results for query: {query}:\n{result_json_str[:1000]}", - "max_tokens": 1000, - "n": 1, - "stop": None, - "temperature": 0.5, - "stream": True, - } - - headers = {"Content-Type": "application/json", - "Authorization": f"Bearer {self.api_key}"} - - resp = requests.post("https://api.openai.com/v1/completions", - headers=headers, - data=json.dumps(body), - stream=True) - return resp diff --git a/spaces/imjunaidafzal/LoRA-DreamBooth-Training-UI/train_dreambooth_lora.py b/spaces/imjunaidafzal/LoRA-DreamBooth-Training-UI/train_dreambooth_lora.py deleted file mode 100644 index 93d429590ca4f357aff07989965b673bdf1e50fe..0000000000000000000000000000000000000000 --- a/spaces/imjunaidafzal/LoRA-DreamBooth-Training-UI/train_dreambooth_lora.py +++ /dev/null @@ -1,1026 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# -# This file is adapted from https://github.com/huggingface/diffusers/blob/febaf863026bd014b7a14349336544fc109d0f57/examples/dreambooth/train_dreambooth_lora.py -# The original license is as below: -# -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and - -import argparse -import hashlib -import logging -import math -import os -import warnings -from pathlib import Path -from typing import Optional - -import numpy as np -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from torch.utils.data import Dataset - -import datasets -import diffusers -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from diffusers import ( - AutoencoderKL, - DDPMScheduler, - DiffusionPipeline, - DPMSolverMultistepScheduler, - UNet2DConditionModel, -) -from diffusers.loaders import AttnProcsLayers -from diffusers.models.cross_attention import LoRACrossAttnProcessor -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available -from huggingface_hub import HfFolder, Repository, create_repo, whoami -from PIL import Image -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import AutoTokenizer, PretrainedConfig - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.12.0.dev0") - -logger = get_logger(__name__) - - -def save_model_card(repo_name, images=None, base_model=str, prompt=str, repo_folder=None): - img_str = "" - for i, image in enumerate(images): - image.save(os.path.join(repo_folder, f"image_{i}.png")) - img_str += f"![img_{i}](./image_{i}.png)\n" - - yaml = f""" ---- -license: creativeml-openrail-m -base_model: {base_model} -tags: -- stable-diffusion -- stable-diffusion-diffusers -- text-to-image -- diffusers -- lora -inference: true ---- - """ - model_card = f""" -# LoRA DreamBooth - {repo_name} - -These are LoRA adaption weights for {repo_name}. The weights were trained on {prompt} using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. 
\n -{img_str} -""" - with open(os.path.join(repo_folder, "README.md"), "w") as f: - f.write(yaml + model_card) - - -def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str, revision: str): - text_encoder_config = PretrainedConfig.from_pretrained( - pretrained_model_name_or_path, - subfolder="text_encoder", - revision=revision, - ) - model_class = text_encoder_config.architectures[0] - - if model_class == "CLIPTextModel": - from transformers import CLIPTextModel - - return CLIPTextModel - elif model_class == "RobertaSeriesModelWithTransformation": - from diffusers.pipelines.alt_diffusion.modeling_roberta_series import RobertaSeriesModelWithTransformation - - return RobertaSeriesModelWithTransformation - else: - raise ValueError(f"{model_class} is not supported.") - - -def parse_args(input_args=None): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - required=True, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default=None, - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--validation_prompt", - type=str, - default=None, - help="A prompt that is used during validation to verify that the model is learning.", - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_epochs", - type=int, - default=50, - help=( - "Run dreambooth validation every X epochs. Dreambooth validation consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`." - ), - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If there are not enough images already present in" - " class_data_dir, additional images will be sampled with class_prompt." 
- ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="lora-dreambooth-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final" - " checkpoints in case they are better than the last checkpoint, and are also suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--lr_num_cycles", - type=int, - default=1, - help="Number of hard resets of the lr in cosine_with_restarts scheduler.", - ) - parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.") - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." 
- ), - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--prior_generation_precision", - type=str, - default=None, - choices=["no", "fp32", "fp16", "bf16"], - help=( - "Choose prior generation precision between fp32, fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to fp16 if a GPU is available else fp32." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." 
- ) - - if input_args is not None: - args = parser.parse_args(input_args) - else: - args = parser.parse_args() - - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.with_prior_preservation: - if args.class_data_dir is None: - raise ValueError("You must specify a data directory for class images.") - if args.class_prompt is None: - raise ValueError("You must specify prompt for class images.") - else: - # logger is not available yet - if args.class_data_dir is not None: - warnings.warn("You need not use --class_data_dir without --with_prior_preservation.") - if args.class_prompt is not None: - warnings.warn("You need not use --class_prompt without --with_prior_preservation.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. - """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - instance_image = Image.open(self.instance_images_path[index % self.num_instance_images]) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - self.instance_prompt, - truncation=True, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - truncation=True, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - return example - - -def collate_fn(examples, with_prior_preservation=False): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class 
and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = torch.cat(input_ids, dim=0) - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." - - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - - -def main(args): - logging_dir = Path(args.output_dir, args.logging_dir) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - logging_dir=logging_dir, - ) - - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Generate class images if prior preservation is enabled. 
- if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - if args.prior_generation_precision == "fp32": - torch_dtype = torch.float32 - elif args.prior_generation_precision == "fp16": - torch_dtype = torch.float16 - elif args.prior_generation_precision == "bf16": - torch_dtype = torch.bfloat16 - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - torch_dtype=torch_dtype, - safety_checker=None, - revision=args.revision, - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - hash_image = hashlib.sha1(image.tobytes()).hexdigest() - image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" - image.save(image_filename) - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - - create_repo(repo_name, exist_ok=True, token=args.hub_token) - repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) - elif args.pretrained_model_name_or_path: - tokenizer = AutoTokenizer.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="tokenizer", - revision=args.revision, - use_fast=False, - ) - - # import correct text encoder class - text_encoder_cls = import_model_class_from_model_name_or_path(args.pretrained_model_name_or_path, args.revision) - - # Load scheduler and models - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - text_encoder = text_encoder_cls.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - - # We only train the additional adapter LoRA layers - vae.requires_grad_(False) - text_encoder.requires_grad_(False) - unet.requires_grad_(False) - - # 
For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move unet, vae and text_encoder to device and cast to weight_dtype - unet.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - text_encoder.to(accelerator.device, dtype=weight_dtype) - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - # now we will add new LoRA weights to the attention layers - # It's important to realize here how many attention weights will be added and of which sizes - # The sizes of the attention layers consist only of two different variables: - # 1) - the "hidden_size", which is increased according to `unet.config.block_out_channels`. - # 2) - the "cross attention size", which is set to `unet.config.cross_attention_dim`. - - # Let's first see how many attention processors we will have to set. - # For Stable Diffusion, it should be equal to: - # - down blocks (2x attention layers) * (2x transformer layers) * (3x down blocks) = 12 - # - mid blocks (2x attention layers) * (1x transformer layers) * (1x mid blocks) = 2 - # - up blocks (2x attention layers) * (3x transformer layers) * (3x down blocks) = 18 - # => 32 layers - - # Set correct lora layers - lora_attn_procs = {} - for name in unet.attn_processors.keys(): - cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim - if name.startswith("mid_block"): - hidden_size = unet.config.block_out_channels[-1] - elif name.startswith("up_blocks"): - block_id = int(name[len("up_blocks.")]) - hidden_size = list(reversed(unet.config.block_out_channels))[block_id] - elif name.startswith("down_blocks"): - block_id = int(name[len("down_blocks.")]) - hidden_size = unet.config.block_out_channels[block_id] - - lora_attn_procs[name] = LoRACrossAttnProcessor( - hidden_size=hidden_size, cross_attention_dim=cross_attention_dim - ) - - unet.set_attn_processor(lora_attn_procs) - lora_layers = AttnProcsLayers(unet.attn_processors) - - accelerator.register_for_checkpointing(lora_layers) - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." 
- ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - # Optimizer creation - optimizer = optimizer_class( - lora_layers.parameters(), - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Dataset and DataLoaders creation: - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - ) - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - batch_size=args.train_batch_size, - shuffle=True, - collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), - num_workers=args.dataloader_num_workers, - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - num_cycles=args.lr_num_cycles, - power=args.lr_power, - ) - - # Prepare everything with our `accelerator`. - lora_layers, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - lora_layers, optimizer, train_dataloader, lr_scheduler - ) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth-lora", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. 
parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the mos recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample() - latents = latents * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. 
- model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. - loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = lora_layers.parameters() - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - if args.validation_prompt is not None and epoch % args.validation_epochs == 0: - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." - ) - # create pipeline - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) - prompt = args.num_validation_images * [args.validation_prompt] - images = pipeline(prompt, num_inference_steps=25, generator=generator).images - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - del pipeline - torch.cuda.empty_cache() - - # Save the lora layers - accelerator.wait_for_everyone() - if accelerator.is_main_process: - unet = unet.to(torch.float32) - unet.save_attn_procs(args.output_dir) - - # Final inference - # Load previous pipeline - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, revision=args.revision, torch_dtype=weight_dtype - ) - pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) - pipeline = pipeline.to(accelerator.device) - - # load attention processors - pipeline.unet.load_attn_procs(args.output_dir) - - # run inference - if args.validation_prompt and args.num_validation_images > 0: - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if args.seed else None - prompt = args.num_validation_images * [args.validation_prompt] - images = 
pipeline(prompt, num_inference_steps=25, generator=generator).images - - test_image_dir = Path(args.output_dir) / 'test_images' - test_image_dir.mkdir() - for i, image in enumerate(images): - out_path = test_image_dir / f'image_{i}.png' - image.save(out_path) - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("test", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "test": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - if args.push_to_hub: - save_model_card( - repo_name, - images=images, - base_model=args.pretrained_model_name_or_path, - prompt=args.instance_prompt, - repo_folder=args.output_dir, - ) - repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True) - - accelerator.end_training() - - -if __name__ == "__main__": - args = parse_args() - main(args) diff --git a/spaces/inamXcontru/PoeticTTS/Avast Antivirus Serial Key Facebook The Easiest Way to Install and Update Your Antivirus.md b/spaces/inamXcontru/PoeticTTS/Avast Antivirus Serial Key Facebook The Easiest Way to Install and Update Your Antivirus.md deleted file mode 100644 index 2e1a469989ed07cfd337412b1959cb19e314291c..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Avast Antivirus Serial Key Facebook The Easiest Way to Install and Update Your Antivirus.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

    Avast Premier 2023 is among the most widely used security application that takes PC antivirus security to the ultimate level. This antivirus gives you complete computer protection against all of the threats and regular security measures, together with a file shredder and secure browser. You can use our keys to activate the full Avast software on your PC, mobile, android & iOS. So read this article carefully and you will get an avast key for free.

    -

    These avast keys can be used by anyone for free. You have to just copy the activation codes and use them in the Avast antivirus software. The software will be fully activated and you can use all the premium features.

    -

    Avast Antivirus Serial Key Facebook


    Download ->>> https://gohhs.com/2uz4ms



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/inamXcontru/PoeticTTS/Comback 7 0 Ir Pro How Philippe Starck Reinvented the Comback Chair.md b/spaces/inamXcontru/PoeticTTS/Comback 7 0 Ir Pro How Philippe Starck Reinvented the Comback Chair.md deleted file mode 100644 index 88638101dbb0ce56535422361a1b26754c15448a..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Comback 7 0 Ir Pro How Philippe Starck Reinvented the Comback Chair.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Comback 7 0 Ir 40


    Download File ••• https://gohhs.com/2uz5tH



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/inamXcontru/PoeticTTS/Despedida Maria Grever Partitura Pdf 22.md b/spaces/inamXcontru/PoeticTTS/Despedida Maria Grever Partitura Pdf 22.md deleted file mode 100644 index 7150f6919619c5518e43c5b313786f09b0e10827..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Despedida Maria Grever Partitura Pdf 22.md +++ /dev/null @@ -1,68 +0,0 @@ -## Despedida Maria Grever Partitura Pdf 22 - - - - - - - - - -**LINK ===> [https://www.google.com/url?q=https%3A%2F%2Furlca.com%2F2txTed&sa=D&sntz=1&usg=AOvVaw0byKEn3rcy6jr4WlbdPRRb](https://www.google.com/url?q=https%3A%2F%2Furlca.com%2F2txTed&sa=D&sntz=1&usg=AOvVaw0byKEn3rcy6jr4WlbdPRRb)** - - - - - - - - - - - - - -# Despedida: A Beautiful Song by Maria Grever - - - -Maria Grever was a Mexican composer and songwriter who wrote more than 800 songs in her lifetime. She is considered one of the most important and influential Latin American composers of the 20th century. One of her most famous songs is Despedida, which means "farewell" in Spanish. - - - -Despedida is a romantic ballad that expresses the sadness of parting from a loved one. The lyrics are full of emotion and nostalgia, as the singer recalls the happy moments they shared and hopes to see them again someday. The melody is simple but elegant, with a smooth and flowing rhythm. - - - -The song was written in 1934 and has been recorded by many artists, such as Placido Domingo, Andrea Bocelli, Luis Miguel, and Nat King Cole. It has also been adapted into different languages, such as English, French, Italian, and Portuguese. - - - -If you want to play Despedida on your instrument, you can find the sheet music online in PDF format. There are different versions available for piano, guitar, violin, cello, flute, and more. You can also listen to the song on YouTube or Spotify to get inspired by the beautiful performance of Maria Grever and other singers. - - - -Despedida is a song that will touch your heart and soul with its poetic lyrics and graceful melody. It is a perfect example of Maria Grever's talent and legacy as a composer and songwriter. If you love music and romance, you should definitely listen to Despedida and learn more about Maria Grever's life and work. - - - -If you are interested in learning more about Maria Grever and her music, you can visit the website of the Maria Grever Foundation, which is dedicated to preserving and promoting her legacy. The website contains biographical information, photos, videos, audio clips, and news about events and projects related to Maria Grever. You can also find links to other resources, such as books, articles, documentaries, and podcasts. - - - -Another way to appreciate Maria Grever's music is to attend a concert or a musical show that features her songs. There are many artists and groups that perform her music regularly, such as the Maria Grever Orchestra, the Maria Grever Quartet, and the Maria Grever Ensemble. You can check their schedules and venues online or follow them on social media. - - - -Maria Grever was a remarkable woman who overcame many challenges and obstacles in her life. She was born in Mexico in 1894 and moved to New York in 1916. She faced discrimination and prejudice as a female and a Latino composer in a male-dominated industry. She also suffered from health problems and personal tragedies. Despite all this, she never gave up on her passion and dream of creating music. 
She became one of the most successful and respected composers of her time, earning recognition and awards from around the world. - - - -Despedida is just one of the many gems that Maria Grever left for us to enjoy. It is a song that transcends time and space, and connects us with our feelings and memories. It is a song that celebrates love and life, even in the face of loss and sorrow. It is a song that reminds us of the beauty and power of music. - - dfd1c89656 - - - - - diff --git a/spaces/innnky/soft-vits-vc/modules.py b/spaces/innnky/soft-vits-vc/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/innnky/soft-vits-vc/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/innnky/visinger2-nomidi/modules/ddsp.py b/spaces/innnky/visinger2-nomidi/modules/ddsp.py deleted file mode 100644 index 7ffd7bb3d3f75ca963fa9795628ada54ec70f909..0000000000000000000000000000000000000000 --- a/spaces/innnky/visinger2-nomidi/modules/ddsp.py +++ /dev/null @@ -1,189 +0,0 @@ -import torch -import torch.nn as nn -from torch.nn import functional as F -import torch.fft as fft -import numpy as np -import librosa as li -import math -from scipy.signal import get_window - -def safe_log(x): - return torch.log(x + 1e-7) - - -@torch.no_grad() -def mean_std_loudness(dataset): - mean = 0 - std = 0 - n = 0 - for _, _, l in dataset: - n += 1 - mean += (l.mean().item() - mean) / n - std += (l.std().item() - std) / n - return mean, std - - -def multiscale_fft(signal, scales, overlap): - stfts = [] - for s in scales: - S = torch.stft( - signal, - s, - int(s * (1 - overlap)), - s, - torch.hann_window(s).to(signal), - True, - normalized=True, - return_complex=True, - ).abs() - stfts.append(S) - return stfts - - -def resample(x, factor: int): - batch, frame, channel = x.shape - x = x.permute(0, 2, 1).reshape(batch * channel, 1, frame) - - window = torch.hann_window( - factor * 2, - dtype=x.dtype, - device=x.device, - ).reshape(1, 1, -1) - y = torch.zeros(x.shape[0], x.shape[1], factor * x.shape[2]).to(x) - y[..., ::factor] = x - y[..., -1:] = x[..., -1:] - y = torch.nn.functional.pad(y, [factor, factor]) - y = torch.nn.functional.conv1d(y, window)[..., :-1] - - y = y.reshape(batch, channel, factor * frame).permute(0, 2, 1) - - return y - - -def upsample(signal, factor): - signal = signal.permute(0, 2, 1) - signal = nn.functional.interpolate(signal, size=signal.shape[-1] * factor) - return signal.permute(0, 2, 1) - - -def remove_above_nyquist(amplitudes, pitch, sampling_rate): - n_harm = amplitudes.shape[-1] - pitches = pitch * torch.arange(1, n_harm + 1).to(pitch) - aa = (pitches < sampling_rate / 2).float() + 1e-4 - return amplitudes * aa - - -def scale_function(x): - return 2 * torch.sigmoid(x)**(math.log(10)) + 1e-7 - - -def extract_loudness(signal, sampling_rate, block_size, n_fft=2048): - S = li.stft( - signal, - n_fft=n_fft, - hop_length=block_size, - win_length=n_fft, - center=True, - ) - S = np.log(abs(S) + 1e-7) - f = li.fft_frequencies(sampling_rate, n_fft) - a_weight = li.A_weighting(f) - - S = S + a_weight.reshape(-1, 1) - - S = np.mean(S, 0)[..., :-1] - - return S - - -def extract_pitch(signal, sampling_rate, block_size): - length = signal.shape[-1] // block_size - f0 = crepe.predict( - signal, - sampling_rate, - step_size=int(1000 * block_size / sampling_rate), - verbose=1, - center=True, - viterbi=True, - ) - f0 = f0[1].reshape(-1)[:-1] - - if f0.shape[-1] != length: - f0 = np.interp( - np.linspace(0, 1, length, endpoint=False), - np.linspace(0, 1, f0.shape[-1], endpoint=False), - f0, - ) - - return f0 - - -def mlp(in_size, hidden_size, n_layers): - channels = [in_size] + (n_layers) * 
[hidden_size] - net = [] - for i in range(n_layers): - net.append(nn.Linear(channels[i], channels[i + 1])) - net.append(nn.LayerNorm(channels[i + 1])) - net.append(nn.LeakyReLU()) - return nn.Sequential(*net) - - -def gru(n_input, hidden_size): - return nn.GRU(n_input * hidden_size, hidden_size, batch_first=True) - - -def harmonic_synth(pitch, amplitudes, sampling_rate): - n_harmonic = amplitudes.shape[-1] - omega = torch.cumsum(2 * math.pi * pitch / sampling_rate, 1) - omegas = omega * torch.arange(1, n_harmonic + 1).to(omega) - signal = (torch.sin(omegas) * amplitudes).sum(-1, keepdim=True) - return signal - - -def amp_to_impulse_response(amp, target_size): - amp = torch.stack([amp, torch.zeros_like(amp)], -1) - amp = torch.view_as_complex(amp) - amp = fft.irfft(amp) - - filter_size = amp.shape[-1] - - amp = torch.roll(amp, filter_size // 2, -1) - win = torch.hann_window(filter_size, dtype=amp.dtype, device=amp.device) - - amp = amp * win - - amp = nn.functional.pad(amp, (0, int(target_size) - int(filter_size))) - amp = torch.roll(amp, -filter_size // 2, -1) - - return amp - - -def fft_convolve(signal, kernel): - signal = nn.functional.pad(signal, (0, signal.shape[-1])) - kernel = nn.functional.pad(kernel, (kernel.shape[-1], 0)) - - output = fft.irfft(fft.rfft(signal) * fft.rfft(kernel)) - output = output[..., output.shape[-1] // 2:] - - return output - - -def init_kernels(win_len, win_inc, fft_len, win_type=None, invers=False): - if win_type == 'None' or win_type is None: - window = np.ones(win_len) - else: - window = get_window(win_type, win_len, fftbins=True)#**0.5 - - N = fft_len - fourier_basis = np.fft.rfft(np.eye(N))[:win_len] - real_kernel = np.real(fourier_basis) - imag_kernel = np.imag(fourier_basis) - kernel = np.concatenate([real_kernel, imag_kernel], 1).T - - if invers : - kernel = np.linalg.pinv(kernel).T - - kernel = kernel*window - kernel = kernel[:, None, :] - return torch.from_numpy(kernel.astype(np.float32)), torch.from_numpy(window[None,:,None].astype(np.float32)) - diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Ayodance Offline Free Download Full 31 PORTABLE.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Ayodance Offline Free Download Full 31 PORTABLE.md deleted file mode 100644 index e5208959016c9b21b0eaa84c4e8039216e551b11..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Ayodance Offline Free Download Full 31 PORTABLE.md +++ /dev/null @@ -1,25 +0,0 @@ -
    -

    How to Download and Play Ayodance Offline Full 31 for Free

    -

    Ayodance is a popular online dance game that lets you show off your moves and compete with other players. But what if you want to play it offline, without an internet connection? Or what if you want to enjoy the full version of the game, with all the songs and features, without paying anything?

    -

    ayodance offline free download full 31


    Download Ziphttps://urlin.us/2uEw0T



    -

    Fortunately, there is a way to do that. You can download and play Ayodance offline full 31 for free, using a simple method that we will explain in this article. Here are the steps you need to follow:

    -
      -
    1. Download the Ayodance offline full 31 installer from this link: https://soundcloud.com/scuracyimse/ayodance-offline-free-download-full-31-link. This is a trusted source that has been verified by many users[^1^] [^2^]. The file size is about 2.5 GB, so make sure you have enough space on your device.
    2. -
    3. Run the installer and follow the instructions on the screen. You will need to choose a destination folder for the game files and agree to the terms and conditions. The installation process may take some time, depending on your device's performance.
    4. -
    5. Once the installation is complete, you can launch the game from the shortcut on your desktop or from the start menu. You will see a login screen where you can enter your username and password. If you don't have an account yet, you can create one for free by clicking on the register button.
    6. -
    7. After logging in, you can choose your character, customize your appearance, select your mode and difficulty level, and start dancing. You can play solo or with friends using a local network connection. You can also access all the songs and features of the game, such as the shop, the club, the chat, and more.
    8. -
    -

    That's it! You have successfully downloaded and played Ayodance offline full 31 for free. Enjoy the game and have fun!

    - -

    Why should you play Ayodance offline full 31 for free? There are many benefits of playing this game offline, such as:

    -

    -
      -
    • You can play anytime and anywhere, without worrying about internet connection or data usage.
    • -
    • You can enjoy the full version of the game, with all the songs and features, without spending any money.
    • -
    • You can practice your skills and improve your performance, without being distracted by other players or chat messages.
    • -
    • You can have fun with your friends using a local network connection, and challenge each other to dance battles.
    • -
    -

    Ayodance offline full 31 is a great way to experience the thrill and excitement of dancing, without any limitations or costs. You can choose from hundreds of songs, from various genres and artists, and dance to the rhythm of the music. You can also customize your character, with different outfits, accessories, hairstyles, and more. You can join a club, make friends, chat with other players, and participate in events and competitions.

    -

    If you love dancing and music, you will love Ayodance offline full 31. Download it now and start playing for free!

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Aap Mujhe Achche Lagne Lage English Dubbed Torrent Download.md b/spaces/inreVtussa/clothingai/Examples/Aap Mujhe Achche Lagne Lage English Dubbed Torrent Download.md deleted file mode 100644 index edaef3edc220f45e1618b42b891cb71ff3dfe5cd..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Aap Mujhe Achche Lagne Lage English Dubbed Torrent Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Aap Mujhe Achche Lagne Lage English Dubbed Torrent Download


    Download Ziphttps://tiurll.com/2uCijL



    -
    -Ek Hindustani The Movie English Sub 1080p Torrent raja hindustani ... Torrent > DOWNLOAD. fd3bc05f4a Aap Mujhe Achche Lagne Lage . 1fdad05405
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Adobe Photoshop Cc 14.0 Final Multilanguage Keygen WORK.md b/spaces/inreVtussa/clothingai/Examples/Adobe Photoshop Cc 14.0 Final Multilanguage Keygen WORK.md deleted file mode 100644 index 3c7fe1bb726a8251ccd78713da5383ec04c43f73..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Adobe Photoshop Cc 14.0 Final Multilanguage Keygen WORK.md +++ /dev/null @@ -1,6 +0,0 @@ - -

    The Adobe Photoshop CC 2022 continuous number gives you a full scale carving to your most shocking points of view. Photoshop cc 2022 crack Mac can do everything from changing photographs and compositing to a mechanized gem, works out, and a graphical game-plan. Adobe photoshop cc full cracked takes your creative cerebrum to a more basic level. You can change your standard photographs into staggering pictures. There are indisputable expert photography contraptions that you need to chip away at your photographs. Within of the scenario of getting injured down by the utilize of cyber-terrorist Knowing of unsophisticated Adobe Photoshop CC Serial key 2022 is sometimes vital in the direction of applicants even if it may be not fairly essential. You are able to design cards for wrapping, basic ads for gorgeous sites, and unforgettable logos for intriguing symbols. Adobe Photoshop CC Latest 2022 is probably the most superior photo publisher.

    -

    adobe photoshop cc 14.0 final multilanguage keygen


    Download Zip ○○○ https://tiurll.com/2uCirZ



    -

    Adobe Photoshop CC Latest 2018 can be a software that makes it easier to modify images. You can enhance photographs with its unified tools and apply over 50 new filters. You can easily alter colors and levels, do the brush work, and change photos into stunning visuals. Photoshop is a well-known application used to craft things. This application is used by many photographers. Moreover, there are more than 10 million customers who are using Adobe Photoshop. They use this application to edit images. You will find a large collection of templates in this application. You can easily customize your application. Also, it is compatible with both Windows and Mac OS.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Crack For DacEasy Accounting V15 27 !!TOP!!.md b/spaces/inreVtussa/clothingai/Examples/Crack For DacEasy Accounting V15 27 !!TOP!!.md deleted file mode 100644 index 1086a23a630443be66d0371f8147e7bee2d0c241..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Crack For DacEasy Accounting V15 27 !!TOP!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Crack for DacEasy Accounting v15 27


    Download File ✏ ✏ ✏ https://tiurll.com/2uCmcN



    -
    -SSP 12: 5676 5034 7681 8288 2379 1238 3252 9988 8641 9306 27. Delta Force: Black ... Roxio Video Wave Movie Creator v1.5 Serial : 1T-1BS52-ZS6ZL-9J82T Macromedia. ... Peachtree Comp. accounting 7.0 - SN: 17023756 / u.code: 717553424. PHOTO ... Daceasy EIS v5.0 for Windows : EI50X101175 Daceasy for ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Crack UPDATED Para Activar Insight 2016 32.md b/spaces/inreVtussa/clothingai/Examples/Crack UPDATED Para Activar Insight 2016 32.md deleted file mode 100644 index 82d6151df296f17616e5ea28d5cea5753c2559e1..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Crack UPDATED Para Activar Insight 2016 32.md +++ /dev/null @@ -1,9 +0,0 @@ - -

    Formerly, there were two separate email services, Hotmail and Live, available to Windows Live Mail users. However, users can now consolidate their email to Outlook, including live service accounts. Windows Live Mail is included in Windows Vista and later operating systems, whereas Hotmail is available only for Windows XP.

    -

    Crack Para Activar Insight 2016 32


    DOWNLOAD ✶✶✶ https://tiurll.com/2uClHE



    -

Social media users know that Facebook is using other companies to expand its power and strengthen its social network. Misunderstandings on this point have discredited users who believe that this is not the case.

    -

In order to be considered for expedited shipping, customers will need to create a support ticket and request it. In the support ticket, customers must include their order number and how soon they need their package (minimum of 3 business days for US orders and 5 business days for non-US orders). If expedited shipping is approved, EMOTIV will send a separate PayPal invoice for the expedited shipping, and any expedited shipping fees must be paid prior to shipment.

    -

A Linux distribution for professionals who want a customized system for daily use, with support for thousands of applications and access to a huge range of community content. It takes a no-frills approach to setting up and maintaining a computer that Microsoft would rather you forget about, but users never will. Aimed at Windows 7 users, this course covers how to install, configure, secure, and back up your system for daily use.

    Unlike other new systems at Insights 2016, this course is only $55.00.

    -

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Cummings Otolaryngology 6th Edition Pdf Free Download NEW.md b/spaces/inreVtussa/clothingai/Examples/Cummings Otolaryngology 6th Edition Pdf Free Download NEW.md deleted file mode 100644 index 3bb2a3654f26b8102ef76c5894a41efcfdfad7dc..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Cummings Otolaryngology 6th Edition Pdf Free Download NEW.md +++ /dev/null @@ -1,89 +0,0 @@ -
    -

    How to Download Cummings Otolaryngology 6th Edition PDF for Free

    -

    Cummings Otolaryngology is a renowned textbook that covers all aspects of head and neck surgery. It is written by leading experts in the field and provides comprehensive and up-to-date information on the latest techniques and technologies. If you are looking for a reliable source of guidance on this topic, you should consider downloading Cummings Otolaryngology 6th Edition PDF for free.

    -

    cummings otolaryngology 6th edition pdf free download


    Download ····· https://tiurll.com/2uCkXx



    -

    In this article, we will explain what Cummings Otolaryngology is, what are the features of the 6th edition, why you should download it for free, and how you can do it. We will also provide you with some tips on how to use the book effectively.

    -

    What is Cummings Otolaryngology?

    -

    Cummings Otolaryngology is a three-volume set that covers all facets of head and neck surgery, including otology, neurotology, skull base surgery, rhinology, allergy, facial plastic and reconstructive surgery, laryngology, head and neck oncology, pediatric otolaryngology, sleep medicine, and endocrine surgery. It also provides comprehensive coverage of basic science, anatomy, physiology, pathology, pharmacology, and radiology.

    -

    The book is named after Charles W. Cummings, who was the editor-in-chief of the first four editions. The current editor-in-chief is Paul W. Flint, who is joined by six other editors and more than 300 contributors. The book has been published since 1973 and has been updated regularly to reflect the latest advances and innovations in the field.

    -

    The book is designed to help you overcome virtually any clinical challenge with detailed, expert coverage of every area of head and neck surgery. It also helps you experience clinical scenarios with vivid clarity through a heavily illustrated, full-color format that includes images and videos. It also helps you get diverse perspectives and worldwide best practices from a multi-disciplinary team of contributors and editors comprised of the world’s leading experts.

    -

    What are the features of the 6th edition?

    -

    The 6th edition of Cummings Otolaryngology was published in 2015 and has been updated with the latest developments and innovations in the field. Some of the features of this edition are:

    -

    -
      -
    • More than 3,600 pages of content, with over 3,200 full-color images and over 40 high-quality procedural videos.
    • -
    • A streamlined format, with reorganized chapters and a color design that facilitates reference.
    • -
    • New chapters on topics such as pediatric sleep disorders, pediatric infectious disease, evaluation and management of the infant airway, vestibular implants and vestibular management involving intratympanic and physical therapy-based approaches, radiosurgical treatment of posterior fossa and skull base neoplasms, intraoperative monitoring of cranial nerve and CNS function, and more.
    • -
    • Updated information on minimally invasive surgical approaches to the entire skull base, endoscopic techniques for sinonasal and anterior skull base tumors, new techniques for reconstruction of complex defects involving the scalp, skull base, cheek, nose, orbit, maxilla, mandible,and ear.
    • -
    • Evidence-based recommendations for management of many common disorders based on their genetic basis.
    • -
    • An assessment of the real-world effectiveness and costs associated with emergent technologies and surgical approaches introduced to OHNS over the past 10 years.
    • -
    -

    Why should you download Cummings Otolaryngology 6th Edition PDF for free?

    -

    There are many reasons why you should download Cummings Otolaryngology 6th Edition PDF for free. Here are some of them:

    -
      -
    • You will get access to the most comprehensive, multi-disciplinary text in the field of head and neck surgery.
    • -
    • You will learn from the world's leading experts who share their insights and experiences on all aspects of clinical practice and research.
    • -
    • You will be able to apply the latest discoveries, techniques, and technologies that are shaping patient outcomes.
    • -
    • You will be able to overcome virtually any clinical challenge with detailed, expert coverage of every area of head and neck surgery.
    • -
    • You will be able to experience clinical scenarios with vivid clarity through a heavily illustrated, full-color format that includes images and videos.
    • -
    • You will be able to find what you need faster through a user-friendly format that expedites reference.
    • -
    • You will save money by downloading it for free instead of buying it from online or offline stores.
    • -
    -

    How can you download Cummings Otolaryngology 6th Edition PDF for free?

    -

    If you are interested in downloading Cummings Otolaryngology 6th Edition PDF for free, you can follow these simple steps:

    -
      -
    1. Click on the link below to go to the download page.
    2. -
    3. Enter your email address to receive a verification code.
    4. -
    5. Enter the verification code to unlock the download link.
    6. -
    7. Click on the download link to start downloading the file.
    8. -
    9. Enjoy reading Cummings Otolaryngology 6th Edition PDF on your device.
    10. -
    - -Download Cummings Otolaryngology 6th Edition PDF for Free - -

    We hope you found this article helpful and informative. If you have any questions or comments about Cummings Otolaryngology 6th Edition PDF or head and neck surgery in general, feel free to leave them below. We would love to hear from you!

    -

    How to use Cummings Otolaryngology 6th Edition PDF effectively?

    -

    Once you have downloaded Cummings Otolaryngology 6th Edition PDF for free, you may wonder how to use it effectively. Here are some tips to help you make the most of this book:

    -
      -
    • Use the table of contents and the index to find the topics that interest you or that you need to study.
    • -
    • Read the chapters that are relevant to your specialty or your clinical cases.
    • -
    • Watch the videos that demonstrate the surgical procedures and techniques.
    • -
    • Review the images and diagrams that illustrate the anatomy, pathology, and radiology.
    • -
    • Refer to the references and suggested readings for further information and research.
    • -
    • Test your knowledge and skills with the self-assessment questions and answers at the end of each chapter.
    • -
    -

    What are the benefits of Cummings Otolaryngology 6th Edition PDF for your career?

    -

    Cummings Otolaryngology 6th Edition PDF is not only a valuable source of information for your current practice, but also a great asset for your career development. Here are some of the benefits of this book for your career:

    -
      -
    • You will enhance your knowledge and skills in all areas of head and neck surgery.
    • -
    • You will stay updated with the latest developments and innovations in the field.
    • -
    • You will learn from the best practices and experiences of the world's leading experts.
    • -
    • You will improve your clinical outcomes and patient satisfaction.
    • -
    • You will increase your confidence and competence in performing complex and challenging procedures.
    • -
    • You will prepare yourself for board exams and certification.
    • -
    • You will advance your career opportunities and reputation in the field.
    • -
    - -

    We hope you enjoyed reading this article and learned more about Cummings Otolaryngology 6th Edition PDF. If you want to download this book for free, don't forget to click on the link below. And if you have any feedback or questions, please leave them in the comments section. We would love to hear from you!

    - -Download Cummings Otolaryngology 6th Edition PDF for Free -

    Where can you find Cummings Otolaryngology 6th Edition PDF for free?

    -

    There are many websites that claim to offer Cummings Otolaryngology 6th Edition PDF for free, but not all of them are reliable or safe. Some of them may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them may also require you to register, pay, or complete surveys before you can access the file.

    -

    That is why we recommend you to use our website, which is a trusted and secure source of free medical books. We have a large collection of books in various specialties and formats, including PDF, EPUB, MOBI, and AZW3. You can download any book you want without any hassle or risk.

    -

    Our website is easy to use and navigate. You can search for the book you want by title, author, ISBN, or keyword. You can also browse through our categories and subcategories to find the book you need. You can also check out our featured books and bestsellers to discover new and popular books.

    -

    Once you find the book you want, you can simply click on the download button and enter your email address to receive a verification code. Then, you can enter the verification code and unlock the download link. You can then click on the download link and start downloading the file to your device.

    -

    You can also read the book online on our website if you prefer. You can zoom in and out, adjust the brightness, change the font size and color, bookmark pages, highlight text, and add notes. You can also share the book with your friends and colleagues via email or social media.


    What are the reviews of Cummings Otolaryngology 6th Edition PDF?


    Cummings Otolaryngology 6th Edition PDF has received many positive reviews from readers and critics alike. Here are some of the reviews that we have collected from various sources:

    • "This is an outstanding textbook that covers all aspects of head and neck surgery in great detail and with excellent illustrations. It is a must-have for anyone who practices or studies otolaryngology-head and neck surgery." - Amazon customer
    • "This is the most comprehensive and authoritative text on head and neck surgery available today. It is well-written, well-organized, well-illustrated, and well-referenced. It covers everything from basic science to clinical practice and research. It is a valuable resource for both beginners and experts in the field." - Doody's Review Service
    • "This is a superb book that provides a wealth of information on all facets of head and neck surgery. It is updated with the latest advances and innovations in the field. It is written by leading experts who share their insights and experiences on all aspects of clinical practice and research. It is a must-read for anyone who wants to excel in this field." - Goodreads user

    We hope you enjoyed reading this article and learned more about Cummings Otolaryngology 6th Edition PDF. If you want to download this book for free, don't forget to click on the link below. And if you have any feedback or questions, please leave them in the comments section. We would love to hear from you!

    - -Download Cummings Otolaryngology 6th Edition PDF for Free -

    In conclusion, Cummings Otolaryngology 6th Edition PDF is a comprehensive and authoritative text that covers all aspects of head and neck surgery. It is written by leading experts in the field and provides up-to-date information on the latest techniques and technologies. It is a valuable resource for anyone who is interested in or involved in head and neck surgery. You can download it for free from our website, which is a trusted and secure source of free medical books. You can also read it online on our website, which offers many features to enhance your reading experience. We hope you found this article helpful and informative. If you have any questions or comments about Cummings Otolaryngology 6th Edition PDF or head and neck surgery in general, feel free to leave them below. We would love to hear from you!

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Cycorefx Hd 1.7.1 Crack Cs4.md b/spaces/inreVtussa/clothingai/Examples/Cycorefx Hd 1.7.1 Crack Cs4.md deleted file mode 100644 index 28298ab47debe93760fb246995f755c34878f3e4..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Cycorefx Hd 1.7.1 Crack Cs4.md +++ /dev/null @@ -1,6 +0,0 @@ -

    cycorefx hd 1.7.1 crack cs4


    Download File ✶✶✶ https://tiurll.com/2uCjEz



    -
    -Download X-Force 2019 is the keygen that will... ... x-Force keygen v2 for ALL Autodesk products v2020. ... cycorefx hd 1.7.1 crack cs4 1fdad05405
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Descargar Crack Lex Doctor 8 ((FREE)).md b/spaces/inreVtussa/clothingai/Examples/Descargar Crack Lex Doctor 8 ((FREE)).md deleted file mode 100644 index 1852450295a3f4b935a0144b5b4a5d4bff753cdb..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Descargar Crack Lex Doctor 8 ((FREE)).md +++ /dev/null @@ -1,6 +0,0 @@ -

    Descargar Crack Lex Doctor 8


    Download Zip ••• https://tiurll.com/2uCjwS



    - - 3cee63e6c2
    -
    -
    -

    diff --git a/spaces/ioanniskarkanias/chatbot-with-sources/README.md b/spaces/ioanniskarkanias/chatbot-with-sources/README.md deleted file mode 100644 index b31e315c08041251f9ae6513e1db4d0695aaf357..0000000000000000000000000000000000000000 --- a/spaces/ioanniskarkanias/chatbot-with-sources/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chatbot With Sources -emoji: 🤖 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jacklindsai/is_it_elon_musk/README.md b/spaces/jacklindsai/is_it_elon_musk/README.md deleted file mode 100644 index 5bd955a61784dc5c69fe5b40ec41de37551d7aed..0000000000000000000000000000000000000000 --- a/spaces/jacklindsai/is_it_elon_musk/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Is_it_elon_musk -emoji: 🐦 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/jackyccl/segment-anything/segment_anything/utils/transforms.py b/spaces/jackyccl/segment-anything/segment_anything/utils/transforms.py deleted file mode 100644 index c08ba1e3db751f3a5483a003be38c69c2cf2df85..0000000000000000000000000000000000000000 --- a/spaces/jackyccl/segment-anything/segment_anything/utils/transforms.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from torch.nn import functional as F -from torchvision.transforms.functional import resize, to_pil_image # type: ignore - -from copy import deepcopy -from typing import Tuple - - -class ResizeLongestSide: - """ - Resizes images to the longest side 'target_length', as well as provides - methods for resizing coordinates and boxes. Provides methods for - transforming both numpy array and batched torch tensors. - """ - - def __init__(self, target_length: int) -> None: - self.target_length = target_length - - def apply_image(self, image: np.ndarray) -> np.ndarray: - """ - Expects a numpy array with shape HxWxC in uint8 format. - """ - target_size = self.get_preprocess_shape(image.shape[0], image.shape[1], self.target_length) - return np.array(resize(to_pil_image(image), target_size)) - - def apply_coords(self, coords: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array of length 2 in the final dimension. Requires the - original image size in (H, W) format. - """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape( - original_size[0], original_size[1], self.target_length - ) - coords = deepcopy(coords).astype(float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes(self, boxes: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array shape Bx4. Requires the original image size - in (H, W) format. - """ - boxes = self.apply_coords(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - def apply_image_torch(self, image: torch.Tensor) -> torch.Tensor: - """ - Expects batched images with shape BxCxHxW and float format. This - transformation may not exactly match apply_image. 
apply_image is - the transformation expected by the model. - """ - # Expects an image in BCHW format. May not exactly match apply_image. - target_size = self.get_preprocess_shape(image.shape[2], image.shape[3], self.target_length) - return F.interpolate( - image, target_size, mode="bilinear", align_corners=False, antialias=True - ) - - def apply_coords_torch( - self, coords: torch.Tensor, original_size: Tuple[int, ...] - ) -> torch.Tensor: - """ - Expects a torch tensor with length 2 in the last dimension. Requires the - original image size in (H, W) format. - """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape( - original_size[0], original_size[1], self.target_length - ) - coords = deepcopy(coords).to(torch.float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes_torch( - self, boxes: torch.Tensor, original_size: Tuple[int, ...] - ) -> torch.Tensor: - """ - Expects a torch tensor with shape Bx4. Requires the original image - size in (H, W) format. - """ - boxes = self.apply_coords_torch(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - @staticmethod - def get_preprocess_shape(oldh: int, oldw: int, long_side_length: int) -> Tuple[int, int]: - """ - Compute the output size given input size and target long side length. - """ - scale = long_side_length * 1.0 / max(oldh, oldw) - newh, neww = oldh * scale, oldw * scale - neww = int(neww + 0.5) - newh = int(newh + 0.5) - return (newh, neww) diff --git a/spaces/jbilcke-hf/VideoChain-UI/src/components/business/videos/video-actions.tsx b/spaces/jbilcke-hf/VideoChain-UI/src/components/business/videos/video-actions.tsx deleted file mode 100644 index e70045de88ddad8c9d201e4f50293a3adce19020..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoChain-UI/src/components/business/videos/video-actions.tsx +++ /dev/null @@ -1,52 +0,0 @@ -"use client" - -import { DotsHorizontalIcon } from "@radix-ui/react-icons" -import { Row } from "@tanstack/react-table" - -import { Button } from "@/components/ui/button" -import { - DropdownMenu, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuSeparator, - DropdownMenuShortcut, - DropdownMenuTrigger, -} from "@/components/ui/dropdown-menu" - -import { Video } from "@/app/types" - -export function VideoActions({ - row, -}: { - row: Row